Willard Van Orman Quine
Born: June 25, 1908, Akron, Ohio
Died: December 25, 2000 (aged 92), Boston, Massachusetts
Era: 20th-century philosophy
Region: Western philosophy
School: Analytic
Main interests: Logic, ontology, epistemology, philosophy of language, philosophy of mathematics, philosophy of science, set theory
Notable ideas: New Foundations, indeterminacy of translation, naturalized epistemology, ontological relativity, Quine's paradox, Duhem–Quine thesis, radical translation, confirmation holism, Quine–McCluskey algorithm
Willard Van Orman Quine (June 25, 1908 – December 25, 2000) (known to intimates as "Van")[1] was an American philosopher and logician in the analytic tradition. From 1930 until his death 70 years later, Quine was continually affiliated with Harvard University in one way or another, first as a student, then as a professor of philosophy and a teacher of logic and set theory, and finally as a professor emeritus who published or revised several books in retirement. He filled the Edgar Pierce Chair of Philosophy at Harvard from 1956 to 1978. A recent poll conducted among analytic philosophers named Quine as the fifth most important philosopher of the past two centuries.[2] He won the first Schock Prize in Logic and Philosophy in 1993, for "his systematical and penetrating discussions of how learning of language and communication are based on socially available evidence and of the consequences of this for theories on knowledge and linguistic meaning."[3] In 1996 he was awarded the Kyoto Prize in Arts and Philosophy for his "outstanding contributions to the progress of philosophy in the 20th century by proposing numerous theories based on keen insights in logic, epistemology, philosophy of science and philosophy of language."[4]
Quine falls squarely into the analytic philosophy tradition while also being the main proponent of the view that philosophy is not merely conceptual analysis. His major writings include "Two Dogmas of Empiricism" (1951), which attacked the distinction between analytic and synthetic propositions and advocated a form of semantic holism, and Word and Object (1960), which further developed these positions and introduced Quine's famous indeterminacy of translation thesis, advocating a behaviorist theory of meaning. He also developed an influential naturalized epistemology that tried to provide "an improved scientific explanation of how we have developed elaborate scientific theories on the basis of meager sensory input."[5] He is also important in philosophy of science for his "systematic attempt to understand science from within the resources of science itself"[5] and for his conception of philosophy as continuous with science. This led to his famous quip that "philosophy of science is philosophy enough."[6] In philosophy of mathematics, he and his Harvard colleague Hilary Putnam developed the "Quine–Putnam indispensability thesis," an argument for the reality of mathematical entities.[7]
Biography
According to his autobiography, The Time of My Life (1986), Quine grew up in Akron, Ohio, where he lived with his parents and older brother Robert C. His father, Cloyd R., was a manufacturing entrepreneur and his mother, Harriett E. (also known as "Hattie" according to the 1920 census), was a schoolteacher and later a housewife.[1] He received his B.A. in mathematics from Oberlin College in 1930, and his Ph.D. in philosophy from Harvard University in 1932. His thesis supervisor was Alfred North Whitehead. He was then appointed a Harvard Junior Fellow, which excused him from having to teach for four years. During the academic year 1932–33, he travelled in Europe thanks to a Sheldon fellowship, meeting Polish logicians (including Alfred Tarski) and members of the Vienna Circle (including Rudolf Carnap), as well as the logical positivist A.J. Ayer.[1]
It was through Quine's good offices that Alfred Tarski was invited to attend the September 1939 Unity of Science Congress in Cambridge. To attend that Congress, Tarski sailed for the USA on the last ship to leave Danzig before the Third Reich invaded Poland. Tarski survived the war and worked another 44 years in the USA.
During World War II, Quine lectured on logic in Brazil, in Portuguese, and served in the United States Navy in a military intelligence role, deciphering messages from German submarines, and reaching the rank of Lieutenant Commander.[1]
At Harvard, Quine helped supervise the Harvard theses of, among others, Donald Davidson, David Lewis, Daniel Dennett, Gilbert Harman, Dagfinn Føllesdal, Hao Wang, Hugues LeBlanc and Henry Hiz. For the academic year 1964–1965, Quine was a Fellow on the faculty in the Center for Advanced Studies at Wesleyan University.[8]
Quine was an atheist.[9]
Quine had four children by two marriages.[1] Guitarist Robert Quine was his nephew.
Political beliefs
Quine was politically conservative, but the bulk of his writing was in technical areas of philosophy removed from direct political issues.[10] He did, however, write in defense of several conservative positions: for example, in Quiddities: An Intermittently Philosophical Dictionary, he wrote a defense of moral censorship;[11] while, in his autobiography, he made some criticisms of American postwar academic culture.[12][13]
Quine, like many philosophers in the Anglo-American "analytic" tradition, was critical of Jacques Derrida; in 1992, Quine led an unsuccessful petition to stop Cambridge University from granting Derrida an honorary degree. Such criticism was, according to Derrida, directed at Derrida "no doubt because deconstructions query or put into question a good many divisions and distinctions, for example the distinction between the pretended neutrality of philosophical discourse, on the one hand, and existential passions and drives on the other, between what is public and what is private, and so on."[14] Quine regarded Derrida's work as pseudophilosophy or sophistry.[15]
Work
Quine's Ph.D. thesis and early publications were on formal logic and set theory. Only after World War II did he, by virtue of seminal papers on ontology, epistemology and language, emerge as a major philosopher. By the 1960s, he had worked out his "naturalized epistemology" whose aim was to answer all substantive questions of knowledge and meaning using the methods and tools of the natural sciences. Quine roundly rejected the notion that there should be a "first philosophy", a theoretical standpoint somehow prior to natural science and capable of justifying it. These views are intrinsic to his naturalism.
Quine could lecture in French, Spanish, Portuguese and German, as well as his native English. But like the logical positivists, he evinced little interest in the philosophical canon: only once did he teach a course in the history of philosophy, on Hume. Quine has an Erdős number of 3.[16]
Rejection of the analytic–synthetic distinction
In the 1930s and 1940s, discussions with Rudolf Carnap, Nelson Goodman and Alfred Tarski, among others, led Quine to doubt the tenability of the distinction between "analytic" statements — those true simply by the meanings of their words, such as "All bachelors are unmarried" — and "synthetic" statements, those true or false by virtue of facts about the world, such as "There is a cat on the mat." This distinction was central to logical positivism. Although Quine is not normally associated with verificationism, some philosophers believe the tenet is not incompatible with his general philosophy of language, citing his Harvard colleague B. F. Skinner and the analysis of language in Skinner's Verbal Behavior.[17]
Like other analytic philosophers before him, Quine accepted the definition of "analytic" as "true in virtue of meaning alone". Unlike them, however, he concluded that ultimately the definition was circular. In other words, Quine accepted that analytic statements are those that are true by definition, then argued that the notion of truth by definition was unsatisfactory.
Quine's chief objection to analyticity concerns the notion of synonymy (sameness of meaning): a sentence is analytic just in case substituting synonyms for synonyms turns it into a logical truth, such as "All black things are black". The objection to synonymy hinges upon the problem of collateral information. We intuitively feel that there is a distinction between "All unmarried men are bachelors" and "There have been black dogs", but a competent English speaker will assent to both sentences under all conditions, since such speakers also have access to collateral information bearing on the historical existence of black dogs. Quine maintains that there is no principled distinction between universally known collateral information and conceptual or analytic truths.
Another approach to Quine's objection to analyticity and synonymy emerges from the modal notion of logical possibility. A traditional Wittgensteinian view of meaning held that each meaningful sentence was associated with a region in the space of possible worlds. Quine finds the notion of such a space problematic, arguing that there is no distinction between those truths which are universally and confidently believed and those which are necessarily true.
Confirmation holism and ontological relativity
The central theses underlying the indeterminacy of translation and other extensions of Quine's work are ontological relativity and the related doctrine of confirmation holism. The premise of confirmation holism is that all theories (and the propositions derived from them) are under-determined by empirical data (data, sensory-data, evidence); although some theories are not justifiable, failing to fit with the data or being unworkably complex, there are many equally justifiable alternatives. While the Greeks' assumption that (unobservable) Homeric gods exist is false, and our supposition of (unobservable) electromagnetic waves is true, both are to be justified solely by their ability to explain our observations.
Quine concluded his "Two Dogmas of Empiricism" as follows:
As an empiricist I continue to think of the conceptual scheme of science as a tool, ultimately, for predicting future experience in the light of past experience. Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer . . . For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits.
Quine's ontological relativism (evident in the passage above) led him to agree with Pierre Duhem that for any collection of empirical evidence, there would always be many theories able to account for it. However, Duhem's holism is much more restricted and limited than Quine's. For Duhem, underdetermination applies only to physics or possibly to natural science, while for Quine it applies to all of human knowledge. Thus, while it is possible to verify or falsify whole theories, it is not possible to verify or falsify individual statements. Almost any particular statement can be saved, given sufficiently radical modifications of the containing theory. For Quine, scientific thought forms a coherent web in which any part could be altered in the light of empirical evidence, and in which no empirical evidence could force the revision of a given part.
Quine's writings have led to the wide acceptance of instrumentalism in the philosophy of science.
Existence and its contrary
The problem of non-referring names is an old puzzle in philosophy, which Quine captured eloquently when he wrote,
"A curious thing about the ontological problem is its simplicity. It can be put into three Anglo-Saxon monosyllables: 'What is there?' It can be answered, moreover, in a word—'Everything'—and everyone will accept this answer as true."18
More directly, the controversy goes,
"How can we talk about Pegasus? To what does the word 'Pegasus' refer? If our answer is, 'Something,' then we seem to believe in mystical entities; if our answer is, 'nothing', then we seem to talk about nothing and what sense can be made of this? Certainly when we said that Pegasus was a mythological winged horse we make sense, and moreover we speak the truth! If we speak the truth, this must be truth about something. So we cannot be speaking of nothing."
Quine resists the temptation to say that non-referring terms are meaningless for reasons made clear above. Instead he tells us that we must first determine whether our terms refer or not before we know the proper way to understand them. However, Czesław Lejewski criticizes this belief for reducing the matter to empirical discovery when it seems we should have a formal distinction between referring and non-referring terms or elements of our domain. Lejewski writes further,
"This state of affairs does not seem to be very satisfactory. The idea that some of our rules of inference should depend on empirical information, which may not be forthcoming, is so foreign to the character of logical inquiry that a thorough re-examination of the two inferences [existential generalization and universal instantiation] may prove worth our while."
Lejewski then goes on to offer a description of free logic, which he claims accommodates an answer to the problem.
Lejewski also points out that free logic additionally can handle the problem of the empty set for statements like $\forall x\,Fx \rightarrow \exists x\,Fx$. Quine had considered the problem of the empty set unrealistic, which left Lejewski unsatisfied.[19]
Logic
Over the course of his career, Quine published numerous technical and expository papers on formal logic, some of which are reprinted in his Selected Logic Papers and in The Ways of Paradox.
Quine confined logic to classical bivalent first-order logic, hence to truth and falsity under any (nonempty) universe of discourse. Hence the following were not logic for Quine: higher-order logic, set theory (which he regarded as genuinely mathematical rather than logical) and quantified modal logic.
Quine wrote three undergraduate texts on formal logic:
• Elementary Logic. While teaching an introductory course in 1940, Quine discovered that extant texts for philosophy students did not do justice to quantification theory or first-order predicate logic. Quine wrote this book in 6 weeks as an ad hoc solution to his teaching needs.
• Methods of Logic. The four editions of this book resulted from a more advanced undergraduate course in logic Quine taught from the end of World War II until his 1978 retirement.
• Philosophy of Logic. A concise and witty undergraduate treatment of a number of Quinian themes, such as the prevalence of use-mention confusions, the dubiousness of quantified modal logic, and the non-logical character of higher-order logic.
Mathematical Logic is based on Quine's graduate teaching during the 1930s and 40s. It shows that much of what Principia Mathematica took more than 1000 pages to say can be said in 250 pages. The proofs are concise, even cryptic. The last chapter, on Gödel's incompleteness theorem and Tarski's indefinability theorem, along with the article Quine (1946), became a launching point for Raymond Smullyan's later lucid exposition of these and related results.
Quine's work in logic gradually became dated in some respects. Techniques he did not teach and discuss include analytic tableaux, recursive functions, and model theory. His treatment of metalogic left something to be desired. For example, Mathematical Logic does not include any proofs of soundness and completeness. Early in his career, the notation of his writings on logic was often idiosyncratic. His later writings nearly always employed the now-dated notation of Principia Mathematica. Set against all this are the simplicity of his preferred method (as exposited in his Methods of Logic) for determining the satisfiability of quantified formulas, the richness of his philosophical and linguistic insights, and the fine prose in which he expressed them.
Most of Quine's original work in formal logic from 1960 onwards was on variants of his predicate functor logic, one of several ways that have been proposed for doing logic without quantifiers. For a comprehensive treatment of predicate functor logic and its history, see Quine (1976). For an introduction, see chpt. 45 of his Methods of Logic.
Quine was very warm to the possibility that formal logic would eventually be applied outside of philosophy and mathematics. He wrote several papers on the sort of Boolean algebra employed in electrical engineering, and with Edward J. McCluskey, devised the Quine–McCluskey algorithm of reducing Boolean equations to a minimum covering sum of prime implicants.
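To illustrate the core of that algorithm, here is a minimal Python sketch of its prime-implicant step (an illustrative reconstruction: the helper names and the three-variable example are our own, and the final minimum-cover selection over the prime implicants is omitted):

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants (bit strings with '-' wildcards) that differ in
    exactly one determined bit; return None if they cannot be combined."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        return a[:diff[0]] + '-' + a[diff[0] + 1:]
    return None

def prime_implicants(minterms, nbits):
    """Quine-McCluskey step 1: repeatedly combine implicants differing in
    one bit; any implicant that never combines is prime."""
    current = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while current:
        combined, used = set(), set()
        for a, b in combinations(sorted(current), 2):
            c = combine(a, b)
            if c is not None:
                combined.add(c)
                used.update((a, b))
        primes |= current - used
        current = combined
    return primes

# f(x, y, z) with minterms 0, 1, 2, 5, 6, 7: six prime implicants emerge.
print(sorted(prime_implicants({0, 1, 2, 5, 6, 7}, 3)))
```

A covering step would then choose a minimum subset of these prime implicants whose union covers every minterm, giving the minimum sum of products.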
Set theory
While his contributions to logic include elegant expositions and a number of technical results, it is in set theory that Quine was most innovative. He always maintained that mathematics required set theory and that set theory was quite distinct from logic. He flirted with Nelson Goodman's nominalism for a while, but backed away when he failed to find a nominalist grounding of mathematics.
Over the course of his career, Quine proposed three variants of axiomatic set theory, each including the axiom of extensionality:
• New Foundations, NF, creates and manipulates sets using a single axiom schema for set admissibility, namely an axiom schema of stratified comprehension, whereby all individuals satisfying a stratified formula compose a set. A stratified formula is one that type theory would allow, were the ontology to include types. However, Quine's set theory does not feature types. The metamathematics of NF are curious. NF allows many "large" sets the now-canonical ZFC set theory does not allow, even sets for which the axiom of choice does not hold. Since the axiom of choice holds for all finite sets, the failure of this axiom in NF proves that NF includes infinite sets. The (relative) consistency of NF is an open question. A modification of NF, NFU, due to R. B. Jensen and admitting urelements (entities that can be members of sets but that lack elements), turns out to be consistent relative to Peano arithmetic, thus vindicating the intuition behind NF. NF and NFU are the only Quinian set theories with a following. For a derivation of foundational mathematics in NF, see Rosser (1952); a toy stratification check is also sketched below, after this list;
• The set theory of Mathematical Logic is NF augmented by the proper classes of Von Neumann–Bernays–Gödel set theory, except axiomatized in a much simpler way;
• The set theory of Set Theory and Its Logic does away with stratification and is almost entirely derived from a single axiom schema. Quine derived the foundations of mathematics once again. This book includes the definitive exposition of Quine's theory of virtual sets and relations, and surveyed axiomatic set theory as it stood circa 1960. However, Fraenkel, Bar-Hillel and Levy (1973) do a better job of surveying set theory as it stood at mid-century.
All three set theories admit a universal class, but since they are free of any hierarchy of types, they have no need for a distinct universal class at each type level.
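To make the notion of stratification concrete, the following small checker (our own illustration, not Quine's notation) tries to assign an integer type to each variable so that every membership atom "x ∈ y" forces type(y) = type(x) + 1 and every equality atom forces equal types; a formula is stratified exactly when such an assignment exists:

```python
def stratify(membership, equality):
    """Return a type assignment for the variables, or None if the formula is
    not stratified. Atoms are given as variable pairs: membership [('x','y')]
    means 'x is a member of y'; equality pairs demand equal types."""
    adj = {}
    def add(u, v, d):  # edge demanding type(v) = type(u) + d
        adj.setdefault(u, []).append((v, d))
        adj.setdefault(v, []).append((u, -d))
    for x, y in membership:
        add(x, y, 1)
    for x, y in equality:
        add(x, y, 0)
    types = {}
    for start in adj:            # propagate types; a conflict means failure
        if start in types:
            continue
        types[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v, d in adj[u]:
                want = types[u] + d
                if v not in types:
                    types[v] = want
                    stack.append(v)
                elif types[v] != want:
                    return None  # inconsistent type constraints
    return types

print(stratify([('x', 'y'), ('y', 'z')], []))  # {'x': 0, 'y': 1, 'z': 2}
print(stratify([('x', 'x')], []))              # None: 'x in x' is unstratified
```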
Quine's set theory and its background logic were driven by a desire to minimize posits; each innovation is pushed as far as it can be pushed before further innovations are introduced. For Quine, there is but one connective, the Sheffer stroke, and one quantifier, the universal quantifier. All polyadic predicates can be reduced to one dyadic predicate, interpretable as set membership. His rules of proof were limited to modus ponens and substitution. He preferred conjunction to either disjunction or the conditional, because conjunction has the least semantic ambiguity. He was delighted to discover early in his career that all of first order logic and set theory could be grounded in a mere two primitive notions: abstraction and inclusion. For an elegant introduction to the parsimony of Quine's approach to logic, see his "New Foundations for Mathematical Logic," ch. 5 in his From a Logical Point of View.
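As a small check of this economy (our own example), negation, conjunction and disjunction can all be recovered from the Sheffer stroke alone, as a truth-table verification confirms:

```python
def nand(p, q):
    """The Sheffer stroke: true unless both arguments are true."""
    return not (p and q)

def neg(p):      return nand(p, p)
def conj(p, q):  return nand(nand(p, q), nand(p, q))
def disj(p, q):  return nand(nand(p, p), nand(q, q))

# Verify the definitions against the intended connectives on all inputs.
for p in (False, True):
    for q in (False, True):
        assert neg(p) == (not p)
        assert conj(p, q) == (p and q)
        assert disj(p, q) == (p or q)
print("All three connectives recovered from the Sheffer stroke alone.")
```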
Quine's epistemology
Just as he challenged the dominant analytic–synthetic distinction, Quine also took aim at traditional normative epistemology. According to Quine, normative epistemology is the approach that issues "ought" claims about the conditions of knowledge. This approach, he argued, has failed to give us any real understanding of the necessary and sufficient conditions for knowledge. Quine recommended that, as an alternative, we look to natural sciences like psychology for a full explanation of knowledge, replacing the traditional epistemological project wholesale. Quine's proposal is extremely controversial among contemporary philosophers and has several important critics, with Jaegwon Kim the most prominent among them.[20]
Bibliography
Selected books
• 1934. A System of Logistic. Harvard Univ. Press.[21]
• 1951 (1940). Mathematical Logic. Harvard Univ. Press. ISBN 0-674-55451-5.
• 1966. Selected Logic Papers. New York: Random House.
• 1970 (2nd ed., 1978). With J. S. Ullian. The Web of Belief. New York: Random House.
• 1980 (1941). Elementary Logic. Harvard Univ. Press. ISBN 0-674-24451-6.
• 1982 (1950). Methods of Logic. Harvard Univ. Press.
• 1980 (1953). From a Logical Point of View. Harvard Univ. Press. ISBN 0-674-32351-3. Contains "Two Dogmas of Empiricism."
• 1960. Word and Object. MIT Press. ISBN 0-262-67001-1. The closest thing Quine wrote to a philosophical treatise; chapter 2 sets out the indeterminacy of translation thesis.
• 1974 (1971). The Roots of Reference. Open Court Publishing Company. ISBN 0-8126-9101-6. Developed from Quine's Carus Lectures.
• 1976 (1966). The Ways of Paradox. Harvard Univ. Press.
• 1969. Ontological Relativity and Other Essays. Columbia Univ. Press. ISBN 0-231-08357-2. Contains chapters on ontological relativity, naturalized epistemology, and natural kinds.
• 1969 (1963). Set Theory and Its Logic. Harvard Univ. Press.
• 1985. The Time of My Life: An Autobiography. Cambridge: MIT Press. ISBN 0-262-17003-5. 1986: Harvard Univ. Press.
• 1986 (1970). The Philosophy of Logic. Harvard Univ. Press.
• 1987. Quiddities: An Intermittently Philosophical Dictionary. Harvard Univ. Press. ISBN 0-14-012522-1. A work of essays, many subtly humorous, for lay readers, very revealing of the breadth of his interests.
• 1992 (1990). Pursuit of Truth. Harvard Univ. Press. A short, lively synthesis of his thought for advanced students and general readers not fooled by its simplicity. ISBN 0-674-73951-5.
Important articles
• 1946, "Concatenation as a basis for arithmetic." Reprinted in his Selected Logic Papers. Harvard Univ. Press.
• 1948, "On What There Is", Review of Metaphysics. Reprinted in his 1953 From a Logical Point of View. Harvard University Press.
• 1951, "Two Dogmas of Empiricism", The Philosophical Review 60: 20–43. Reprinted in his 1953 From a Logical Point of View. Harvard University Press.
• 1956, "Quantifiers and Propositional Attitudes," Journal of Philosophy 53. Reprinted in his 1976 Ways of Paradox. Harvard Univ. Press: 185–96.
• 1969, "Epistemology Naturalized" in Ontological Relativity and Other Essays. New York: Columbia University Press: 69–90.
Notes
1. ^ "So who *is* the most important philosopher of the past 200 years?" Leiter Reports. Leiterreports.typepad.com. 11 March 2009. Accessed 8 March 2010.
2. ^ "Prize winner page". The Royal Swedish Academy of Sciences. Kva.se. Retrieved 29 August 2010.
3. ^ "Willard Van Orman Quine". Inamori Foundation. Retrieved 15 December 2012.
4. ^ a b "Quine's Philosophy of Science". Internet Encyclopedia of Philosophy. Iep.utm.edu. 27 July 2009. Accessed 8 March 2010.
5. ^ "Mr Strawson on Logical Theory". WV Quine. Mind Vol. 62 No. 248. Oct. 1953.
6. ^ Colyvan, Mark, "Indispensability Arguments in the Philosophy of Mathematics", The Stanford Encyclopedia of Philosophy (Fall 2004 Edition), Edward N. Zalta (ed.)
7. ^ "Guide to the Center for Advanced Studies Records, 1958–1969". Weselyan University. Wesleyan.edu. Accessed 8 March 2010.
8. ^ The Philosophy of W.V. Quine. Open Court. 1986. p. 6. ISBN 9780812690101. "In my third year of high school I walked often with my new Jamaican friends, Fred and Harold Cassidy, trying to convert them from their Episcopalian faith to atheism."
9. ^ Wall Street Journal obituary for W V Quine – Jan 4 2001
10. ^ Quiddities: An Intermittently Philosophical Dictionary, entries for Tolerance (pp. 206–8) and Freedom (p.69)
11. ^ "Paradoxes of Plenty" in Theories and Things p.197
12. ^ The Time of My Life: An Autobiography, pp. 352–3
13. ^ The 'Derrida Affair' at Cambridge University, from "Honoris Causa" pp. 409–413
14. ^ J.E. D'Ulisse Derrida (1930–2004), New Partisan 12.24.2004
15. ^ "MR: Collaboration Distance". American Mathematical Society. Ams.org. Retrieved 29 August 2010.
16. ^ Prawitz, Dag. 'Quine and Verificationism.' In Inquiry, Stockholm, 1994, pp 487–494
17. ^ W.V.O. Quine, "On What There Is" The Review of Metaphysics, New Haven 1948, 2, 21
18. ^ Czeslaw Lejewski, "Logic and Existence" British Journal for the Philosophy of Science Vol. 5 (1954–5), pp. 104–119
19. ^ "Naturalized Epistemology". Stanford Encyclopedia of Philosophy. Plato.stanford.edu. 5 July 2001. Accessed 8 March 2010.
20. ^ Church, Alonzo (1935). "Review: A System of Logistic by Willard Van Orman Quine". Bull. Amer. Math. Soc. 41 (9): 598–603.
• Gibson, Roger F., ed., 2004. The Cambridge Companion to Quine. Cambridge University Press. ISBN 0521639492.
• Gibson, Roger F., 1988. The Philosophy of W.V. Quine: An Expository Essay. Tampa: University of South Florida.
• Gibson, Roger F., 1988. Enlightened Empiricism: An Examination of W. V. Quine's Theory of Knowledge. Tampa: University of South Florida.
• Gibson, Roger F., ed., 2004. Quintessence: Basic Readings from the Philosophy of W. V. Quine. Harvard Univ. Press.
• Gibson, Roger F., and Barrett, R., eds., 1990. Perspectives on Quine. Oxford: Blackwell.
• Gochet, Paul, 1978. Quine en perspective. Paris: Flammarion.
• Godfrey-Smith, Peter, 2003. Theory and Reality: An Introduction to the Philosophy of Science.
• Grattan-Guinness, Ivor, 2000. The Search for Mathematical Roots 1870–1940. Princeton University Press.
• Grice, Paul, and Peter Strawson, 1956. "In Defense of a Dogma". The Philosophical Review 65.
• Hahn, L. E., and Schilpp, P. A., eds., 1986. The Philosophy of W. V. Quine (The Library of Living Philosophers). Open Court.
• Köhler, Dieter, 1999/2003. Sinnesreize, Sprache und Erfahrung: eine Studie zur Quineschen Erkenntnistheorie. Ph.D. thesis, Univ. of Heidelberg.
• Murphey, Murray, 2012. The Development of Quine's Philosophy. Heidelberg: Springer (Boston Studies in the Philosophy of Science, 291).
• Orenstein, Alex, 2002. W.V. Quine. Princeton University Press.
• Putnam, Hilary, 1990. "The Greatest Logical Positivist". Reprinted in Realism with a Human Face, ed. James Conant. Cambridge, MA: Harvard University Press.
• Rosser, John Barkley, 1952. "The axiom of infinity in Quine's new foundations". Journal of Symbolic Logic 17 (4): 238–242.
• Valore, Paolo, 2001. Questioni di ontologia quineana. Milano: Cusi.
---
# Math Help - Alpha Particles? (Don't really know a good title for this)
1. ## Alpha Particles? (Don't really know a good title for this)
A steady beam of alpha particles (q = +2e) traveling with constant kinetic energy 20 MeV carries a current of 0.25 $\mu A$.
(a) If the beam is directed perpendicular to a flat surface, how many alpha particles strike the surface in 3.0s?
(b) At any instant, how many alpha particles are there in a given 20 cm length of the beam?
(c) Through what potential difference is it necessary to accelerate each alpha particle from rest to bring it to an energy of 20 MeV?
2. Originally Posted by Aryth
A steady beam of alpha particles (q = +2e) traveling with constant kinetic energy 20 MeV carries a current of 0.25 $\mu A$.
(a) If the beam is directed perpendicular to a flat surface, how many alpha particles strike the surface in 3.0s?
(b) At any instant, how many alpha particles are there in a given 20 cm length of the beam?
(c) Through what potential difference is it necessary to accelerate each alpha particle from rest to bring it to an energy of 20 MeV?
(a) Note that 1 A = 1 C/s. So convert q into coulombs, divide 0.25 x 10^-6 by that to get the number of particles arriving per second, and then multiply by 3.
(b) Calculate the speed of an alpha-particle and hence calculate the time it takes to travel 20 cm. Then calculate how many alpha particles strike the surface in that time.
(c) Is it over a particular distance?
3. (a) I got it, thanks
(b) We're studying circuits, Loops, Ohm's Law and all that... I don't have a formula or anything for $\alpha$-particles...
(c) It has no distance measurement. That's all it gives us.
4. Originally Posted by Aryth
(a) I got it, thanks
(b) We're studying circuits, Loops, Ohm's Law and all that... I don't have a formula or anything for $\alpha$-particles... Mr F says: Get the speed using the known K.E.
(c) It has no distance measurement. That's all it gives us.
I'll try to post on (c) later as I have no time now.
5. Awesome. I got b, thanks again. I have been pounding at c and can't figure it out...
6. Originally Posted by Aryth
Awesome. I got b, thanks again. I have been pounding at c and can't figure it out...
Consider this:
$E = Vq$

$V = \frac{E}{q}$
With $E = 20\ \text{MeV}$ and $q = 2e$, this gives $V = \frac{20\ \text{MeV}}{2e} = 10\ \text{MV}$.
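As a quick numeric check of all three parts (a sketch; it assumes non-relativistic kinematics, which is fine here since 20 MeV is small compared with the alpha particle's rest energy of about 3.7 GeV):

```python
import math

e = 1.602e-19          # elementary charge, C
q = 2 * e              # alpha particle charge, C
I = 0.25e-6            # beam current, A
E_J = 20e6 * e         # 20 MeV kinetic energy, in joules
m = 6.645e-27          # alpha particle mass, kg

# (a) Particles striking the surface in 3.0 s: N = I*t/q.
print(f"(a) N = {I * 3.0 / q:.2e}")                          # ~2.3e12 particles

# (b) Particles in a 20 cm length: N = I*(L/v)/q, with v from E = (1/2)mv^2.
v = math.sqrt(2 * E_J / m)
print(f"(b) v = {v:.2e} m/s, N = {I * (0.20 / v) / q:.0f}")  # ~5000 particles

# (c) Accelerating potential: V = E/q = 20 MeV / 2e = 10 MV.
print(f"(c) V = {E_J / q:.1e} V")                            # 1.0e7 V
```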
---
• A joint measurement is presented of the branching fractions $B^0_s\to\mu^+\mu^-$ and $B^0\to\mu^+\mu^-$ in proton-proton collisions at the LHC by the CMS and LHCb experiments. The data samples were collected in 2011 at a centre-of-mass energy of 7 TeV, and in 2012 at 8 TeV. The combined analysis produces the first observation of the $B^0_s\to\mu^+\mu^-$ decay, with a statistical significance exceeding six standard deviations, and the best measurement of its branching fraction so far. Furthermore, evidence for the $B^0\to\mu^+\mu^-$ decay is obtained with a statistical significance of three standard deviations. The branching fraction measurements are statistically compatible with SM predictions and impose stringent constraints on several theories beyond the SM.
• ### Imaging topological edge states in silicon photonics (1302.2153)
Topological features - global properties not discernible locally - emerge in systems from liquid crystals to magnets to fractional quantum Hall systems. Deeper understanding of the role of topology in physics has led to a new class of matter: topologically ordered systems. The best known examples are quantum Hall effects, where insensitivity to local properties manifests itself as conductance through edge states that is insensitive to defects and disorder. Current research in engineering topological order primarily focuses on analogies to quantum Hall systems, where the required magnetic field is synthesized in non-magnetic systems. Here, we realize synthetic magnetic fields for photons at room temperature, using linear silicon photonics. We observe, for the first time, topological edge states of light in a two-dimensional system and show their robustness against intrinsic and introduced disorder. Our experiment demonstrates the feasibility of using photonics to realize topological order in both the non-interacting and many-body regimes.
• ### Topologically Robust Transport of Photons in a Synthetic Gauge Field (1404.0090)
Electronic transport in low dimensions through a disordered medium leads to localization. The addition of gauge fields to disordered media leads to fundamental changes in the transport properties. For example, chiral edge states can emerge in two-dimensional systems with a perpendicular magnetic field. Here, we implement a "synthetic" gauge field for photons using silicon-on-insulator technology. By determining the distribution of transport properties, we confirm the localized transport in the bulk and the suppression of localization in edge states, using the "gold standard" for localization studies. Our system provides a new platform to investigate transport properties in the presence of synthetic gauge fields, which is important both from the fundamental perspective of studying photonic transport and for applications in classical and quantum information processing.
• ### Ultra-Sensitive Chip-Based Photonic Temperature Sensor Using Ring Resonator Structures (1312.5252)
Dec. 18, 2013 physics.optics
Resistance thermometry provides a time-tested method for taking temperature measurements. However, fundamental limits to resistance-based approaches have produced considerable interest in developing photonic temperature sensors to leverage advances in frequency metrology and to achieve greater mechanical and environmental stability. Here we show that silicon-based optical ring resonator devices can resolve temperature differences of 1 mK using the traditional wavelength scanning methodology. An even lower noise floor of 80 microkelvin for measuring temperature differences is achieved in the side-of-fringe, constant-power mode measurement.
• ### Modulations to molecular high order harmonic generation by electron de Broglie wave (0801.4436)
We present a new theory in which molecular high-order harmonic generation in an intense laser field is determined by the molecule's internal symmetry and by the momentum distribution of the tunneling-ionized electron. The internal symmetry determines the quantum interference form of the returning electron inside the molecule. The electron momentum distribution determines the relative interference strength of each individual electron de Broglie wave. All the individual electron de Broglie wave interferences add together to collectively modulate the molecular high harmonic generation. We specifically discuss the suppression of generation at adjacent harmonic orders and the dependence of molecular high harmonic generation on laser intensity and molecular axis alignment. Our theoretical results are in good agreement with the experimental observations.
• ### Profile-Kernel likelihood inference with diverging number of parameters (math/0701004)
Sept. 21, 2007 math.ST, stat.TH
The generalized varying coefficient partially linear model with a growing number of predictors arises in many contemporary scientific endeavors. In this paper we address both the theoretical and the practical sides of profile likelihood estimation and inference. When the number of parameters grows with sample size, the existence and asymptotic normality of the profile likelihood estimator are established under some regularity conditions. Profile likelihood ratio inference for the growing number of parameters is proposed and the Wilks phenomenon is demonstrated. A new algorithm, called the accelerated profile-kernel algorithm, for computing the profile-kernel estimator is proposed and investigated. Simulation studies show that the resulting estimates are as efficient as the fully iterative profile-kernel estimates. For moderate sample sizes, our proposed procedure saves much computational time over the fully iterative profile-kernel one and gives more stable estimates. A set of real data is analyzed using our proposed algorithm.
• ### To how many simultaneous hypothesis tests can normal, Student's t or bootstrap calibration be applied? (math/0701003)
Dec. 29, 2006 math.ST, stat.TH
In the analysis of microarray data, and in some other contemporary statistical problems, it is not uncommon to apply hypothesis tests in a highly simultaneous way. The number, $\nu$ say, of tests used can be much larger than the sample sizes, $n$, to which the tests are applied, yet we wish to calibrate the tests so that the overall level of the simultaneous test is accurate. Often the sampling distribution is quite different for each test, so there may not be an opportunity for combining data across samples. In this setting, how large can $\nu$ be, as a function of $n$, before level accuracy becomes poor? In the present paper we answer this question in cases where the statistic under test is of Student's $t$ type. We show that if either Normal or Student's $t$ distribution is used for calibration then the level of the simultaneous test is accurate provided $\log\nu$ increases at a strictly slower rate than $n^{1/3}$ as $n$ diverges. On the other hand, if bootstrap methods are used for calibration then we may choose $\log\nu$ almost as large as $n^{1/2}$ and still achieve asymptotic level accuracy. The implications of these results are explored both theoretically and numerically.
• ### Efficient generation of correlated photon pairs in a microstructure fiber (quant-ph/0505211)
May 27, 2005 quant-ph
We report efficient generation of correlated photon pairs through degenerate four-wave mixing in microstructure fibers. With 735.7 nm pump pulses producing conjugate signal (688.5 nm) and idler (789.8 nm) photons in a 1.8 m microstructure fiber, we detect photon pairs at a rate of 37.6 kHz with a coincidence/accidental contrast of 10:1 with a full-width-at-half-maximum bandwidth of 0.7 nm. This is the highest rate reported to date in a fiber-based photon source. The nonclassicality of this source, as defined by the Zou-Wang-Mandel inequality, is violated by 1100 times the uncertainty.
---
Article Text
Original research
Efficacy of a low FODMAP diet in irritable bowel syndrome: systematic review and network meta-analysis
1. Christopher J. Black,1,2
2. Heidi M. Staudacher,3
3. Alexander C. Ford1,2

1 Leeds Institute of Medical Research at St. James's, University of Leeds, Leeds, UK
2 Leeds Gastroenterology Institute, Leeds Teaching Hospitals NHS Trust, Leeds, UK
3 IMPACT (the Institute for Mental and Physical Health and Clinical Translation), Food & Mood Centre, Deakin University, Geelong, Victoria, Australia

Correspondence to Professor Alexander C. Ford, Leeds Institute of Medical Research at St. James's, University of Leeds, Leeds, UK; alexf12399{at}yahoo.com
## Abstract
Objective A diet low in fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (FODMAP) is recommended for irritable bowel syndrome (IBS), if general lifestyle and dietary advice fails. However, although the impact of a low FODMAP diet on individual IBS symptoms has been examined in some randomised controlled trials (RCTs), there has been no recent systematic assessment, and individual trials have studied numerous alternative or control interventions, meaning the best comparator is unclear. We performed a network meta-analysis addressing these uncertainties.
Design We searched the medical literature through to 2 April 2021 to identify RCTs of a low FODMAP diet in IBS. Efficacy was judged using dichotomous assessment of improvement in global IBS symptoms or improvement in individual IBS symptoms, including abdominal pain, abdominal bloating or distension, and bowel habit. Data were pooled using a random effects model, with efficacy reported as pooled relative risks (RRs) with 95% CIs, and interventions ranked according to their P-score.
Results We identified 13 eligible RCTs (944 patients). Based on failure to achieve an improvement in global IBS symptoms, a low FODMAP diet ranked first vs habitual diet (RR of symptoms not improving=0.67; 95% CI 0.48 to 0.91, P-score=0.99), and was superior to all other interventions. Low FODMAP diet ranked first for abdominal pain severity, abdominal bloating or distension severity and bowel habit, although for the latter it was not superior to any other intervention. A low FODMAP diet was superior to British Dietetic Association (BDA)/National Institute for Health and Care Excellence (NICE) dietary advice for abdominal bloating or distension (RR=0.72; 95% CI 0.55 to 0.94). BDA/NICE dietary advice was not superior to any other intervention in any analysis.
Conclusion In a network analysis, low FODMAP diet ranked first for all endpoints studied. However, most trials were based in secondary or tertiary care and did not study effects of FODMAP reintroduction and personalisation on symptoms.
• irritable bowel syndrome
• meta-analysis
• diet
## Data availability statement
No additional data are available.
### Significance of this study
#### What is already known on this subject?
• Irritable bowel syndrome (IBS) is a common condition, and efficacy of most drug treatments is modest.
• Many patients with IBS report food-induced symptoms and are interested in making dietary modifications to manage symptoms.
• Management guidelines for IBS recommend a diet low in fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (FODMAPs), if general lifestyle and dietary advice have failed.
#### What are the new findings?
• A low FODMAP diet was ranked first for efficacy across all endpoints studied, compared with alternative interventions, including British Dietetic Association (BDA)/National Institute for Health and Care Excellence (NICE) dietary advice for people with IBS.
• A low FODMAP diet was significantly more efficacious than habitual diet for global IBS symptoms, and significantly more efficacious than BDA/NICE dietary advice for abdominal bloating or distension.
• BDA/NICE dietary advice was not superior to any of the other interventions in any of our analyses.
#### How might it impact on clinical practice in the foreseeable future?
• The low FODMAP diet ranked first for all endpoints studied, and was superior to all alternative interventions, including BDA/NICE dietary advice, supporting recommendations for its use in current management guidelines.
• Although guidelines recommend the use of a low FODMAP diet for IBS in primary care, trials conducted in this setting, and which include the FODMAP reintroduction and personalisation phases, are needed.
## Introduction
Irritable bowel syndrome (IBS), characterised by abdominal pain in association with altered stool form or frequency,1 2 affects 4%–10% of the general population at any point in time.3 4 The condition is a disorder of gut–brain interaction,5 and is chronic with a relapsing and remitting natural history.6 Costs to the health service and society are substantial,7 8 and the impact of symptoms on quality of life is considerable,9 10 with patients willing to accept a median 1% risk of sudden death with a hypothetical medication in return for a 99% chance of symptom cure.11 However, efficacy of most drugs is modest,12–15 and placebo response rates are high.16 Even novel, more selectively targeted, therapies developed over the last 20 years produce a therapeutic gain over placebo of only 10%–15% and are expensive.17 As a result, many are not widely available, and when adverse events arise during postmarketing surveillance,18–20 they are often withdrawn, or their use restricted.21 22
Patients may, therefore, turn to other approaches. Over 80% of people with IBS report food-related symptoms,23 and in one survey more than 60% of patients had made dietary changes to manage their IBS.24 Perhaps as a result, there has been renewed interest in dietary therapies as a treatment. One of the most widely accepted approaches is a diet that is low in fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (FODMAPs). FODMAPs are present in a range of dietary sources including certain fruit, vegetables, legumes and artificial sweeteners. Unabsorbed fructose, polyols and lactose increase small intestinal water content and indigestible fructans and galacto-oligosaccharides undergo microbial fermentation in the colon, and contribute to symptoms in some patients.25 26 The low FODMAP diet consists of three phases: a period of FODMAP restriction, ideally lasting 4–6 weeks, reintroduction of individual food items to determine tolerance to each, and personalisation to create a modified FODMAP-containing diet based on individual tolerance to FODMAPs identified in the second phase.27 Several randomised controlled trials (RCTs) and meta-analyses conducted over the last 10 years have shown that the first phase of the diet is efficacious for global IBS symptoms.28–33
In the UK, the National Institute for Health and Care Excellence (NICE) guideline for the management of IBS in primary care recommends that, if symptoms persist following general lifestyle and dietary advice, further dietary management, including a low FODMAP diet, is offered.34 Limitations of the current evidence base for a low FODMAP diet in IBS include the lack of any recent systematic assessment of its impact on individual IBS symptoms, although some RCTs have examined this, and the numerous different types of alternative or control interventions studied. These have included inactive controls such as habitual diet, sham dietary advice or even a high FODMAP diet, as well as alternative dietary advice for IBS, such as that from the British Dietetic Association (BDA)35 or NICE,34 which are largely empirical in nature. Both BDA and NICE advice include eating small, regular meals, keeping hydrated, reducing intake of tea, coffee, alcohol and carbonated fluids, and limiting fruit intake, and could be viewed as an active dietary intervention. However, there have been no RCTs of this approach versus habitual diet or a sham dietary intervention, and establishing its efficacy is crucial for addressing concerns about design bias in dietary trials.36
We conducted a network meta-analysis to estimate the efficacy of a low FODMAP diet in IBS, as well as the relative efficacy of the different comparators studied, for both global and individual IBS symptoms. Network meta-analysis allows indirect, as well as direct, comparisons to be made across different RCTs, increasing the number of participants’ data available for analysis, an advantage over published conventional pairwise meta-analyses. Importantly, it also allows a credible ranking system of the efficacy of different comparators to be developed, even in the absence of trials making direct comparisons. This may assist in developing a more robust design for future RCTs of a low FODMAP diet, in terms of which comparator should be selected to prevent overestimating its efficacy. It also allows the relative efficacy of alternative dietary advice to be examined vs ‘inactive’ control interventions.
## Methods
### Search strategy and selection criteria
We searched MEDLINE (1946 to 2 April 2021), EMBASE and EMBASE Classic (1947 to 2 April 2021), and the Cochrane central register of controlled trials. In addition, we searched ClinicalTrials.gov for unpublished trials or supplementary data for potentially eligible RCTs. We handsearched conference proceedings (Digestive Diseases Week, American College of Gastroenterology, United European Gastroenterology Week and the Asian Pacific Digestive Week) between 2001 and 2021 to identify trials published only in abstract form. Finally, we used bibliographies of all obtained articles to perform a recursive search.
RCTs examining the effect of a low FODMAP diet in adults (≥18 years) with IBS of any subtype were eligible (online supplemental table 1). Trials had to compare a low FODMAP diet with an alternative intervention. This could consist of any of habitual diet, sham dietary advice, a high FODMAP diet or alternative dietary advice, such as that for people with IBS from the BDA or NICE.34 35 Given the overlap between the latter two, we classed these as a single intervention. The first period of a cross-over RCT was eligible if efficacy data were provided prior to cross-over. We considered definitions of IBS based either on a clinician's opinion or on specific symptom-based criteria, for example the Rome criteria. We required a minimum treatment duration of 2 weeks.
Two investigators (CJB and ACF) conducted the literature search, independently from each other. We identified studies on IBS with the terms: IBS or functional diseases, colon (both as medical subject heading and free text terms), or IBS, spastic colon, irritable colon or functional adj5 bowel (as free-text terms). We combined these using the set operator AND with studies identified with the terms: fructan$, FODMAP$ or fructooligosaccharide (as free-text terms). There were no language restrictions. Two investigators (CJB and ACF) evaluated all abstracts identified by the search for eligibility, again independently from each other. We obtained all potentially relevant papers and evaluated them in more detail, using predesigned forms, to assess eligibility independently and according to the predefined criteria. We translated foreign language papers, where required. We resolved disagreements between investigators (CJB and ACF) by discussion.
### Outcome assessment
We assessed the efficacy of a low FODMAP diet in IBS, compared with the various alternative interventions, in terms of failure to respond to therapy, according to several endpoints of interest reported below. Other outcomes assessed included adverse events (total numbers of adverse events, as well as adverse events leading to study withdrawal, and individual adverse events), if reported.
### Data extraction
Two investigators (CJB and ACF) extracted all data independently onto a Microsoft Excel spreadsheet (XP professional edition; Microsoft Corp, Redmond, Washington, USA) as dichotomous outcomes (response or no response to therapy). We assessed efficacy according to the proportion of patients failing to achieve an improvement in the following: (1) global symptoms of IBS; (2) abdominal pain severity; (3) abdominal bloating or distension severity and (4) bowel habit. Where studies reported a dichotomous assessment of response to therapy according to these endpoints, for example a 50-point decrease in the IBS-SSS or a 30% improvement in abdominal pain severity on the IBS-SSS (approximating Food and Drug Administration (FDA)-recommended endpoints in drug trials in IBS), we extracted data from the article itself. Where studies reported mean individual symptom severity scores at baseline together with follow-up mean symptom severity scores and SD for these endpoints for each intervention arm, we imputed dichotomous responder and non-responder data using methodology previously described by Furukawa et al. 37 38 As an example, for a 30% improvement in abdominal pain severity on the IBS-SSS, this would be derived from the formula number of participants in each treatment arm at final follow-up × normal standard distribution. The latter corresponds to (70% of the baseline mean score – follow-up mean score)/follow-up SD. We contacted first and senior authors of studies to provide additional data for individual trials, where required.
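For intuition, the imputation reduces to one line of arithmetic per trial arm. A minimal sketch (our own illustration; the function name and the example numbers are hypothetical), treating follow-up scores as approximately normal and counting as responders those scoring below 70% of the baseline mean:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def imputed_responders(n, baseline_mean, followup_mean, followup_sd):
    """Impute responders in one trial arm, where response means a follow-up
    score below 70% of the baseline mean (a 30% improvement), assuming
    normally distributed follow-up scores (Furukawa-style imputation)."""
    z = (0.7 * baseline_mean - followup_mean) / followup_sd
    return round(n * norm_cdf(z))

# Hypothetical arm: 40 patients, baseline IBS-SSS 300, follow-up 210 (SD 90).
# 70% of baseline is 210, so z = 0 and half the arm (20 patients) respond.
print(imputed_responders(40, 300.0, 210.0, 90.0))
```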
We also extracted the following data for each trial, where available: country of origin, setting (primary, secondary or tertiary care), proportion of female patients, diagnostic criteria used to define IBS and proportion of patients with IBS according to subtype. We also recorded the duration of follow-up and mode of delivery of the low FODMAP diet and the alternative intervention, in terms of the intervention itself and the length of the initial consultation, where reported. We extracted data as intention-to-treat analyses, with dropouts assumed to be treatment failures (ie, no response to a low FODMAP diet or the comparator), wherever trial reporting allowed. If this was not clear from the original article, we performed an analysis on all patients with reported evaluable data.
### Quality assessment and risk of bias
We used the Cochrane risk of bias tool to assess this at the study level.39 Two investigators (CJB and ACF) performed this independently; we resolved disagreements by discussion. We recorded the method used to generate the randomisation schedule and conceal treatment allocation, as well as whether blinding was implemented for participants, personnel, and outcomes assessment, whether there was evidence of incomplete outcomes data, and whether there was evidence of selective reporting of outcomes.
### Data synthesis and statistical analysis
We performed a network meta-analysis using the frequentist model, with the statistical package ‘netmeta’ (V.0.9–0, https://cran.r-project.org/web/packages/netmeta/index.html) in R (V.4.0.2). We reported this according to the PRISMA extension statement for network meta-analyses,40 to explore direct and indirect treatment comparisons of the efficacy and safety of each intervention. Network meta-analysis results usually give a more precise estimate, compared with results from standard, pairwise analyses,41 42 and can rank interventions to inform clinical decisions.43
We examined the symmetry and geometry of the evidence by producing a network plot with node size corresponding to number of study subjects, and connection size corresponding to number of studies. We produced comparison adjusted funnel plots to explore publication bias or other small study effects, for all available comparisons, using Stata V.16 (StataCorp). This is a scatterplot of effect size versus precision, measured via the inverse of the SE. Symmetry around the effect estimate line indicates absence of publication bias, or small study effects.44 We produced a pooled relative risk (RR) with 95% CIs to summarise effect of each comparison tested, using a random effects model as a conservative estimate. We used an RR of failure to achieve each of the endpoints of interest, where if the RR was less than 1 and the 95% CI did not cross 1, there was a significant benefit of one intervention over another. This approach is the most stable, compared with RR of improvement, or using the OR, for some meta-analyses.45 In each RCT, direct comparisons were made between a low FODMAP diet and a single comparator, but there were no direct comparisons made between any of the alternative interventions themselves, meaning that there was insufficient direct evidence to perform consistency modelling to check the correlation between direct and indirect evidence across the network.46
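To see what random effects pooling involves, here is a generic DerSimonian-Laird estimator on log relative risks (a sketch of the standard method, not the internals of 'netmeta'; the 2x2 counts are invented):

```python
import math

def pooled_rr(studies):
    """Random-effects (DerSimonian-Laird) pooled relative risk. Each study is
    (events_trt, n_trt, events_ctl, n_ctl), where events count failures to
    improve, so RR < 1 favours the treatment."""
    y = []  # per-study log RR
    v = []  # per-study variance of the log RR
    for a, n1, c, n2 in studies:
        y.append(math.log((a / n1) / (c / n2)))
        v.append(1/a - 1/n1 + 1/c - 1/n2)
    w = [1/vi for vi in v]                               # fixed-effect weights
    ybar = sum(wi*yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi*(yi - ybar)**2 for wi, yi in zip(w, y))   # Cochran's Q
    c_ = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c_)             # between-study variance
    w_re = [1/(vi + tau2) for vi in v]                   # random-effects weights
    mu = sum(wi*yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(mu), (math.exp(mu - 1.96*se), math.exp(mu + 1.96*se)), tau2

studies = [(18, 40, 27, 41), (22, 50, 30, 48), (15, 35, 20, 36)]
rr, (lo, hi), tau2 = pooled_rr(studies)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f}), tau^2 = {tau2:.3f}")
```

The tau^2 printed here is the same between-study heterogeneity measure discussed below.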
Many meta-analyses use the I2 statistic to measure heterogeneity, which ranges between 0% and 100%.47 This statistic is easy to interpret and does not vary with the number of studies. However, the I2 value can increase with the number of patients included in the meta-analysis.48 We, therefore, assessed global statistical heterogeneity across all comparisons using the τ2 measure from the ‘netmeta’ statistical package. Estimates of τ2 of approximately 0.04, 0.16 and 0.36 are considered to represent a low, moderate and high degree of heterogeneity, respectively.49
We ranked both the low FODMAP diet and all comparators studied according to their P-score, which is a value between 0 and 1. P-scores are based solely on the point estimates and standard errors of the network estimates, and measure the mean extent of certainty that one intervention is better than another, averaged over all competing interventions.50 Higher scores indicate a greater probability of the intervention being ranked as best,50 but the magnitude of the P-score should be considered, as well as the treatment rank. As the mean value of the P-score is always 0.5 if individual interventions cluster around this value they are likely to be of similar efficacy. However, when interpreting the results, it is also important to take the RR and corresponding 95% CI for each comparison into account, rather than relying on rankings alone.51 In our primary analyses, we pooled data for the risk of being symptomatic at the final point of follow-up in each study for all included RCTs using an intention-to-treat analysis, but we also performed a priori analyses restricted to trials that used identical endpoints to judge efficacy, and trials that recruited patients with IBS with diarrhoea (IBS-D), or excluded those with IBS with constipation (IBS-C).
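The P-score itself has a simple closed form: for each pair of interventions, the certainty that one beats the other is the normal CDF of the standardized difference of their network estimates, averaged over all competitors. A toy illustration (the estimates and standard errors are invented, and for simplicity the SE of each difference is approximated from per-intervention SEs; lower log RR of failure is better):

```python
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_scores(est, se):
    """P-score of each intervention: the mean certainty of being better than
    each competitor, given network log-RR estimates of failure to improve
    (lower is better) and per-intervention standard errors."""
    scores = {}
    for i in est:
        certainty = [
            norm_cdf((est[j] - est[i]) / sqrt(se[i]**2 + se[j]**2))
            for j in est if j != i
        ]
        scores[i] = sum(certainty) / len(certainty)
    return scores

est = {"low FODMAP": -0.40, "BDA/NICE": -0.20, "habitual diet": 0.0}
se = {"low FODMAP": 0.15, "BDA/NICE": 0.18, "habitual diet": 0.15}
print(p_scores(est, se))  # low FODMAP ranks first, habitual diet last
```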
## Results
The search strategy generated 1231 citations, 79 of which appeared relevant and were retrieved for further assessment (online supplemental figure 1). Of these, we excluded 66 that did not fulfil eligibility criteria, leaving 13 eligible articles,28 29 33 52–61 which included 944 patients, 472 of whom were allocated to a low FODMAP diet. Twelve RCTs evaluated low FODMAP dietary advice,28 33 52–61 and one RCT evaluated a low FODMAP diet in which participants were provided with the majority of food to be consumed and advised about fluid choices throughout the duration of the intervention.29 In terms of the alternative intervention, 237 patients received BDA/NICE dietary advice for IBS in five RCTs,33 52–55 106 were allocated to habitual diet in four RCTs,28 29 56 57 76 were randomised to sham dietary advice in two trials,58 59 33 were allocated to alternative brief dietary advice in one RCT60 and 20 received a high FODMAP diet in one trial (online supplemental table 2).61 Agreement between investigators for trial eligibility was excellent (kappa statistic=0.82). Seven trials recruited only patients with IBS-D or excluded those with IBS-C specifically.28 33 53–55 58 59 Detailed characteristics of individual RCTs are provided in table 1.
Table 1
Characteristics of randomised controlled trials of a low FODMAP diet for IBS
All trials were published in full. We obtained extra data from the investigators of seven RCTs.53 55–60 Risk of bias for all included trials is reported in online supplemental table 3. No RCT was at low risk of bias across all domains, although nine trials were at low risk of bias across all domains other than double blinding.28 52 54–59 61 Dietary trials are inherently difficult to blind, but two trials stated that investigators were blinded to treatment allocation,33 55 and eight that patients were blinded.29 52–54 58–61 Endpoints used, or imputed, in each trial are provided in online supplemental table 4. Adverse events were not reported in sufficient detail in most trials to allow any meaningful pooling of data.
### Global IBS symptoms
Twelve RCTs provided extractable dichotomous data,28 29 33 52–59 61 and data were imputed for another study,60 meaning that all 13 trials contributed to this analysis. The network plot is provided in online supplemental figure 2. When data were pooled, there was no heterogeneity (τ2=0), and the funnel plot appeared symmetrical (online supplemental figure 3). However, there was evidence of funnel plot asymmetry when pooling pairwise data, suggesting publication bias or other small study effects (Egger’s test, p=0.043) (online supplemental figure 4). Compared with habitual diet, a low FODMAP diet was ranked first (RR of global IBS symptoms not improving=0.67; 95% CI 0.48 to 0.91, P-score=0.99) (figure 1). This means that the probability of a low FODMAP diet being the most efficacious when all interventions were compared with each other was 99%. Among alternative interventions, compared with habitual diet, BDA/NICE dietary advice was ranked second (RR=0.82; 95% CI 0.57 to 1.18, P-score=0.71) and high FODMAP diet last (P-score=0.10). Low FODMAP diet was superior to all other interventions, including BDA/NICE dietary advice (table 2). None of the alternative interventions was superior to habitual diet, or any of the other alternative interventions.
Figure 1
Forest plot for failure to achieve an improvement in global IBS symptoms. The P-score is the probability of each intervention being ranked as best in the network. BDA/NICE, British Dietetic Association/National Institute for Health and Care Excellence; FODMAP, fermentable oligosaccharides, disaccharides, monosaccharides, and polyols; IBS, irritable bowel syndrome; RR, relative risk.
Table 2
Summary treatment effects from the network meta-analysis for failure to achieve an improvement in global IBS symptoms
There were seven RCTs that used a 50-point decrease in the IBS-SSS to define response.52–57 61 When we restricted the analysis to these studies, low FODMAP diet was still ranked first, although it was no more efficacious than habitual diet (RR=0.76; 95% CI 0.53 to 1.11, P-score=0.97) (online supplemental figure 5). However, it was more efficacious than both BDA/NICE dietary advice and a high FODMAP diet (online supplemental table 5). There were no other significant differences. When we restricted the analysis to seven trials that recruited only patients with IBS-D, or excluded those with IBS-C specifically,28 33 53–55 58 59 low FODMAP diet again ranked first for global IBS symptoms (RR=0.41; 95% CI 0.20 to 0.82, P-score=0.99) (online supplemental figure 6) and was superior to all alternative interventions (online supplemental table 6). There were no significant differences between alternative interventions.
### Abdominal pain severity
There were 12 trials reporting data on effect on abdominal pain severity,28 33 52–61 recruiting 914 patients, 459 of whom received a low FODMAP diet. Five trials compared a low FODMAP diet with BDA/NICE dietary advice for IBS,33 52–55 three habitual diet,28 56 57 two sham dietary advice,58 59 one alternative brief dietary advice60 and one high FODMAP diet.61 The network plot is provided in online supplemental figure 7. When data were pooled, there was moderate heterogeneity (τ2=0.068), and the funnel plot appeared symmetrical (online supplemental figure 8), but again there was funnel plot asymmetry when pooling pairwise data (Egger’s test, p=0.025) (online supplemental figure 9). Compared with habitual diet, a low FODMAP diet ranked first, but it was not superior in terms of efficacy (RR of abdominal pain severity not improving=0.72; 95% CI 0.47 to 1.10, P-score=0.92) (figure 2). A low FODMAP diet was superior to sham dietary advice (table 3), but there were no other significant differences.
Figure 2
Forest plot for failure to achieve an improvement in abdominal pain severity. The P-score is the probability of each intervention being ranked as best in the network. BDA/NICE, British Dietetic Association/National Institute for Health and Care Excellence; FODMAP, fermentable oligosaccharides, disaccharides, monosaccharides, and polyols; RR, relative risk.
Table 3
Summary treatment effects from the network meta-analysis for failure to achieve an improvement in abdominal pain severity
There were nine RCTs that used an endpoint of a 30% improvement in abdominal pain severity on the IBS-SSS.33 52–54 56–59 61 Restricting the analysis to these studies, low FODMAP diet still ranked first, although again it was no more efficacious than habitual diet (RR=0.74; 95% CI 0.43 to 1.28, P-score=0.94) (online supplemental figure 10). However, it was more efficacious than sham dietary advice, although there were no other significant differences (online supplemental table 7). When we restricted the analysis to seven trials that recruited only patients with IBS-D, or excluded those with IBS-C specifically,28 33 53–55 58 59 low FODMAP diet again ranked first but was not superior to habitual diet (RR=0.63; 95% CI 0.22 to 1.81, P-score=0.91) (online supplemental figure 11). However, low FODMAP diet was again superior to sham dietary advice (online supplemental table 8). There were no significant differences between alternative interventions.
### Abdominal bloating or distension severity
The same 12 RCTs, recruiting 914 patients, provided data for effect on abdominal bloating or distension severity.28 33 52–61 The network plot is provided in online supplemental figure 12. There was moderate heterogeneity (τ2=0.058), and the funnel plot appeared symmetrical (online supplemental figure 13), with no evidence of funnel plot asymmetry when pooling pairwise data (Egger’s test, p=0.31). Compared with habitual diet, low FODMAP diet ranked first, but it was not superior in terms of efficacy (RR of abdominal bloating or distension severity not improving=0.71; 95% CI 0.47 to 1.06, P-score=0.82) (figure 3). However, a low FODMAP diet was superior to BDA/NICE dietary advice (table 4). There were no other significant differences.
Figure 3
Forest plot for failure to achieve an improvement in abdominal bloating or distension severity. The P-score is the probability of each intervention being ranked as best in the network. BDA/NICE, British Dietetic Association/National Institute for Health and Care Excellence; FODMAP, fermentable oligosaccharides, disaccharides, monosaccharides, and polyols; RR, relative risk.
Table 4
Summary treatment effects from the network meta-analysis for failure to achieve an improvement in abdominal bloating or distension severity
There were nine RCTs that used an endpoint of a 30% improvement in abdominal distension severity on the IBS-SSS.33 52–54 56–59 61 When we restricted the analysis to these studies, low FODMAP diet still ranked first, although again it was no more efficacious than habitual diet (RR=0.80; 95% CI 0.49 to 1.30, P-score=0.84) (online supplemental figure 14). However, it was more efficacious than BDA/NICE dietary advice, which ranked last (online supplemental table 9). There were no other significant differences. When we restricted the analysis to seven trials that recruited only patients with IBS-D or excluded those with IBS-C,28 33 53–55 58 59 low FODMAP diet again ranked first but was not superior to habitual diet (RR=0.46; 95% CI 0.18 to 1.20, P-score=0.86) (online supplemental figure 15). However, low FODMAP diet was superior to BDA/NICE dietary advice (online supplemental table 10). There were no significant differences between alternative interventions.
### Improvement in bowel habit
Ten trials provided data on effect on improvement in bowel habit,33 52–59 61 randomising 807 patients. Of these, 407 received a low FODMAP diet. Five trials compared a low FODMAP diet with BDA/NICE dietary advice for IBS,33 52–55 two habitual diet,56 57 two sham dietary advice58 59 and one high FODMAP diet.61 The network plot is provided in online supplemental figure 16. When data were pooled, there was moderate heterogeneity (τ2=0.071), and the funnel plot appeared symmetrical (online supplemental figure 17). However, there was funnel plot asymmetry when pooling pairwise data (Egger’s test, p=0.0034) (online supplemental figure 18). Compared with habitual diet, a low FODMAP diet ranked first, but again it was not superior in terms of efficacy (RR of bowel habit not improving=0.62; 95% CI 0.37 to 1.04, P-score=0.88) (figure 4). There were no significant differences between low FODMAP diet and any of the comparators (table 5).
Figure 4
Forest plot for failure to achieve an improvement in bowel habit. The P-score is the probability of each intervention being ranked as best in the network. BDA/NICE, British Dietetic Association/National Institute for Health and Care Excellence; FODMAP, fermentable oligosaccharides, disaccharides, monosaccharides, and polyols; RR, relative risk.
Table 5
Summary treatment effects from the network meta-analysis for failure to achieve an improvement in bowel habit
There were eight RCTs that used an endpoint of a 30% improvement in bowel habit on the IBS-SSS.52–54 56–59 61 When we restricted the analysis to these studies, low FODMAP diet ranked first, although it was no more efficacious than habitual diet (RR=0.60; 95% CI 0.31 to 1.18, P-score=0.84) (online supplemental figure 19) or any other alternative intervention (online supplemental table 11). When restricting the analysis to the six trials that recruited only patients with IBS-D or excluded those with IBS-C specifically,33 53–55 58 59 a low FODMAP diet again ranked first but was not superior to sham dietary advice (RR=0.82; 95% CI 0.51 to 1.32, P-score=0.87) (online supplemental figure 20), and there were no significant differences between any of the other interventions (online supplemental table 12).
## Discussion
This is the first systematic review and network meta-analysis of a low FODMAP diet for IBS, comparing its efficacy against alternative dietary advice for IBS, such as that provided by the BDA and NICE, as well as inactive control interventions. A low FODMAP diet ranked first for global IBS symptoms, and was superior to all alternative interventions studied, including BDA/NICE dietary advice. In terms of its effects on individual symptoms, a low FODMAP diet was superior to sham dietary advice for abdominal pain severity, and it was superior to BDA/NICE dietary advice for abdominal bloating or distension severity. We did not detect any significant effect of a low FODMAP diet on bowel habit when data from these trials were pooled. When we restricted the analysis to trials that used identical dichotomous endpoints to assess response to treatment, or trials excluding patients with IBS-C, results were broadly similar. Most trials did not report adverse events in detail, precluding any meaningful analysis.
We undertook the literature search, eligibility assessment and data extraction in duplicate and independently, with any discrepancies resolved by consensus. We used an intention-to-treat analysis, assuming all dropouts failed therapy, and pooled data with a random effects model, to reduce the likelihood that any beneficial effect of a low FODMAP diet in IBS, or the alternative or control interventions studied, has been overestimated. We also contacted authors of seven studies to obtain supplementary data to maximise the number of eligible RCTs in the network,53 55–60 and imputed dichotomous responder data using means and SD according to validated methods.37 38 This allowed us to include global IBS symptom data from three trials, and 242 patients, that would otherwise have been excluded altogether,53 56 57 as well as to study the effect of a low FODMAP diet on individual symptoms of abdominal pain severity, abdominal bloating or distension severity and improvement in bowel habit, using endpoints that were relatively standardised between trials, and which are closely aligned to those recommended by the FDA. This network meta-analysis, therefore, represents a considerable advance over previous pairwise meta-analyses.
There are some limitations. No trials were at low risk of bias, due to a lack of double blinding, although this is almost impossible in dietary trials, and 10 trials blinded either investigators or patients to treatment allocation. Based on quality assessment criteria intended for pharmacotherapy trials, the results of the network meta-analysis should be interpreted with caution, as trials that do not employ double blinding tend to overestimate the efficacy of the intervention studied.62 However, it could be argued that double blinding is simply not possible in dietary and other non-pharmacotherapy trials (eg, psychological therapies). Four of the RCTs restricted recruitment to patients with IBS-D,33 53–55 and a further three did not recruit patients with IBS-C,28 58 59 meaning the efficacy of a low FODMAP diet in those with IBS-C or IBS with a mixed stool pattern is less clear. Even though a low FODMAP diet is recommended as a dietary intervention in primary care,34 all but one of the trials were conducted in secondary or tertiary care.57 There was no heterogeneity in our analysis for global IBS symptoms, but moderate heterogeneity in our other analyses, which may relate to the mode of delivery and nature of the interventions studied. There was also evidence of funnel plot asymmetry for all analyses, except abdominal bloating or distension severity. Despite these limitations, the results of our study are still useful for informing treatment decisions for patients and can be used in future updates of evidence-based IBS management guidelines.17 32 34
Restriction of FODMAPs is not recommended long term, to minimise the risk of nutritional inadequacy. Further, short-term alterations in the gastrointestinal microbiota have been reported, including a consistent finding of reduced abundance of Bifidobacteria.28 59 63 Although the long-term consequences of these changes are unknown, reintroduction of high FODMAP foods to tolerance is a critical phase of the low FODMAP diet in clinical practice and may curb the impacts on dietary intake and the microbiota. However, very few of the included RCTs incorporated this phase of the diet into their design, meaning that the effect of FODMAP reintroduction on IBS symptoms remains unclear. One 4-week trial comparing a low FODMAP diet with BDA/NICE dietary advice reported data 12 weeks after FODMAP reintroduction, and still demonstrated a significant difference in responder rates favouring a low FODMAP diet at 16 weeks.54 Uncontrolled studies support the long-term efficacy of the diet after FODMAP reintroduction.64 65 However, RCTs are needed to confirm this, although it is acknowledged these are difficult to carry out, particularly with regard to blinding over longer periods of time and minimising attrition.66 Outside of a clinical trial setting, individual patients may struggle with the FODMAP restriction phase of the diet, although a real-world study demonstrated less than 10% of patients were non-adherent during this part of the intervention.67
Our results confirm that a low FODMAP diet is an efficacious treatment for global IBS symptoms in secondary and tertiary care. Importantly, a dietitian delivered counselling in all but one of the 12 low FODMAP dietary advice RCTs.60 These findings support the use of a low FODMAP diet under dietetic supervision, although it is important to point out that RCTs in primary care are lacking, which is in contrast with its placement in current NICE guidance for the management of IBS.34 The recent British Society of Gastroenterology guidelines for the management of IBS also recommend the use of a low FODMAP diet as a second-line dietary approach in those individuals who have not responded to first-line advice.32 These guidelines stated that it was likely to be beneficial for both global IBS symptoms, and abdominal pain, based on an update of a prior systematic review and pairwise meta-analysis.30 However, our analysis suggests that any effect on abdominal pain severity, versus alternative dietary advice or a habitual diet, is less certain.
## Data availability statement
No additional data are available.
## Acknowledgments
We are grateful to Vahideh Behrouz, Sutep Gonlachanvit, Ruth Harvie, Chung Owyang, Natalia Pedersen, and Bridgette Wilson for providing extra information and data from their studies.
|
# LaTeX Jobs
LaTeX is a document preparation and markup language for the TeX typesetting program. It is widely used in the academic and commercial world. If your business needs help with LaTeX typesetting, you can hire LaTeX freelancers. You can start simply by posting a LaTeX job in this portal.
3 jobs found, pricing in SGD
We will provide you with our website username and password. Your job is to type the same text, mathematical equations, and diagrams shown in the image, using the CKEditor provided with the form. You should have knowledge of LaTeX, MathJax, etc.
$190 (Avg Bid)
27 bids
How to write in TEXMAKER 4 days left
VERIFIED
I am looking for someone who can teach me how to write in LaTeX using the TEXMAKER editor.
$40 / hr (Avg Bid)
13 bids
Long term document writer job 4 days left
VERIFIED
I need 12 documents per day. Each document can be on a topic in history, economics, mathematics, or science: single spaced, 12-point font, minimum 6 pages. No copyrighted images, and the docs should not contain too many images. No copying from the internet; I am okay with it being paraphrased. NO DIRECT COPYING. Give me a price per month, not per doc or per word.
$190 (Avg Bid)
33 bids
### Freelance projects on Nubelo
by JamesSanchez1
|
# questions about using image preview in v1.6.0
I was happy to install The Archive v1.6.0 and esp. use the new image preview. Thanks for the work on it!
I was concerned that only 3 of 6 images I already had in my Archive would display.
I found a comment about maxInlineImagePreviewHeight setting and thought that might help but it didn't (and I see the previews do resize horizontally, so that seems good.)
I found two things that seem like issues:
1. .svg images don't display, where .jpg and .png do; I have some .svg diagrams (including ones I lifted from wikipedia); they do show in OSX preview.
2. Images with absolute path of form "file:///Users/..." don't preview but those with "/Users/..." do. I have been using the "file:///Users/..." format, because when I click on links like that they are opened in the Finder to the correct folder (for both outside files and images), but if the path is just "/Users/..." nothing opens. So perhaps this is an issue both with clicking on a link and also with image preview.
Thanks. (Sorry I didn't try these with the pre-release.)
• Escapes around formatting in my point #2: "I have been using the "" ...'
• I'd like to get SVG to work eventually as well -- it's an odd file format because it is essentially a text file that needs rendering on a canvas, so I would have to do much more myself than "load image file at path" and figure out how to not break the rest of the editor around that. So it'll take some more time to get around to this.
I took note of the file:// scheme thing. We already have fixes for percent encoded paths, so getting rid of the URL scheme prefix will work.
The Archive should now also reveal files in Finder without the full path and the file:// URL scheme prefix, by the way. Could you check that? As a quick fix for your 6 images, that might be an option. We'll still be releasing a fix for this, though, because in the long run, supporting absolute paths with the URL prefix is important for us under the credo of not locking users into our app.
Author at Zettelkasten.de • https://christiantietze.de/
• "The Archive should now also reveal files in Finder without the full path and the file:// URL scheme prefix"
This works, unless the path has a '%20' in it. In that case the preview works, the cursor does turn to a pointer, but it does nothing if you click on it. That confused me since I've used %20 for all spaces in paths.
If the path has a ' ' space then both preview and clicking on it work.
And also 'file:///' does not permit any ' ' spaces in it (which I would imagine is officially correct for the url path).
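In short, the behaviour I'm seeing (the paths here are just examples):

/Users/me/notes/img.png → preview works, click works
/Users/me/my notes/img.png → preview works, click works
/Users/me/my%20notes/img.png → preview works, click does nothing
file:///Users/me/notes/img.png → no preview (and the file:/// form doesn't allow spaces at all)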
• I see! When the file:// stuff works, the other quirks you mention should go away, too. Thanks for pointing these out!
Author at Zettelkasten.de • https://christiantietze.de/
|
# Triple integral over a sphere in rectangular coordinates
1. Apr 1, 2008
### Batmaniac
1. The problem statement, all variables and given/known data
Evaluate the following integral:
$$\iiint \,x\,y\,z\,dV$$
Where the boundaries are given by a sphere in the first octant with radius 2.
The question asks for this to be done using rectangular, spherical, and cylindrical coordinates.
I did this fairly easily in spherical and rectangular coordinates, except for the fact that I got two different answers and I can't figure out where I went wrong! That's not a problem though because I can fix that.
3. The attempt at a solution
How would I do this problem in rectangular coordinates? My integral would look like this:
$$\int_{0}^{{\sqrt{4-x^2-y^2}}}\int_{0}^{{\sqrt{4-x^2-z^2}}}\int_{0}^{{\sqrt{4-z^2-y^2}}}xyz\,dz\,dy\,dx$$
Which, without some clever transformations and an extremely messy Jacobian calculation, looks unsolvable.
2. Apr 1, 2008
### Mystic998
I think you need to rethink your bounds on that one...
3. Apr 1, 2008
### Batmaniac
How does this look then?
$$\int_{0}^{2}\int_{0}^{2}\int_{0}^{{\sqrt{4-z^2-y^2}}}xyz\,dz\,dy\,dx$$
4. Apr 1, 2008
### Batmaniac
Hmm, MATLAB tells me that's zero.
5. Apr 1, 2008
### Dick
Your dy limit should depend on x.
6. Apr 2, 2008
### HallsofIvy
Staff Emeritus
That would be over a square in the xy-plane rising up to the sphere.
Projecting the sphere into the xy-plane gives you the quarter circle $x^2 + y^2 = 4$, with $0\le x\le 2$, $0\le y\le 2$. You can let x go from 0 to 2 but then, for each x, y ranges from 0 to $\sqrt{4- x^2}$.
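Putting this together, the corrected integral in rectangular coordinates is

$$\int_{0}^{2}\int_{0}^{\sqrt{4-x^2}}\int_{0}^{\sqrt{4-x^2-y^2}}xyz\,dz\,dy\,dx$$

which, for reference, evaluates to 4/3 (the same value the spherical-coordinate calculation should give).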
|
# What is sqrt2( 5 - sqrt2) ?
$5\sqrt{2} - 2$
#### Explanation:
Given that
$\sqrt{2}\left(5 - \sqrt{2}\right)$
$= \sqrt{2}\cdot 5 - \sqrt{2}\cdot\sqrt{2}$
$= 5\sqrt{2} - 2$
Jul 26, 2018
$5 \sqrt{2} - 2$
We can distribute $\sqrt{2}$ to both of the terms in the parentheses. Recall that $\sqrt{2} \cdot \sqrt{2} = 2$. We now have
$5 \sqrt{2} - 2$
|
# Espressif Wireshark User Guide
## 1. Overview
### 1.1 What is Wireshark?
Wireshark (originally named “Ethereal”) is a network packet analyzer that captures network packets and displays the packet data in as much detail as possible. It uses WinPcap as its interface to directly capture network traffic going through a network interface controller (NIC).
You could think of a network packet analyzer as a measuring device used to examine what is going on inside a network cable, just like a voltmeter is used by an electrician to examine what is going on inside an electric cable.
In the past, such tools were either very expensive, proprietary, or both. However, with the advent of Wireshark, all that has changed.
Wireshark is released under the terms of the GNU General Public License, which means you can use the software and the source code free of charge. It also allows you to modify and customize the source code.
Wireshark is, perhaps, one of the best open source packet analyzers available today.
### 1.2 Some Intended Purposes
Here are some examples of how Wireshark is typically used:
• Network administrators use it to troubleshoot network problems.
• Network security engineers use it to examine security problems.
• Developers use it to debug protocol implementations.
Beside these examples, Wireshark can be used for many other purposes.
### 1.3 Features
The main features of Wireshark are as follows:
• Available for UNIX and Windows
• Captures live packet data from a network interface
• Displays packets along with detailed protocol information
• Opens/saves the captured packet data
• Imports/exports packets into a number of file formats, supported by other capture programs
• Searches for packets based on multiple criteria
• Colorizes packets according to display filters
• Calculates statistics
• … and a lot more!
### 1.4 Wireshark Can or Can’t Do
• Live capture from different network media.
Wireshark can capture traffic from different network media, including wireless LAN.
• Import files from many other capture programs.
Wireshark can import data from a large number of file formats, supported by other capture programs.
• Export files for many other capture programs.
Wireshark can export data into a large number of file formats, supported by other capture programs.
• Numerous protocol dissectors.
Wireshark can dissect, or decode, a large number of protocols.
• Wireshark is not an intrusion detection system.
It will not warn you if there are any suspicious activities on your network. However, if strange things happen, Wireshark might help you figure out what is really going on.
• Wireshark does not manipulate processes on the network, it can only perform “measurements” within it.
Wireshark does not send packets on the network or influence it in any other way, except for resolving names (converting numerical address values into a human readable format), but even that can be disabled.
## 2. Where to Get Wireshark
Wireshark can run on various operating systems. Please download the correct version according to the operating system you are using.
## 3. Step-by-step Guide
This demonstration uses Wireshark 2.2.6 on Linux.
a) Start Wireshark
On Linux, you can run the shell script provided below. It configures the NIC and the channel for packet capture, then starts Wireshark.

ifconfig $1 down
iwconfig $1 mode monitor
iwconfig $1 channel $2
ifconfig $1 up
Wireshark&

In the above script, the parameter $1 represents the NIC and $2 represents the channel. For example, in ./xxx.sh wlan0 6, wlan0 specifies the NIC for packet capture, and 6 identifies the channel of an AP or Soft-AP.
b) Run the Shell Script to Open Wireshark and Display Capture Interface
Wireshark Capture Interface
c) Select the Interface to Start Packet Capture
As the red markup shows in the picture above, many interfaces are available. The first one is a local NIC and the second one is a wireless NIC.
Please select the NIC according to your requirements. This document will use the wireless NIC to demonstrate packet capture.
Double click wlan0 to start packet capture.
d) Set up Filters
Since all packets in the channel will be captured, and many of them are not needed, you have to set up filters to get the packets that you need.
Please find the picture below with the red markup, indicating where the filters should be set up.
Setting up Filters in Wireshark
Click Filter, the top left blue button in the picture below. The display filter dialogue box will appear.
Display Filter Dialogue Box
Click the Expression button to bring up the Filter Expression dialogue box and set the filter according to your requirements.
Filter Expression Dialogue Box
The quickest way: enter the filters directly in the toolbar.
Filter Toolbar
Click on this area to enter or modify the filters. If you enter a wrong or unfinished filter, the built-in syntax check turns the background red. As soon as the correct expression is entered, the background becomes green.
The previously entered filters are automatically saved. You can access them anytime by opening the drop down list.
For example, as shown in the picture below, enter two MAC addresses as the filters and click Apply (the blue arrow). In this case, only the packet data transmitted between these two MAC addresses will be captured.
Example of MAC Addresses applied in the Filter Toolbar
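Such a filter can be typed directly into the toolbar. For example (the MAC addresses below are placeholders):

wlan.addr == 00:11:22:33:44:55 && wlan.addr == 66:77:88:99:aa:bb

This restricts the packet list to frames in which both of these stations appear as a source or destination address.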
e) Packet List
You can click any packet in the packet list and check the detailed information about it in the box below the list. For example, if you click the first packet, its details will appear in that box.
Example of Packet List Details
f) Stop/Start Packet Capture
As shown in the picture below, click the red button to stop capturing the current packet.
Stopping Packet Capture
Click the top left blue button to start or resume packet capture.
Starting or Resuming the Packets Capture
g) Save the Current Packet
On Linux, go to File -> Export Packet Dissections -> As Plain Text File to save the packet.
Saving Captured Packets
Please note that All packets, Displayed and All expanded must be selected.
By default, Wireshark saves the captured packet in a libpcap file. You can also save the file in other formats, e.g. txt, to analyze it in other tools.
|
# Interval scheduling problem with priorities
I have a problem that is similar to the interval scheduling algorithm but it involves priorities. My data sets consist of the following data:
• Cars with the start and end time of parking, along with one or more attributes (e.g. electric vehicle, motorcycle, handicapped).
• Parking spots along with zero or more attributes and lot number.
• Attributes with their priorities. For example if the property handicapped is given a value of 1, cars that have that attribute should be assigned a parking spot first. Attributes are hard constraints, the priorities of the attributes determine the order of assignment.
There is no overnight parking so I have divided the data into buckets of days. Start and end times are in increments of 5 minutes (not sure if this is important).
To be considered a valid assignment, a car's attributes must be a subset of the attributes for the assigned spot. See examples below.
### Objectives
This problem comes from overhauling an existing algorithm which, after observing how users interact with the system, could definitely use improvement.
My first step is to get something going that can produce one or more possible solutions that meet all of the provided attributes. For example, a limo cannot be assigned to a motorcycle spot. There may not be a complete solution given the inputs, if there are 5 electric vehicles but only 4 spots, the algorithm should still try to assign 4 of them (the 4 that have the highest priority).
Given multiple solutions, the "best" solution would minimize the number of open lots at any given time (ideally all the cars parked in the same lot). Even if it is a small block of time in the middle of the day, the lot can still be closed to minimize the cost of security guards.
### Example input/output
Set 1
• Attributes: [bus: -1; electric: -2; handicapped: -2]
• Cars: [C1: bus, electric; C2: handicapped, C3: electric]
• Spots: [P1: bus, electric; P2: bus, electric; P3: electric, handicapped]
• Valid assignments: [C1-P1, C2-P3, C3-P2] and [C1-P2, C2-P3, C3-P1]
Set 2
• Attributes: [bus: -1; electric: -2; handicapped: -2]
• Cars: [C1: bus, handicapped; C2: bus, C3: electric]
• Spots: [P1: bus, handicapped, electric; P2: bus, electric; P3: handicapped]
• Valid assignments: [C1-P1, C2-P2, C3-null]
Spot 1 is the only spot that can accommodate car 1. Both cars 2 and 3 can take spot 2 but priority is given to the bus, leaving car 3 unassigned.
Set 3
• Attributes: [bus: -1; electric: -2; handicapped: -2]
• Cars: [C1: bus, electric; C2: electric, C3: bus]
• Spots: [P1: bus, electric; P2: electric; P3: electric, handicapped]
• Valid assignments: [C1-P1, C2-P2, C3-null] or [C1-P1, C2-P3, C3-null]
There are two buses but only one bus parking spot. Since C1 has a greater priority sum, it is assigned to the available spot even though C3 could have taken it.
## Verifying a solution
1. For each assigned car A, if any, verify the assigned spot (P) that it has been assigned to has all of its attributes. In other words, Attributes(A) is a subset of Attributes(P).
2. For each unassigned car B, let X be the set of spots in the input data that meet the car's attribute criteria.
• If one or more spots in X is unassigned, abort these steps and mark the solution as invalid
• If one or more cars assigned to spots in X has a greater maximum priority than MaxPriority(B), abort these steps and mark the solution as invalid
• Let Z be the subset of cars assigned to spots in X where the maximum priority of the car = MaxPriority(B). If one or more cars in Z has a greater priority sum than SumPriority(B), abort these steps and mark the solution as invalid
## What I have tried
1. Find all the valid parking spots for each car.
2. Sort each parking spot list in ascending order of the sum of priorities for the parking spot.
3. Sort the list of cars in descending order of sum of priorities.
4. Attempt to assign each car in order of the sorted parking spots. If the spot is taken for that time then try the next one and so on.
I am hoping to make this more efficient by taking into account the interval for each car, as it currently isn't being taken into account when sorting.
I stumbled upon Google Optimization Tools and it looks similar to the nurse scheduling problem but with more constraints. A key difference is that each shift in the NSP is defined whereas the intervals in my problem can partially overlap.
## Questions
1. How can I model the problem?
2. Are tools like Google OR-Tools or pyschedule appropriate for solving this?
• FYI, tool recommendation questions are off-topic here. – D.W. Apr 2 '18 at 18:29
• @D.W. I have clarified the output. The attributes are hard constraints. The priorities determine which to assign first, provided that the attributes are met. The number of open lots is a "should", example: given 3 motorcycles and 3 lots with 3, 1, and 1 motorcycle spots respectively, the optimal solution would assign all 3 to the same lot instead of spreading them out. – rink.attendant.6 Apr 2 '18 at 21:12
• I don't understand what the hard constraints are, then. What we need is a criteria that, given a proposed assignment, lets us tell whether that assignment is valid. What are those criteria? I don't see them stated anywhere. (We need a criteria that is based solely on the contents of the assignment, not on how it was obtained. In other words, if the criteria is "first assign this, then assign that", that's not useful -- that might be part of the specification of an algorithm, but it's not a requirement that an algorithm must meet.) – D.W. Apr 2 '18 at 21:45
• @D.W. The attributes themselves are hard constraints. To determine whether an assignment is valid, the attributes of each assigned car must be a subset of the attributes of its assigned spot. To determine whether an assignment is valid in the case that not everything can be assigned, the car with the higher priority should be assigned. Should I provide some concrete example inputs/outputs? – rink.attendant.6 Apr 2 '18 at 21:55
• It would help if you could state that more clearly in the question. Are you saying that if there exists two cars C,C' such that car C is assigned to a spot, and C' isn't assigned any spot, and C' has higher priority, then the solution is invalid? What if there is no valid assignment where C' receives a parking spot? Is the original assignment still invalid? Also, is that the only kind of hard requirement? The explanation of attributes starts with "For example", hinting that there might be other hard requirements not explained. – D.W. Apr 2 '18 at 22:02
The problem is probably fairly hard. One approach is to formulate it as an instance of integer linear programming. Divide the time period into short time segments. Let $x_{i,j,t}$ be a zero-or-one variable, with the intended meaning that car $i$ is assigned to parking spot $j$ at time segment $t$. Also let $y_i$ be a zero-or-one variable, with the intended meaning that car $i$ is assigned a parking spot (somewhere). Then you can express each of your requirements as a set of linear inequalities on these $x$'s:
• If car $i$ needs a parking spot for the time window $t_0..t_1$, then add the constraint $\sum_j x_{i,j,t_0}=y_i$, to require that it is assigned exactly one slot if $y_i$ says it should be. Here the sum is over all parking spots $j$ that are compatible with car $i$ (given their attributes).
• Also, add the constraint $x_{i,j,t_0} = x_{i,j,t_0+1} = \cdots = x_{i,j,t_1}$ for all $j$, to indicate that if car $i$ is assigned to parking spot $j$, then it should be there for its entire time window.
• To take into account that you can't have two cars parking in the same spot at the same time, add the constraint $\sum_i x_{i,j,t} \le 1$ for all $j,t$.
• To take into account the priorities, if cars $i,i'$ are both vying for a parking spot at the same time $t$, and if $i$ has higher priority than $i'$ for parking spot $j$, then add the constraint $x_{i,j,t} \ge x_{i',j,t}$. (Add this for all times $t$ in the intersection of their time windows.)
Finally, maximize the objective function $\sum_i y_i$. This is a system of linear inequalities, with a linear objective function, so it can be solved using an off-the-shelf integer linear programming (ILP) solver.
Keep in mind that solving ILP can take exponential time in the worst case, so on large problems, it's possible that the ILP solver might take a very long time. However, the hope is that if your problem is not too large, then an ILP solver might be able to find a good solution in a reasonable amount of time.
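For what it's worth, here is a rough sketch of this formulation using Google OR-Tools (which the question mentions). All of the data below is invented for illustration, and the priority constraints are omitted for brevity:

```python
# Sketch of the ILP above with OR-Tools' MIP wrapper; toy data only.
from ortools.linear_solver import pywraplp

cars = {0: (0, 3), 1: (2, 5), 2: (0, 5)}   # car -> (t0, t1) time window
compat = {0: [0, 1], 1: [0], 2: [1]}       # car -> compatible spots
spots = [0, 1]
T = 6                                      # number of time segments

solver = pywraplp.Solver.CreateSolver("CBC")

x = {}  # x[i, j, t] = 1 iff car i occupies spot j during segment t
for i, (t0, t1) in cars.items():
    for j in compat[i]:
        for t in range(t0, t1 + 1):
            x[i, j, t] = solver.BoolVar(f"x_{i}_{j}_{t}")
y = {i: solver.BoolVar(f"y_{i}") for i in cars}  # y[i] = 1 iff car i is assigned

for i, (t0, t1) in cars.items():
    # car i gets exactly one compatible spot at t0 iff y[i] = 1
    solver.Add(solver.Sum([x[i, j, t0] for j in compat[i]]) == y[i])
    # and it keeps that spot for its whole time window
    for j in compat[i]:
        for t in range(t0, t1):
            solver.Add(x[i, j, t] == x[i, j, t + 1])

# at most one car per spot per time segment
for j in spots:
    for t in range(T):
        occ = [x[i, j, t] for i in cars if (i, j, t) in x]
        if occ:
            solver.Add(solver.Sum(occ) <= 1)

solver.Maximize(solver.Sum(list(y.values())))
if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print("assigned:", [i for i in cars if y[i].solution_value() > 0.5])
```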
|
# Alejandro Bravo-Doddoli
Circuito Exterior S/N, Ciudad Universitaria, Mexico City, 04510, Mexico
Depto. de Matemáticas, Facultad de Ciencias, UNAM
## Publications:
Bravo-Doddoli A., García-Naranjo L., The Dynamics of an Articulated $n$-trailer Vehicle, Regular and Chaotic Dynamics, 2015, vol. 20, no. 5, pp. 497-517

Abstract: We derive the reduced equations of motion for an articulated $n$-trailer vehicle that moves under its own inertia on the plane. We show that the energy level surfaces in the reduced space are $(n + 1)$-tori and we classify the equilibria within them, determining their stability. A thorough description of the dynamics is given in the case $n = 1$.

Keywords: dynamics, nonholonomic constraints, $n$-trailer vehicle

DOI: 10.1134/S1560354715050019
|
# Every almost-Lebesgue measurable set is Lebesgue measurable.
The following problem is from exercise 8 of Tao's introductory measure theory book.
$\textbf{Prove:}$
If for all $\epsilon > 0$ one can find a Lebesgue measurable set $E_{\epsilon}$ such that $m^*(E_{\epsilon} \Delta E) \leq \epsilon$, then $E$ itself must be Lebesgue measurable.
The hint that the book gives is: use the $\epsilon/2^n$ trick to show that $E \subset E_{\epsilon}'$ where $E_{\epsilon}'$ is measurable and $m^*(E_{\epsilon}' \Delta E) \leq \epsilon$; then I should take countable intersections to show that $E$ differs from a Lebesgue measurable set by a null set.
The follow Lemma 10 will probably be useful:
(i) Every open set is Lebesgue measurable.
(ii) Every closed set is Lebesgue measurable.
(iii) Every set of Lebesgue outer measure zero is measurable. (Such sets are called null sets.)
(iv) The empty set is Lebesgue measurable.
(v) If $E \subset {\bf R}^d$ is Lebesgue measurable, then so is its complement ${{\bf R}^d \backslash E}$.
(vi) If ${E_1, E_2, E_3, \ldots \subset {\bf R}^d}$ are a sequence of Lebesgue measurable sets, then the union ${\bigcup_{n=1}^\infty E_n}$ is Lebesgue measurable.
(vii) If ${E_1, E_2, E_3, \ldots \subset {\bf R}^d}$ are a sequence of Lebesgue measurable sets, then the intersection ${\bigcap_{n=1}^\infty E_n}$ is Lebesgue measurable.
I am not sure at all how to follow the hint. Specifically I have been unable to come up with an $E_{\epsilon}'$ which satisfies the properties that I want. I find it very hard to work with $m^*(A \Delta B)$ in general. Does anyone have any tips how to construct $E_{\epsilon}'$? My guess is that we use the fact that $E_{\epsilon}$ is Lebesgue measurable in some way to approximate it from the outside, perhaps by an open set which contains $E$?
Let $\epsilon>0$ be arbitrary. For each $n\in\Bbb N$, there is a measurable set $E_n$ such that $m^*(E_n\Delta E) \le \epsilon/2^n$. We claim that after neglecting a set of measure $0$, in fact $E\subset\bigcup_{n=1}^\infty E_n$. That is, that the set $E-\bigcup E_n$ has measure $0$. Indeed, for every $N\in\Bbb N$,
\begin{align*} m^*(E-\bigcup E_n) &\le m^*(E\cap E_N^c) \\ &\le m^*(E\Delta E_N) \le \epsilon/2^N. \end{align*}
Since this holds for every $N$, our claim is proved. Let $\tilde E = \bigcup E_n-E$. Then $\tilde E \subset \bigcup E_n$ and $m^*(\bigcup E_n\Delta \tilde E) = m^*(\bigcup E_n- \tilde E) \le \sum \epsilon/2^n = \epsilon$. Put $E_\epsilon' = \bigcup E_n$. By outer regularity, $\tilde E$ differs from a measurable set by a null set (pick a sequence $E_{\epsilon}'\searrow \tilde E$ as $\epsilon\to 0$), and is thereby measurable. Since $E$ is measurable if and only if $\tilde E$ is measurable, $E$ is also measurable, and the claim is proved.
This is a problem in Folland stated in terms of premeasures and associated outer measures (defined by taking the infimum over premeasures of countable covers). The construction of the Lebesgue measure from lengths of intervals will be a special case. I have to sleep in the near future so I'll sketch what I did when I did this for homework, and if you need further clarification, I'll come back to this thread tomorrow.
The first step is to show that if $E \subset X$, $\mathscr{A}$ is an algebra on $X$ and $\epsilon > 0$ is given, there is an $A \in \mathscr{A}_\sigma$ such that $E \subset A$ and $\mu^*(A) \leq \mu^*(E) + \epsilon$. Since the outer measure is the infimum, there is a countable cover of $E$ such that $\sum \mu_0(A_i) \leq \mu^*(E) + \epsilon$. If you take $A$ to be the union over this sequence, this is in $\mathscr{A}_\sigma$, contains $E$ by construction, and since the elements $A_i$ need not be disjoint, and premeasurable sets are outer measurable, it follows that $\mu^*(A) \leq \sum \mu_0(A_i) \leq \mu^*(E) + \epsilon$.
This doesn't quite finish the argument, but it's an important first step. Since $\epsilon$ is arbitrary, we can find $B_n \in \mathscr{A}_\sigma$ so that $\mu^*(B_n) \leq \mu^*(E) + 1/n$ by the above argument. Now define $B = \cap_n B_n$. $E$ is contained in each $B_i$ and so is contained in $B$. You can do so some set algebra to prove that $\mu^*(B- E) \leq 1/n$ for every $n$, and this shows that in fact the measure is $0$.
This finishes it since in $E \subset B$ implies that $E \Delta B = B - E$, and so this is what you want to prove.
Hope this helps!
• I have been following Tao's book closely so I am unfamiliar with some of the notions you have used here (I don't know what an algebra is). Sep 20, 2017 at 3:31
HINT:
Let $E_{\epsilon}$ a set that approximates $E$. For every $A$ subset we have $$A \cap E \subset (A \cap E_{\epsilon} ) \cup(E \backslash E_{\epsilon})\\ A \backslash E \subset (A\backslash E_{\epsilon}) \cup (E_{\epsilon} \backslash E)$$
We get for the outer measure $$\mu^{\star}(A\cap E)+ \mu^{\star}(A\backslash E) \le \mu^{\star}(A\cap E_{\epsilon}) + \mu^{\star}(A\backslash E_{\epsilon}) +\mu^{\star}(E\backslash E_{\epsilon}) + \mu^{\star}(E_{\epsilon} \backslash E)$$
If $E_{\epsilon}$ is measurable then we have $\mu^{\star}(A\cap E_{\epsilon}) + \mu^{\star}(A\backslash E_{\epsilon}) = \mu^{\star}(A)$. Also, we have $\mu^{\star}(E\backslash E_{\epsilon}) + \mu^{\star}(E_{\epsilon} \backslash E)\le 2 \mu^{\star}(E\Delta E_{\epsilon})\le 2 \epsilon$.
Since $\epsilon>0$ is arbitrary we get $$\mu^{\star}(A\cap E)+ \mu^{\star}(A\backslash E) \le \mu^{\star}(A)$$ and since the opposite inequality holds, we get equality for all $A$. Hence $E$ is measurable.
|
Someone asked whether it was possible to fit a mixed model in lme4 with box constraints on the fixed-effect parameters. It is, although (1) it requires a little bit of extra hacking (see below) and (2) it works most easily for generalized (non-Gaussian) linear MMs rather than LMMs (but see the second section).
library("lme4")
library("ggplot2"); theme_set(theme_bw())
Fit the basic (unconstrained) model:
gm0 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
data = cbpp, family = binomial)
(beta0 <- fixef(gm0))
## (Intercept) period2 period3 period4
## -1.398343 -0.991925 -1.128216 -1.579745
Start following the sequence laid out in ?modular:
glmod <- glFormula(cbind(incidence, size - incidence) ~ period + (1 | herd),
data = cbpp, family = binomial)
devfun <- do.call(mkGlmerDevfun, glmod)
glmer uses a two-stage optimization process. During the first stage (nAGQ=0) we can’t constrain the fixed effect parameters (they’re profiled out of the optimization), so we just run optimizeGlmer in the usual way from ?modular:
opt1 <- optimizeGlmer(devfun)
Now set up the next optimization:
theta.lwr <- environment(devfun)$lower ## this changes after update!
devfun <- updateGlmerDevfun(devfun, glmod$reTrms)
rho <- environment(devfun)
nbeta <- ncol(rho$pp$X)
theta <- rho$pp$theta
Now do the second-stage optimization, by hand rather than using optimizeGlmer(...,stage=2) as in ?modular. The code below is actually a little bit simpler than the guts of optimizeGlmer, because it’s not handling as wide a range of cases (e.g. user-specified starting values, additional control parameters, …). Instead of the usual constraints (theta parameters bounded below by rho$lower, which corresponds to 0 for diagonal and -Inf for off-diagonal elements, fixed-effect parameters unbounded), we constrain the fixed-effects parameters other than the intercept to [-1,1].

opt2 <- nloptwrap(par=c(rho$pp$theta, rep(0, nbeta)),
                  fn=devfun,
                  lower=c(theta.lwr, -Inf, rep(-1, nbeta-1)),
                  upper=c(rep(Inf, length(theta)), Inf, rep(1, nbeta-1)))
opt2$par
## [1] 0.6682799 -1.5037866 -0.8798190 -1.0000000 -1.0000000
opt2$fval
## [1] 186.114
## compare with -2*log(L) of original model
-2*c(logLik(gm0))
## [1] 184.0531

Now we have the estimated parameters and log-likelihood, but it’s a little bit prettier if we put the results into a merMod object. (The only place we take a shortcut below is in the mc argument, for which we’ll just substitute the matched call from the original model. This will only be a problem if we try to use update() on the fitted model …)

(gm1 <- mkMerMod(rho, opt2, glmod$reTrms, glmod$fr, mc=gm0@call))
## Generalized linear mixed model fit by maximum likelihood (Laplace
##   Approximation) [glmerMod]
##  Family: binomial  ( logit )
## Formula: cbind(incidence, size - incidence) ~ period + (1 | herd)
##    Data: cbpp
##      AIC      BIC   logLik deviance df.resid
## 196.1140 206.2408 -93.0570 186.1140       51
## Random effects:
##  Groups Name        Std.Dev.
##  herd   (Intercept) 0.6683
## Number of obs: 56, groups:  herd, 15
## Fixed Effects:
## (Intercept)      period2      period3      period4
##     -1.5038      -0.8798      -1.0000      -1.0000

Alternatively, it is (at least in principle) possible to skip the nAGQ=0 stage entirely and just fit the final model; this might be a good idea if the reason for constraining the fixed-effect parameters was to keep them away from a bad region.

The stuff below here is not really working properly. You probably shouldn’t even bother reading on unless you’re interested in the guts of lme4.

## For linear mixed models

This is harder for LMMs because by default lmer profiles out the fixed effect parameters, so we have to work a little bit harder to run a GLMM with a Gaussian response. All of the latter part of the machinery stays the same, so we’ll package it into a function:

glmmConstr <- function(devfun, mod, mc, beta.lwr, beta.upr, debug=FALSE) {
    opt0 <- minqa::bobyqa(fn=devfun, par=1, lower=0, upper=Inf)
    if (debug) cat("initial theta: ", opt0$par, "\n")
    opt1 <- optimizeGlmer(devfun)
    if (debug) cat("next theta: ", opt1$par, "\n")
    rho <- environment(devfun)
    nbeta <- ncol(rho$pp$X)
    theta <- rho$pp$theta
    theta.lwr <- rho$lower  ## this changes after update!
    devfun <- updateGlmerDevfun(devfun, mod$reTrms)
    opt2 <- nloptwrap(par=c(theta, rep(0, nbeta)),
                      fn=devfun,
                      lower=c(theta.lwr, beta.lwr),
                      upper=c(rep(Inf, length(theta.lwr)), beta.upr))
    if (debug) cat("final theta: ", opt2$par, "\n")
    mkMerMod(rho, opt2, mod$reTrms, mod$fr, mc=mc)
}
Note that we also have to be careful because we’re working with reference class objects, so it’s quite possible to mess up an object by modifying it or a copy of it, even within a function …
Fit a basic lmer example and set up a deviance function (slightly trickier)
fm0 <- lmer(Reaction~Days+(1|Subject),sleepstudy,REML=FALSE)
fmod <- glFormula(Reaction~Days+(1|Subject),
data=sleepstudy,family=gaussian,REML=FALSE)
## note we need family=gaussian() here -- an actual family object,
## not a string ("gaussian") or a family function (gaussian)
ldevfun <- do.call(mkGlmerDevfun, c(fmod, list(family=gaussian())))
Fit the unconstrained model:
fm1 <- glmmConstr(ldevfun,fmod,getCall(fm0),rep(-Inf,2),rep(Inf,2),
debug=TRUE)
## initial theta: 37.30982
## next theta: 37.30983
## final theta: 37.2637 250.2542 10.50123
fm0
## Linear mixed model fit by maximum likelihood ['lmerMod']
## Formula: Reaction ~ Days + (1 | Subject)
## Data: sleepstudy
## AIC BIC logLik deviance df.resid
## 1802.0786 1814.8505 -897.0393 1794.0786 176
## Random effects:
## Groups Name Std.Dev.
## Subject (Intercept) 36.01
## Residual 30.90
## Number of obs: 180, groups: Subject, 18
## Fixed Effects:
## (Intercept) Days
## 251.41 10.47
fm1
## Generalized linear mixed model fit by maximum likelihood (Laplace
## Approximation) [glmerMod]
## Family: gaussian ( identity )
## Formula: Reaction ~ Days + (1 | Subject)
## Data: sleepstudy
## AIC BIC logLik deviance df.resid
## 1926.6293 1939.4011 -959.3147 1918.6293 176
## Random effects:
## Groups Name Std.Dev.
## Subject (Intercept) 1092.27
## Residual 29.31
## Number of obs: 180, groups: Subject, 18
## Fixed Effects:
## (Intercept) Days
## 250.3 10.5
Estimated fixed effects, std. dev. from lmer:
c(fixef(fm0),attr(VarCorr(fm0)[[1]],"stddev"))
## (Intercept) Days (Intercept)
## 251.40510 10.46729 36.01208
from hacked glmer:
c(fixef(fm1),getME(fm1,"theta"))
## (Intercept) Days Subject.(Intercept)
## 250.25415 10.50123 37.26370
• The results seem to be very sensitive to the order in which things are run – there’s something funny going on in the guts of the reference classes, can’t figure it out right now …
• the reporting of the variance-covariance matrix is confused
• the estimates of the among-subjects standard deviations are similar but not identical (see above)
ldevfun <- do.call(mkGlmerDevfun, c(fmod, list(family=gaussian())))
tvec <- seq(0,2,length.out=51)
lvec <- sapply(tvec,ldevfun)
qplot(tvec,lvec,geom=c("line","point"))
Hmmm. This seems upside-down, and has its peak in the wrong place (max $$\approx 0.3$$ rather than getME(fm1,"theta")?)
|
# Tag Info
19
A BFF Word is one where: Namely: Furthermore, the non-BFF Words exhibit a different but related pattern - namely that:
14
We have (I am not sure I have correctly identified the room but it seems somewhat plausible). Here So a doctor might provide ...
14
13
Okay, so as I understand it So I think the patty, cheese and special bun are and so our burgers are as follows
13
Substantial and Insubstantial words are Thus: Credit where due: Stiv
12
The answer is The individual answers, starting from top left and moving right, are Then use A-Z = 0-25.
11
I think that the patty cheese and lettuce are and the special bun is Burgers
11
I believe you can complete the sequence with: Why? The title refers to:
10
Note 1: Number 3 was solved by hagfy in the comments. Note 2: Where two or more digits are equal in any ordering below, the digits are sorted numerically. 1. 2. 3. 4. 5.
10
Today's Vozzellbaugor Surprise Fries comes with: Hang on, wait - there seems to be some kind of raid going on?! Oh gosh, it's those Vowelburger guys from up the street! I'm going to make myself scarce...
10
8
The Vowelburger™ Side Dishes act as an alternative to the usual letter-based diet, being made entirely of: In particular, the top and bottom layers of these dishes are: Producing: Make sure you take advantage of our introductory offer - each side dish is just $1.97!

8

8

8

I solved the problem via integer linear programming. Here are optimal values for $n\times n$ grids with $n \le 10$:

$$\begin{matrix} n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \text{maximum} & 1 & 3 & 8 & 12 & 19 & 25 & 37 & 45 & 59 & 71 \\ \end{matrix}$$

By the way, you lifted your ...
5
5
COMPLETE SOLUTION: Answers provided for all 12 clues, including 4 found by @Randal'Thor... These give us the letters: This letter sequence: Translated, this reveals:
5
The missing numbers are Reasoning So
4
I don't get the pattern but I have some words which could fit the given clues. They don't really follow vowelburger rules other than maintaining the first and last letter. Not sure whether that is allowed considering they are side dishes and not burgers. I mainly just wanna keep this question alive so I can find out the actual answer. :P Short Delete ...
4
Solving the number puzzles: See Jens's answer - I didn't manage to get this one. See Jens's answer - I didn't manage to get this one. Turning the numbers into letters: The final step, motivated by the appearance of is to yielding the solution.
4
I think the letters are all Based off your hint: Going off your title: Using this idea for DBE should mean the ??? is:
4
The buns for your Spite Bite burger are: making The extra ingredient
4
Going through the sixteen clues (four or five I still couldn't solve completely): Note that most of these are actually number puzzles in disguise :-) Overall: So I should say:
Only top voted, non community-wiki answers of a minimum length are eligible
|
# Revision history
### Detection of table tennis balls and color correction
Hello
I am making a robot project and I am trying to detect the table tennis balls in pictures (from a webcam) like this.
I have tried different smoothing functions and tried a lot different numbers in the parameters to the functions, but the image (pre-processed) that gives best results look like this.
The camera stand foot got detected as a circle but I removed that result, and 4 balls are not found. These are the balls I find at the end.
This is my code:
Mat src = Highgui.imread("Picture 10.jpg", 1);
Mat src_gray = new Mat(), smooth = new Mat(), circles = new Mat();
List<Ball> list = new ArrayList<Ball>(); // Ball is this project's own holder class
Mat srcH = new Mat();
src.convertTo(srcH, -1, 0.7, 0);
Highgui.imwrite("contrast.jpg", srcH);
Imgproc.cvtColor(srcH, src_gray, Imgproc.COLOR_BGR2GRAY);
Imgproc.equalizeHist(src_gray, src_gray);
Highgui.imwrite("outgray.jpg", src_gray);
Imgproc.GaussianBlur(src_gray, smooth, new Size(11,11),4, 4);
Highgui.imwrite("blur.jpg", smooth);
Imgproc.HoughCircles(smooth, circles, Imgproc.CV_HOUGH_GRADIENT, 2, 20, 81, 29, 10, 13);
System.out.println("Found "+circles.cols() + " circles.");
for (int i = 0; i < circles.cols(); i++) {
double[] circle = circles.get(0,i);
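// keep only circles whose centre pixel is strongly red/orange (BGR channel 2 > 140)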
if (src.get((int)circle[1], (int)circle[0])[2]>140){
list.add(new Ball((int)circle[0],(int)circle[1]));
Point center = new Point((int)circle[0], (int)circle[1]);
int radius = (int) circle[2];
// circle center
Core.circle( src, center, 3, new Scalar(0,255,0), -1, 8, 0 );
// circle outline
Core.circle( src, center, radius, new Scalar(0,0,255), 3, 8, 0 );
}
}
Do you guys have any ideas about why the last 4 balls are not detected and how the pre-processing can be improved?
I am also having trouble detecting the colored circles on the robot. Sometimes it works, sometimes it doesn't; I think the sunlight affects the detection. I found this color balance technique, which is implemented in Matlab (I think), and I have no idea how I would translate it to OpenCV. Any advice on how to translate that would also be appreciated.
|
# If the product $\frac{3}{2}\cdot\frac{4}{3}\cdot\frac{5}{4}\cdot\frac{6}{5}\cdot\ldots\cdot\frac{a}{b}=9$, what is the sum of a and b?
If the product $$\frac{3}{2}\cdot\frac{4}{3}\cdot\frac{5}{4}\cdot\frac{6}{5}\cdot\ldots\cdot\frac{a}{b}=9$$, what is the sum of a and b?
Jun 24, 2022
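For reference, the product telescopes: each numerator cancels the following denominator, so assuming the pattern continues with fractions of the form $\frac{k+1}{k}$ (so that $b = a-1$), the whole product collapses to $\frac{a}{2}$. Setting $\frac{a}{2}=9$ gives $a=18$, $b=17$, and $a+b=35$.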
|
hal-00419939, version 1
## Hilden Braid Groups
Paolo Bellingeri () 1, Cattabriga Alessia () 2
(2009-09-25)
Abstract: Let $\mathbb{H}_g$ be a genus $g$ handlebody and $\mathrm{MCG}_{2n}(\mathbb{T}_g)$ be the $2n$-punctured mapping class group of $\mathbb{T}_g=\partial\mathbb{H}_g$. In this paper we study two particular subgroups of $\mathrm{MCG}_{2n}(\mathbb{T}_g)$ which generalize Hilden groups. Just as Hilden groups are related to plate closures of braids, these generalizations are related to Heegaard splittings of manifolds and to bridge decompositions of links. Connections between these subgroups and motion groups of links in closed 3-manifolds are also provided.
• 1: Laboratoire de Mathématiques Nicolas Oresme (LMNO)
• CNRS : UMR6139 – Université de Caen Basse-Normandie
• 2: Università di Bologna (UNIBO)
• Università degli studi di Bologna
• Domain : Mathematics/Algebraic Topology
Mathematics/Geometric Topology
• Keywords : mapping class groups – extending homeomorphisms – handlebodies
• hal-00419939, version 1
• oai:hal.archives-ouvertes.fr:hal-00419939
• Submitted on: Friday, 25 September 2009 21:20:11
• Updated on: Sunday, 4 October 2009 18:21:01
|
# Proportion word problems
Practice setting up and solving proportions to solve word problems.
### Problem
Vito uses 9 liters of water to water 24 flower pots. He is wondering how many liters of water (w) it would take to water 40 flower pots. He assumes he'll use the same amount of water on each pot.
Which proportion could Vito use to model this situation?
Solve the proportion to determine how many liters of water it takes to water 40 flower pots.
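For instance, keeping liters over pots on both sides gives the proportion $\frac{9}{24}=\frac{w}{40}$, so $w=\frac{9\times 40}{24}=15$ liters.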
|
# EU statistics on income and living conditions (EU-SILC) methodology - monetary poverty
This article is part of a set of articles describing the methodology applied for the computation of the statistical indicators pertinent to the subject area of Monetary poverty (ilc_li) within the overall domain of Income and living conditions. For these indicators, the article provides a methodological and practical framework of reference. The indicators relevant to the subject area of monetary poverty are the following:
• At-risk-of-poverty thresholds
• At-risk-of-poverty rate
• At-risk-of poverty rate before social transfers (pensions included in social transfers)
• At-risk-of poverty rate before social transfers (pensions excluded from social transfers)
• Relative at-risk-of poverty gap
• Persistent at-risk-of poverty rate
• At-risk-of poverty rate after deducting housing costs
• Distribution of population by number of years spent in poverty within a four-year period
• At-risk-of-poverty rate anchored at a fixed moment in time
• At-risk-of-poverty rate for children by citizenship of their parents (population aged 0 to 17 years)
• At-risk-of-poverty rate for children by country of birth of their parents (population aged 0 to 17 years)
Moreover, since the indicators are of multidimensional structure and can be analysed simultaneously along several dimensions, the separate datasets providing these indicators along with the different combinations of dimensions are also presented.
### Description
• Seven different at-risk-of-poverty thresholds are calculated as follows: (a) 40% of the national median equivalised disposable income (ARPT40), (b) 50% of the national median equivalised disposable income (ARPT50), (c) 60% of the national median equivalised disposable income (ARPT60), (d) 70% of the national median equivalised disposable income (ARPT70), (e) 40% of the national mean equivalised disposable income (ARPTM40), (f) 50% of the national mean equivalised disposable income (ARPTM50), (g) 60% of the national mean equivalised disposable income (ARPTM60).
• The at-risk-of-poverty rate before social transfers (pensions excluded from social transfers) refers to the percentage of persons in the total population who are at-risk-of-poverty based on the equivalised disposable income before all social transfers excluding pensions (EQ_INC22), i.e. with an equivalised disposable income before all social transfers below the ‘at-risk-of-poverty thresholds’ calculated after social transfers.
• The at-risk-of-poverty rate after deducting housing costs refers to the percentage of persons in the total population who are at-risk-of-poverty with the housing costs being deducted, i.e. with an equivalised disposable income without total housing cost below the at-risk-of-poverty threshold calculated in the standard way (ARPT60).
• The distribution of the population by the number of years spent in poverty within a four-year period depicts the percentage of persons who are at-risk-of-poverty broken down by the number of years spent in poverty within a four-year period.
• For a given year (T), the at-risk-of-poverty rate anchored at a fixed moment in time is defined as the percentage of persons in the total population who are at-risk-of-poverty anchored at a fixed moment in time (at-risk-of-poverty threshold calculated in the standard way for the base year) and adjusted for inflation.
• the at-risk-of-poverty rate for children by citizenship of their parents (population aged 0 to 17 years) is defined as the percentage of persons (population aged 0 to 17 years) in the total population and in the relevant citizenship of their parents breakdown who are at-risk-of-poverty.
• the at-risk-of-poverty rate for children by country of birth of their parents (population aged 0 to 17 years) is defined as the percentage of persons (population aged 0 to 17 years) in the total population and in the relevant country of birth of their parents breakdown who are at-risk-of-poverty.
### Statistical population
The statistical population consists of all persons living in private households. Persons living in collective households and in institutions are generally excluded from the target population.
However, the at-risk-of-poverty rate covers different subsets of the population when presented along with different dimensions. More specifically, it covers the population aged 18 and over when broken down in the following dimensions: level of education, the broad group of citizenship and broad group of the country of birth. The population aged 0 to 59 is covered when the indicator is broken down by work intensity of the household. Additionally, when calculated for children (i.e. children at risk of poverty or social exclusion), it refers to the population aged 0 to 17 living in private households. Specifically, when broken down by most frequent activity in the previous year, it refers to the population aged 16 and over living in private households, excluding those with less than 7 months declared in the calendar of activities.
For the computation of the at-risk-of-poverty thresholds, all persons living in the specific types of private households are included. More specifically, all thresholds are calculated for two illustrative household types: (a) Single person household and (b) household with 2 adults, two dependent children under 14 years.
The persistent at-risk-of-poverty rate covers all the persons who have been living for four years in private households and who have been on the panel for all the four relevant years. 'Not current household members', i.e. persons for which RB110>4, are excluded.
The at-risk-of-poverty rate anchored at a fixed point in time refers to all persons who have been living in private households for the current year (T). For the calculation of the at-risk-of-poverty threshold in the base year (2008) the population consists of the persons who lived in private households during the base year (2008), whereas for the calculation of the at-risk-of-poverty threshold in the base year (2005), the population consists of the persons that lived in private households during the base year (2005).
The reference population for the distribution of population by the number of years spent in poverty within a four-year period comprises all persons living in private households for the duration of the 4 last years.
In any case, people with missing values for equivalised income and for any of the different dimensions that the indicators are presented, are excluded from calculations.
### Reference period
All indicators are collected and disseminated on an annual basis and refer to the survey year. The indicators distribution of population by the number of years spent in poverty within a four-year period and the persistent at-risk-of-poverty rate cover a longer period: 4 years.
The reference period for all dimensions along with which the indicators are disseminated is the survey year, except for age, income, activity status, household type and work intensity of the household. As far as age is concerned, it refers to the age of the respondent at the end of the income reference period, based on which the household type is also derived. For income, the income reference period is a fixed 12-month period (such as the previous calendar or tax year) for all countries except the United Kingdom, for which the income reference period is the current year, and Ireland, for which the survey is continuous and income is collected for the last twelve months. For activity status, the reference year is the year previous to the survey year, while work intensity of the household refers to the number of months that all working-age household members have been working during the income reference year. Additionally, the household type refers to the household type of the respondent at the end of the income reference period.
### Unit of measurement
The at-risk-of-poverty thresholds are expressed in Euro (from 1.1.1999)/ECU (up to 31.12.1998), in national currency (including ‘euro fixed’ series for euro area countries) or Purchasing Power Standards (PPS).
The number of persons at risk of poverty (in thousands of persons) is provided. The at-risk-of-poverty rate is also made available as a percentage.
The at-risk-of-poverty rate before social transfers (pensions included in social transfers), the at-risk-of-poverty rate before social transfers (pensions excluded from social transfers), the relative at-risk-of-poverty gap, the persistent at risk of poverty rate, the at-risk-of-poverty rate (after deducting housing costs), the at-risk-of-poverty rate anchored at a fixed point in time and the distribution of population by number of years spent in poverty within a four-year period are given as a percentage.
### Dimensions
The separate datasets provide each indicator along with the Geopolitical entity and time dimensions and the dimensions presented below.
The at-risk-of-poverty thresholds are presented broken down by household type (only for single person household and households with 2 adults with two dependent children under 14 years).
The at-risk-of-poverty rate is presented along with the following dimensions:
The at-risk-of-poverty rate before social transfers (pensions excluded from social transfers) is presented along with the dimensions:
• poverty threshold, age group and sex
• household type
The relative at risk of poverty gap is given broken down by age group, sex and poverty threshold.
The persistent at-risk-of-poverty rate is disseminated along with the following dimensions:
• age group and sex
• household type
• sex and education level (ISCED)
The at-risk-of-poverty rate anchored at a fixed moment in time is given broken down by age group and sex.
The at-risk-of-poverty rate after deducting housing costs is disseminated along with the following dimensions:
• age group and sex
• degree of urbanisation (DEGURBA)
## Calculation method
1. At-risk-of-poverty thresholds:
At-risk-of-poverty thresholds (ARPTXX and ARPTMXX) broken down by household type are calculated as the percentage of the Median Equivalised disposable Income after social transfers (MEDIAN20) and the Mean Equivalised disposable Income after social transfers (MEAN20) respectively.
$ARPTXX_{at\_HHTYP}=XX\%\times\;EQ\_INC_{median/at\_HHTYP}$
$ARPTMXX_{at\_HHTYP}=XX\%\times\;EQ\_INC_{mean/at\_HHTYP}$
The XX percentage takes the values 40, 50, 60, 70, depending on the threshold we want to calculate. The equivalised disposable income (EQ_INC) can be expressed in National Currency (EQ_INC20nac), Euros (EQ_INC20eur) or Purchasing Power Standards (EQ_INC20ppp).
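As a concrete illustration of the threshold computation (a minimal Scala sketch only, not the official SAS routines listed further below; the Person record and its fields stand in for the EU-SILC variables EQ_INC20 and RB050a):

// One record per person: equivalised disposable income and cross-sectional weight.
case class Person(eqInc: Double, weight: Double)

// Weighted median: the income at which the cumulative weight first reaches
// half of the total weight.
def weightedMedian(people: Seq[Person]): Double = {
  val sorted = people.sortBy(_.eqInc)
  val half = sorted.map(_.weight).sum / 2
  var cum = 0.0
  sorted.find { p => cum += p.weight; cum >= half }.get.eqInc
}

// ARPT60: 60% of the weighted median equivalised disposable income.
def arpt60(people: Seq[Person]): Double = 0.60 * weightedMedian(people)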
With regard to the calculation of the at-risk-of-poverty thresholds, the following methodological issues should be taken into consideration:
• When comparing the value of thresholds in different Member States, the thresholds are converted to Purchasing Power Standards (PPS). These convert different national currencies to a single currency, whilst controlling for differences in price levels between countries.
• The choice of the percentage of the median has major consequences with regard to the level of the poverty risk rate. On the one hand, there are normative and political considerations with regard to the level of the poverty threshold. On the other hand, there are methodological issues. For instance, the choice of a lower percentage might result in a poverty threshold that is very close to the bottom of the distribution, hence more subject to problems of reliability.
• For each country, the poverty risk indicator must be assessed by looking at both the number of people whose income is below the threshold and the comparative level (in PPS) of this threshold.
• More in general, by comparing the results obtained with different thresholds within one Member state, one can assess the robustness of conclusions based on the 60% threshold.
2. At-risk-of-poverty rate:
At-risk-of-poverty rate (ARPT) broken down by each combination of dimensions (k) $(ARPT_{at\_k})$ is calculated as the percentage of people (or thousands of people) in each k who are at-risk-of-poverty (calculated for different cut-off points) over the total population in that k.
The weight variable used is the Adjusted Cross Sectional Weight (RB050a).
$ARPT_{at\_k}=\frac{\sum\limits_{i=j\_at\_k}\;RB050a_i}{\sum\limits_{i\_at\_k}\; RB050a_i}\times 100$
$ARPT_{at\_k}=\frac{\sum\limits_{i=j\_at\_k}\;RB050a_i}{1000}$
where j denotes the population, or subset of the population, who is at risk of poverty. At-risk-of-poverty thresholds (ARPTXX) can be any of the following: ARPT40, ARPT50, ARPT60, ARPT70, ARPTM40, ARPTM50, ARPTM60.
The Personal Cross-Sectional Weight (PB040) is used for the calculation of the indicator along with the following combinations of dimensions: (a) age group, sex and activity status and (b) age group, sex and education level.
$ARPT_{at\_k}=\frac{\sum\limits_{i=j\_at\_k}\;PB040_i}{\sum\limits_{i\_at\_k}\; PB040_i}\times 100$
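Continuing the illustrative Scala sketch from above, the rate for a subgroup k is simply a weighted share, mirroring the formula:

// Weighted share of persons in group k below the threshold, as a percentage.
def arptRate(groupK: Seq[Person], threshold: Double): Double = {
  val poorWeight = groupK.filter(_.eqInc < threshold).map(_.weight).sum
  val totalWeight = groupK.map(_.weight).sum
  100 * poorWeight / totalWeight
}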
With regard to the calculation of the at-risk-of-poverty rate, the following methodological issues should be taken into consideration:
• Unless specified, at-risk-of-poverty rates are assumed to be ‘after social transfers’ (i.e. they include social benefits such as pensions and unemployment benefits).
• Income poverty risk at a given point in time may not necessarily imply low living standards in the short term, for example, if the persons at risk have access to savings, to credit, to private insurance, tax credits, to financial assistance from friends and relatives etc. In particular, the cumulative impact of extended periods at risk is to be further assessed.
• Measuring incomes at the level of private households may have certain implications. The exclusion of collective households might lead to an underrepresentation of certain groups (e.g. the elderly, persons with disabilities).
• An approach based on relative income poverty gives a proxy for the risk of being affected by poverty within each country, but makes it more difficult to compare the situation between countries than would be the case with a common threshold.
• Income-based indicators are presented for individuals by reference to their household distribution: no information is available about the actual distribution of income between household members. The attribution of the household income to each of its members may impede a detailed analysis of the sex dimension.
3. At-risk-of-poverty rate before social transfers (pensions included in social transfers):
At-risk-of-poverty rate (ARPT23) broken down by each combination of dimensions (k) $(ARPT_{at\_k})$ is calculated as the percentage of people (or thousands of people) in each k who are at-risk-of-poverty (based on the equivalised disposable income before all social transfers, including pensions (EQ_INC23)) over the total population in that k.
The weight variable used is the Adjusted Cross-Sectional Weight (RB050a).
$ARPT23_{at\_k}=\frac{\sum\limits_{i=j\_at\_k}\;RB050a_i}{\sum\limits_{i\_at\_k}\; RB050a_i}\times 100$
where j denotes the population, or subset of the population, who is at risk of poverty (based on the equivalised disposable income before all social transfers, including pensions). At-risk-of-poverty thresholds (ARPTXX) can be any of the following: ARPT40, ARPT50, ARPT60, ARPT70, ARPTM40, ARPTM50, ARPTM60.
With regard to the calculation of the at-risk-of-poverty rate before social transfers (pensions included in social transfers), the following methodological issues should be taken into consideration:
• The ‘at-risk-of-poverty threshold’ is the same as the one used to calculate the at-risk-of-poverty rate after transfers.
• The indicator ‘poverty rate before social transfers’ should only be used in connection with the indicator ‘poverty rate (after social transfers)’. On its own, it does not have any explanatory value.
• Social transfers are defined as current transfers received during the income reference period which are intended to relieve households from the financial burden of a number of risks or needs, made through collectively organised schemes or outside such schemes by government units and Non-Profit Institutions Serving Households. In order to be included as a social benefit, the transfer must be (a) compulsory for the group in question and (b) based on a principle of social solidarity.
• Social benefits do not include tax rebates, benefits in kind or benefits paid from schemes into which the recipient has made voluntary payments only, independently of his/her employer or government.
4. At-risk-of-poverty rate before social transfers (pensions excluded from social transfers):
At-risk-of-poverty rate (ARPT22) broken down by each combination of dimensions (k) $(ARPT_{at\_k})$ is calculated as the percentage of people (or thousands of people) in each k who are at-risk-of-poverty (based on the equivalised disposable income before all social transfers - excluding pensions - (EQ_INC22)) over the total population in that k.
The weight variable used is the Adjusted Cross-Sectional Weight (RB050a).
$ARPT22_{at\_k}=\frac{\sum\limits_{i=j\_at\_k}\;RB050a_{i}}{\sum\limits_{i\_at\_k}\;RB050a_{i}}\times 100$
where j denotes the population, or subset of the population, who is at risk of poverty (based on the equivalised disposable income before all social transfers, excluding pensions). At-risk-of-poverty thresholds (ARPTXX) can be any of the following: ARPT40, ARPT50, ARPT60, ARPT70, ARPTM40, ARPTM50, ARPTM60.
With regard to the calculation of the at-risk-of-poverty rate before social transfers (pensions excluded from social transfers), the following methodological issues should be taken into consideration:
• The ‘at-risk-of-poverty threshold’ is the same as the one used to calculate the at-risk-of-poverty rate after transfers.
• The indicator ‘poverty rate before social transfers’ should only be used in connection with the indicator ‘poverty rate (after social transfers)’. On its own, it does not have any explanatory value.
• Social transfers are defined as current transfers received during the income reference period which are intended to relieve households from the financial burden of a number of risks or needs, made through collectively organised schemes or outside such schemes by government units and Non-Profit Institutions Serving Households. In order to be included as a social benefit, the transfer must be (a) compulsory for the group in question and (b) based on a principle of social solidarity.
• Social benefits do not include tax rebates, benefits in kind or benefits paid from schemes into which the recipient has made voluntary payments only, independently of his/her employer or government.
5. Relative at-risk-of-poverty gap:
Relative at-risk-of-poverty gap rate (RAROPG) broken down by each combination of dimensions (k) $(RAROPG_{at\_k})$ is calculated as the difference between the at-risk-of-poverty threshold (ARPTXX) and the median equivalised disposable income of people below the at-risk-of-poverty threshold $(EQ\_INC20_{median/at\_poor\_k})$, expressed as a percentage of the at-risk-of-poverty threshold, in each k.
$RAROPG_{at\_k}=\frac{ARPTXX-EQ\_INC20_{median/at\_poor\_k}}{ARPTXX}\times 100$
At-risk-of-poverty thresholds (ARPTXX) can be any of the following: ARPT40, ARPT50, ARPT60, ARPT70, ARPTM40, ARPTM50, ARPTM60.
With regard to the calculation of the relative at-risk-of-poverty gap rate, the following methodological issues should be taken into consideration:
• The poverty gap represents the poverty gap of the ‘median person’ who is at risk of poverty. However, it does not convey any information on the distribution of the poverty gap among the population at-risk-of-poverty.
• The median poverty gap is preferred to the total poverty gap or mean poverty gap, insofar as the latter are more sensitive to extremely low and negative incomes (which may be due to income measurement errors).
• The poverty gap is expressed as a percentage of the at-risk of poverty threshold in order to make data comparable across countries.
6. Persistent at-risk-of-poverty rate:
Let L be the subset of the total population consisting of persons that have been in the panel for the last four years and for whom the Equivalised disposable income (EQ_INC) is not missing for any of the years.
Let $P_{T}\,(P_{T}\subset L)$ be the subset of individuals who are at-risk-of poverty in the current year (T), i.e. $\forall i\in L$ for which $EQ\_INC20^{T}_i\lt ARPTXX^{T}$.
Let $P_{T-1}(P_{T-1}\subset L)$ be the subset of individuals who are at-risk-of poverty in T-1, i.e. $\forall i\in L$ for which $EQ\_INC20^{T-1}_i\lt ARPTXX^{T-1}$.
Let $P_{T-2}(P_{T-2}\subset L)$ be the subset of individuals who are at-risk-of poverty in T-2, i.e. $\forall i\in L$ for which $EQ\_INC20^{T-2}_i\lt ARPTXX^{T-2}$.
Let $P_{T-3}(P_{T-3}\subset L)$ be the subset of individuals who are at-risk-of poverty in T-3, i.e. $\forall i\in L$ for which $EQ\_INC20^{T-3}_i\lt ARPTXX^{T-3}$.
Note: i denotes each individual in the dataset. All information for each individual i and all years in the panel has been stored in one line.
Let L* ($L^* \subset L)$ be the subset who are ‘persistent at-risk-of-poverty’, i.e. those being at-risk- of-poverty in the current year (T) and at least 2 out of the preceding 3 years:
$L^*=(P_T\bigcap P_{T-1}\bigcap P_{T-2}\bigcap P_{T-3})\bigcup (P_T\bigcap P_{T-1}\bigcap P_{T-2})\bigcup(P_T\bigcap P_{T-1}\bigcap P_{T-3})\bigcup(P_T\bigcap P_{T-2}\bigcap P_{T-3})$
Persistent at-risk-of-poverty rate (L_ARPT) broken down by each combination of dimensions (k) $(L\_ARPT_{at\_k})$ is calculated as the percentage of people in each k whose equivalised disposable income was below the ‘at-risk-of-poverty threshold’ (taken from cross-sectional calculations – external threshold) for the current year and at least 2 out of the preceding 3 years over the total population in k.
The weight variable used is an estimation of the Longitudinal weight estimate – Four-year duration (RB064e).
$L\_ARPT_{at\_k}=\frac{\sum\limits_{\forall{i}\in L^*\_at\_k}RB064e_i }{\sum\limits_{\forall{i}\in L\_at\_k}RB064e_i }\times 100$
At-risk-of-poverty thresholds ARPTXX can be any of the following: ARPT40, ARPT50, ARPT60, ARPT70, ARPTM40, ARPTM50, ARPTM60.
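In the same illustrative Scala sketch, the membership condition for L* (at risk in the current year and in at least 2 of the preceding 3 years) can be written as:

// poorFlags holds the at-risk indicator for years T-3, T-2, T-1, T (oldest first).
def persistentlyPoor(poorFlags: Vector[Boolean]): Boolean =
  poorFlags.last && poorFlags.init.count(identity) >= 2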
With regard to the calculation of the persistent at-risk-of-poverty rate, the following methodological issues should be taken into consideration:
• In any longitudinal panel survey there can be problems of attrition, with respondents ceasing to participate for a variety of reasons. For the persistent at-risk-of-poverty rate, an important question is whether there are higher drop-out rates for low-income households than for other households.
• The indicator specifies ‘at least two out of three previous years’ to allow for fluctuations around the poverty line.
7. At-risk-of-poverty rate anchored at a fixed point in time:
At-risk-of-poverty rate (ARPT) anchored at a fixed time (t) broken down by each combination of dimensions (k), $(ARPT_{t_{k}})$ is calculated as the percentage of people in each k who are at-risk-of-poverty based on the at-risk-of-poverty threshold of t adjusted for inflation $(ARPT60_{(t)(T)})$ over the total population in k. More specifically, people at-risk-of-poverty anchored at t are those with an equivalised disposable income (for a given year T) below the at-risk-of-poverty threshold calculated in t and adjusted for inflation, i.e. $EQ\_INC20\lt ARPT60_{(t)(T)}$, where $ARPT60_{(t)(T)}$ is the at-risk-of-poverty threshold adjusted for inflation from t to T. Adjustment is based on the annual Harmonised Indices of Consumer Prices (HICPs).
$ARPT60_{(t)(T)}=ARPT60_{(t)}\times \frac{idx_t}{100}$
where ${idx_t}/{100}$ is the official inflation rate between t and T and $ARPT_{t}$ denotes the at-risk of poverty threshold in t. The indicator is calculated for the following two base years: 2005 and 2008, i.e. t takes the values 2005 and 2008.
The weight variable used is the Adjusted Cross-Sectional Weight(RB050a).
$ARPT_{(t)(T)}=\frac{\sum\limits_{i=j\_at\_k}\;RB050a_i}{\sum\limits_{i\_at\_k}\;RB050a_{i}}\times 100$
where j denotes the population, or subset of the population, who is at risk of poverty based on the at-risk-of-poverty thresholds of t adjusted for inflation.
With regard to the calculation of the at-risk-of-poverty rate anchored at a fixed moment in time (t), the following methodological issues should be taken into consideration:
• The poverty threshold of the base year (t=2005 or 2008) is adjusted for inflation. This operation results in the ‘real’ value of the threshold base year, i.e. adjusted for price increases in subsequent years. The remaining difference between the ‘inflation adjusted’ threshold of the base year and the threshold of the current year reflects evolutions in living standards.
• The base or reference year (t) is meant to change in regular intervals.
• The inflation rate to be applied should correspond to the survey years both for the base year (t) and T.
8. At-risk-of-poverty rate after deducting housing costs:
At-risk-of-poverty rate, with the housing costs (HH070) being deducted, (ARPThc), broken down by each combination of dimensions (k), $(ARPThc_{at\_k})$ is calculated as the percentage of people in each k who are at-risk-of-poverty (EQ_INC20hc<ARPT60) after deducting housing costs over the total population in k.
The weight variable used is the Adjusted Cross-Sectional Weight (RB050a).
$ARPThc_{at\_k}=\frac{\sum\limits_{i=j\_at\_k}\;RB050a_i}{\sum\limits_{i\_at\_k}\;RB050a_{i}}\times 100$
where $j$ denotes the population, or subset of the population, who is at risk of poverty after deducting housing costs.
With regard to the calculation of the at-risk-of-poverty rate after deducting housing costs, the following methodological issues should be taken into consideration:
• As the housing costs faced by households do not always reflect the true value of the housing they enjoy, housing costs should be deducted in calculating disposable income. However, a disadvantage of using an after-housing-costs measure of disposable income is that it has the effect of understating the relative standard of living of those individuals who benefit from a better quality of housing by paying more for better accommodation. In the case of making comparisons across age groups, it can be argued that, since a large proportion of pensioners own their homes, they typically face lower housing costs than those of working age.
• The strength of the case for measuring the risk of poverty after housing costs depends on how far housing costs can be regarded as a fixed and inescapable charge on income and how far, on the contrary, they represent payment for a consumer good which, like any other, produces a stream of satisfaction that varies between households according to how much the individuals concerned value having an attractive or spacious place in which to live.
9. Distribution of population by number of years spent in poverty within a four-year period:
Let L be the subset of the total population consisting of persons that have been in the panel for the last four years and for whom (EQ_INC) is not missing for any of the years. The distribution of population at-risk-of poverty within a four-year period broken down by each combination of dimensions (k), $DISP_{T,D_{at\_k}}$ is calculated as the percentage of people who are at-risk-of poverty (calculated for different cut-off points) within a four-year period in each k over the total population in that k.
The weight variable used is an estimation of the Longitudinal weight estimate - four-year duration (RB064e). The variable is estimated for the years for which the real longitudinal weight (RB064) is not provided.
$DISP_{T,D_{at\_k}}=\frac{\sum\limits_{\forall{i}\in L,SumT_i=D\_at\_k}RB064e_i }{\sum\limits_{\forall{i}\in L\_at\_k}RB064e_i }\times 100$
where $D \in \{0,1,2,3,4\}$ denotes the number of years spent in poverty, i.e. it counts how many times within the four-year period each individual is at-risk-of-poverty (calculated for the different cut-off points).
With regard to the calculation of the distribution of population by number of years spent in poverty within a four-year period, the following methodological issues should be taken into consideration:
• Income poverty risk at a given point in time may not necessarily imply low living standards in the short term, for example if the persons at-risk have access to savings, to credit, to private insurance, tax credits, to financial assistance from friends and relatives etc. In particular, the cumulative impact of extended periods at risk is to be further assessed.
10. At-risk-of-poverty rate for children by citizenship of their parents (population aged 0 to 17 years):
At-risk-of-poverty rate (ARPT) of population aged 0 to 17 years broken down by citizenship of their parents $(ARPT_{agex_{at\_citizen}})$ is calculated as the percentage of people (aged 0 to 17 years) in each citizenship group of their parents (C_SHIP) who are at-risk-of-poverty (EQ_INC20<ARPT60) over the total population in that breakdown (i.e., citizenship group of their parents).
The weight variable used is the Adjusted Cross Sectional Weight (RB050a).
$ARPT_{agex\_at\_citizen}=\frac{\sum\limits_{i:\;EQ\_INC20_i\lt ARPT60,\;at\_citizen}RB050a_i}{\sum\limits_{i\;at\_citizen}RB050a_i}\times 100$
Where agex takes the values from 0 to 17 years.
11. At-risk-of-poverty rate for children by country of birth of their parents (population aged 0 to 17 years):
At-risk-of-poverty rate (ARPT) of population aged 0 to 17 years broken down by country of birth of their parents $(ARPT_{agex_{at\_c\_birth}})$ is calculated as the percentage of people (aged 0 to 17 years) in each country-of-birth group of their parents who are at-risk-of-poverty (EQ_INC20<ARPT60) over the total population in that breakdown (i.e., country of birth of their parents).
The weight variable used is the Adjusted Cross Sectional Weight (RB050a).
$ARPT_{agex\_at\_c\_birth}=\frac{\sum\limits_{i:\;EQ\_INC20_i\lt ARPT60,\;at\_c\_birth}RB050a_i}{\sum\limits_{i\;at\_c\_birth}RB050a_i}\times 100$
Where agex takes the values from 0 to 17 years.
Moreover, there are some methodological limitations that pertain to the following dimensions accompanying the indicators: Age, Activity status, Citizenship, Country of birth, Degree of urbanisation, Educational level, Highest educational level of parents, Household type, NUTS region, Tenure status, Work intensity of the household.
### Main concepts used
For the production of the indicators relevant to the subject area of monetary poverty, the variables listed below are also involved in computations:
Poverty status (ARPTXXi), At-risk-of-poverty threshold (ARPTXX), Equivalised disposable Income (EQ_INC) (see article EU statistics on income and living conditions (EU-SILC) methodology – concepts and contents)
### SAS program files
SAS programming routines developed for the computation of the EU-SILC monetary poverty datasets along with the different dimensions, are listed below.
Dataset SAS program file
At-risk-of-poverty thresholds (ilc_li01) LI01.sas
At-risk-of-poverty rate by poverty threshold, age and sex (ilc_li02) LI02.sas
At-risk-of-poverty rate by poverty threshold and household type (ilc_li03) LI03.sas
At-risk-of-poverty rate by poverty threshold and most frequent activity in the previous year (ilc_li04) LI04.sas
At-risk-of-poverty rate by poverty threshold and work intensity of the household (population aged 0 to 59 years) (ilc_li06) LI06.sas
At-risk-of-poverty rate by poverty threshold and education level (ilc_li07) LI07.sas
At-risk-of-poverty rate by poverty threshold and tenure status (ilc_li08) LI08.sas
At-risk-of-poverty rate before social transfers (pensions included in social transfers) by poverty threshold, age and sex (ilc_li09) LI09.sas
At-risk-of-poverty rate before social transfers (pensions included in social transfers) by household type (ilc_li09b) LI09b.sas
At-risk-of-poverty rate before social transfers (pensions excluded from social transfers) by poverty threshold, age and sex (ilc_li10) LI10.sas
At-risk-of-poverty rate before social transfers (pensions excluded from social transfers) by household type (ilc_li10b) LI10b.sas
Relative at-risk-of-poverty gap by poverty threshold (ilc_li11) LI11.sas
Persistent at-risk-of-poverty rate by sex and age (ilc_li21) LI21.sas
At-risk-of-poverty rate anchored at a fixed moment in time (2005) by age and sex (ilc_li22) LI22.sas
At-risk-of-poverty rate anchored at a fixed moment in time (2008) by age and sex (ilc_li22b) LI22b.sas
Persistent at-risk-of-poverty rate by household type (ilc_li23) L_li23.sas
Persistent at-risk-of poverty rate by educational level (ilc_li24) L_li24.sas
At-risk-of poverty rate by broad group of citizenship (population aged 18 and over) (ilc_li31) LI31.sas
At-risk-of poverty rate by broad group of country of birth (population aged 18 and over) (ilc_li32) LI32.sas
At-risk-of poverty rate for children by citizenship of their parents (population aged 0 to 17 years) (ilc_li33) LI33.sas
At-risk-of poverty rate for children by country of birth of their parents (population aged 0 to 17 years) (ilc_li34) LI34.sas
At-risk-of poverty rate by NUTS region (ilc_li41) LI41.sas
At-risk-of poverty rate by degree of urbanisation (ilc_li43) LI43.sas
At-risk-of poverty rate after deducting housing costs by age and sex (ilc_li45) LI45.sas
At-risk-of poverty rate after deducting housing costs by degree of urbanisation (ilc_li48) LI48.sas
Distribution of population by number of years spent in poverty within a four-year period (ilc_li51) LI51.sas
At-risk-of-poverty rate for children by highest education level of their parents (population aged 0 to 17 years) (ilc_li60) LI60.sas
|
## $\Lambda$-adic Kolyvagin systems
##### Authors
In this paper, we study deformations, along the cyclotomic Iwasawa algebra, of the Kolyvagin systems that are known to exist for residual Galois representations in a wide variety of cases by the work of B. Howard, B. Mazur, and K. Rubin. We prove, under certain technical hypotheses, that a cyclotomic deformation of a Kolyvagin system exists. We also briefly discuss how our techniques could be extended to show that one could deform Kolyvagin systems along other deformations as well. We discuss several applications of this result, particularly the relation of these $\Lambda$-adic Kolyvagin systems to p-adic L-functions (in view of the conjectures of Perrin-Riou on p-adic L-functions) and applications to main conjectures, as well as applications to the study of the Iwasawa theory of Rubin-Stark units.
|
Hw12 Problem 14
Problem: (*) Show that the characteristic of an integral domain must be either $0$ or prime.
Solution: Using Theorem 19.15 we can look solely at $mn * 1 = 0$ for $m > 1$ and $n > 1$, where $mn$ is the characteristic of the integral domain. Now,
(1)
$$mn * 1 = (m * 1)(n * 1)$$
(2)
$$= 0$$
Since we are operating within an integral domain, either $m * 1 = 0$ or $n * 1 = 0$. But then the characteristic would be at most $m$ or at most $n$, both strictly smaller than $mn$, which contradicts the assumption that $mn$ is the characteristic (the least positive integer $k$ with $k * 1 = 0$). So the characteristic cannot be a composite integer. Therefore the characteristic of an integral domain is either $0$ or prime.
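As a sanity check of why composite characteristic fails, consider $\mathbb{Z}_6$: there $2 \cdot 3 = 0$ with both factors nonzero, so $\mathbb{Z}_6$ has zero divisors and is not an integral domain.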
|
# The Appeal of the Lift Web Framework
## The extreme end of weird (as far as web frameworks go)
Lift is one of the better-known web frameworks for Scala. Version 2.5 has just been released, so it seems like a good time to show features of Lift that I particularly like.
Lift is different from other web frameworks (in fact, I labeled it at the extreme end of weird in the first presentation I gave about it), but people who get into Lift seem to love the approach it takes. It’s productive and enjoyable, which goes well with Scala.
I’ll keep this post short. Just two things:
## Transforms
You might be familiar with an MVC approach to the Web, where you have code that forwards to a view, and in that view you maybe use a little bit of mark-up to loop or display values. That’s not how it goes in Lift.
Instead, you start with the view first, and use HTML5 attributes to mark the parts of the view that need transforming. Here’s an example:
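A minimal stand-in template (the snippet name Poem and the placeholder text are assumptions):

<p data-lift="Poem">
  <span class="title">Placeholder Title</span> was written by
  <span class="author">A. N. Author</span>.
</p>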
That’s valid HTML5. You can view it in your browser, or edit it in Adobe Fireworks, or whatever tool you want. The only part of it that looks a little strange is the data-lift attribute. What that’s doing is naming a Lift snippet, and a snippet is just a class. It might look like this:
What’s going on here? We’re using a nifty DSL in Lift to transform the <p> tag so that it contains the content we want. We’re saying…
To render whatever template we’re given:
• find those things with the “author” class (if any), use the * selector to focus on the content, and replace it with the text “Philip Larkin”; and
• select the content of the title class, and replace it with the title of his most famous poem.
Things like ".author" are CSS Selectors, which you’re probably familiar with from .css files or jQuery. The right hand side of the #> method is the replacement function. (If you don’t like the #> symbol, use replaceWith instead.)
Under the hood—which you can open if you have something more complex to do—the transform takes a NodeSeq (a Scala representation of XML, a HTML <p> in our case) and returns a replacement NodeSeq. In other words, it’s a NodeSeq => NodeSeq function.
The final HTML sent to the browser would be:
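With the stand-in template and snippet above, that would be:

<p>
  <span class="title">This Be The Verse</span> was written by
  <span class="author">Philip Larkin</span>.
</p>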
Two things to note from this. First, we have an encapsulated piece of functionality that doesn’t care what page it is put on.
Second, you can give the HTML to a designer, and they can work on it how they like, with whatever data they want to put on the page. When they give you the page back (or push it to your repository), there’s nothing to change. Did the designer change the content and put it in a table? Doesn’t matter, because we’re matching on the CSS class name.
Sure, you need to have some agreement on structure and CSS classes, and there are a few tricks to learn around repeated content, but this is a long way from having to rework HTML after each change.
That’s the first thing I like about Lift: it’s an addictive way of working with HTML.
## REST
Lift’s RestHelper is a powerful and concise way to produce RESTful web services. At its heart, it transforms a request into a response, making use of Scala’s pattern matching, and other features. An example will help explain.
Sticking with the poetry theme, we can build a service that takes URLs like /poems/by/Larkin and returns JSON:
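For example, fetching /poems/by/Larkin might return something like this (the titles are assumptions):

{ "titles": ["This Be The Verse", "The Whitsun Weddings"] }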
The code to implement that in Lift might be this:
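A sketch consistent with the description that follows (the imports and poem titles are assumptions; returning the Option produced by poems.get is what yields the 404 behaviour mentioned below):

import net.liftweb.http.rest.RestHelper
import net.liftweb.json.JValue
import net.liftweb.json.JsonDSL._

object PoemsService extends RestHelper {

  // Our embarrassingly small "database" of poets and titles.
  val poems = Map("Larkin" -> List("This Be The Verse", "The Whitsun Weddings"))

  // Turn a list of titles into the JSON structure we want.
  def asJSON(titles: List[String]): JValue = ("titles" -> titles)

  serve {
    // GET /poems/by/<author>; an unknown author yields Empty, hence a 404.
    case "poems" :: "by" :: author :: Nil Get _ =>
      for (titles <- poems.get(author)) yield asJSON(titles)
  }
}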
We have an embarrassingly small repertoire of poets associated to titles of their work, stored in a regular Scala Map called poems.
You need to know that JValue is the Lift way of representing JSON data, and we use this to create a function to turn titles into the JSON structure we want (in the asJSON function). The ("titles" -> titles) code (a regular Scala tuple) is, in this instance, triggering an implicit conversion in the JSON DSL to give us a JValue. If you don’t like that, you can construct JSON from more basic building blocks.
Finally, the serve block defines the pattern we want to match on. It has to be a GET request starting “poems,” followed with “by” and then some String value we’re calling author.
The right-hand side of the => is what we produce if the pattern matches. That’s going to be the list of titles from our “database,” transformed by asJSON.
Lift is figuring out that as we're producing a JValue, it should send back the right kind of response, with the application/json mime type set. We could have been more explicit and constructed a JsonResponse instance ourselves, or even something else entirely, like an OutputStreamResponse or a RedirectResponse.
The point is that we're matching on a request, and returning some kind of LiftResponse. That's what I like about this: the model is simple. At the same time, there are implicit conversions (a.k.a. "magic," "evil," depending on your point of view) that you can make use of to get your job done; or you can be more explicit in what kind of response you send back.
There are also sensible default behaviors. If you ask our database for /poems/by/Tennyson, you’ll get a 404.
|
Electromagnetic induction
Electromagnetic induction is where a current is produced in a conductor through a changing magnetic flux.
Magnetic flux
When a coil is introduced near a magnet (usually a bar magnet), the magnetic lines of force passing through the coil are called the magnetic flux. Magnetic flux is represented by the symbol $\Phi$ and is given by $\Phi = BA\cos(a)$, where $B$ is the magnetic field strength, $A$ is the area of the coil, and $a$ is the angle between the field and the normal to the coil. The resulting unit is $Tm^2$, where T is the unit for magnetic field and $m^2$ is the unit for area.
The changing magnetic flux generates an electromotive force (EMF). This EMF drives the free electrons in the conductor, which in turn creates a current.
Michael Faraday found that an electromotive force is generated when there is a change in magnetic flux in a conductor.
His law states that:
$\mathcal{E} = {-{d\Phi} \over dt}$
where,
$\mathcal{E}$ is the electromotive force, measured in volts;
${d\Phi}$ is the change in magnetic flux, measured in webers;
$dt$ is the change in time, measured in seconds.
In the case of a solenoid:
$\mathcal{E} = {-N{d\Phi} \over dt}$
where,
N is the number of loops in the solenoid.
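As an illustration (the numbers are invented): if a solenoid with $N = 200$ loops has its flux increase from $0.01\,\text{Wb}$ to $0.04\,\text{Wb}$ in $0.5\,\text{s}$, then $\mathcal{E} = -N\frac{d\Phi}{dt} = -200 \times \frac{0.03}{0.5} = -12\,\text{V}$.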
Lenz's Law
The negative sign in both equations above is a result of Lenz's law, named after Heinrich Lenz. His law states that the electromotive force (EMF) produces a current whose magnetic field opposes the change in magnetic flux that created it.
|
$x = 3y+4$
This is a test: $$a$$, $$b$$, $$c=3x+5$$. End of the test.
shinyUI(
  navbarPage("",
    tabPanel("ui.R", pre(includeText("ui.R"))),
    tabPanel("server.R", pre(includeText("server.R")))
  )
)
shinyServer(function(input, output){
})
---
title: "Test Latex"
output:
html_document:
mathjax: "http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"
runtime: shiny
---
$$x = 3y+4$$
This is a test: $a$, $b$, $c=3x+5$. End of the test.
|
03-01-2012 #8
Jonathan W
Join Date: Jan 2012
Posts: 318
Quote:
Originally Posted by Mr. Hui
$-\frac{1}{2}$ + x + $\frac{2}{3}$ = $-\frac{5}{6}$
First combine the 2 fractions on the left-hand side.
x + $\frac{1}{6}$ = $-\frac{5}{6}$
Then subtract $\frac{1}{6}$ from both sides of the equation.
x = -1
Quote:
Originally Posted by MAS1
(1/6) + (1/x) = -5/6
First get common denominators for the left hand side.
x/(6x) + 6/(6x) = -5/6
(x + 6)/(6x) = -5/6
Then cross multiply.
-30x = 6x + 36
-36 = 36x
-1 = x
Thank you Mr. Hui and MAS1!
|
41 Answered Questions for the topic Basic
05/06/22
#### Find the general solution of the following differential equations.
Find the general solution of the following differential equations.
1. dy/dx = -x + 1/2(y+3)
2. dy/dx = y²(1 + e^x)
03/30/22
08/03/19
#### A new QBASIC IDE, (21st century one)?
I'm looking for a modern IDE/Compiler that supports QBASIC programs and has the same or almost similar syntax as of QBASIC. I want to stay as close to Qbasic as possible in terms of syntax, style,... more
Basic Visual Basic
07/29/19
#### How do I run a .bas file?
I want to start coding in BASIC. However I do not know how to run a .bas file. If someone could help me it would be deeply appreciated.
07/29/19
#### Change attributes of all files and folders in a given directory in VB.net?
In Visual Basic (Visual Studio), how do I change the attributes of all files and folders in a directory that the user chooses using FolderBrowserDialog? Here's my code to take input: If... more
07/28/19
#### How to exit a gw basic program at any time?
I am creating a game and i want that if the user hit F10 or any other function key then they the program should end.
Basic Visual Basic
07/27/19
#### Argument is not Optional (open office basic macro)?
Can you help me work out what's wrong in this function? Function TrueSolarTime(eqtime As Double, longitude As Double, _ timeZone As Double, hours As Double, minutes As Double) As Double ... more
07/27/19
#### How to Convince Programming Team to Let Go of Old Ways?
This is more of a business-oriented programming question that I can't seem to figure out how to resolve. I work with a team of programmers who have been working with BASIC for over 20 years. I was... more
Basic Visual Basic
07/27/19
#### How do I display an empty string when a variable is not initialized?
I'm trying to display an empty string, which is pretty straightforward; is there a way to display an empty number for an integer? I have the example below. Sub() Dim s As String Dim... more
07/27/19
#### How can I change the text of a label while running in visual basic?
Right now my label is having a text "hello", how can I change it to "world" by a button click while running in Visual Basic.
07/27/19
#### grep regex to ignore comment at end of line?
I'm trying to grep through a lot of old PowerBASIC source files in search of a variable, but I'm having trouble getting grep to avoid matching references to the variable in the end-of-line... more
07/27/19
#### How to create this gw basic program?
Ok, I want to know how to make a sentence appear word by word in GW BASIC. For example, if the sentence is I Am Boy, then how to make it appear so "I" comes first, then "A", then "m", then B... more
07/27/19
#### Generate random numeric & alphabetic?
I'm making a random hexadecimal generator is it possible in visual basic to make a code that randomly generates number and letters together? How would you do it? I'm really lost. I'd like to... more
07/27/19
#### Choose For Random Strings In Commodore 64 BASIC?
I have these variable declarations in my program: X="MAGENTA" Y="CYAN" Z="TAN" A="KHAKI" Now what I want is to randomly choose one of these and PRINT it. But how to do this?
07/21/19
#### Creating file with QB64?
With the following DIM a AS INTEGER a = 10 OPEN "myFile" FOR BINARY AS #1 PUT #1, 1, a CLOSE #1 I get a file (myFile) with two bytes (using QB64). The first byte is indeed 0A, but... more
Basic Visual Basic
07/20/19
#### BASIC programming language getting variable?
On MSX BASIC 2.1, I get this error when running my BASIC code: 10 INPUT "Your name", U$ RUN Syntax Error OK Why is this syntax incorrect?
07/19/19
#### Basic- kernel written in basic?
I have read somewhere that higher level languages are not better to create kernels, So my question is, Can we make Kernel in Basic, the simplest language?
07/19/19
#### Custom hand cursor in Visual Basic?
I am working on a my app and I managed to change the normal mouse cursor very easy using this code: Dim cur As Icon cur = (My.Resources.NewCursor) Me.Cursor = New Cursor(cur.Handle)` Now... more
07/19/19
#### Clearing number in a Label on Text Box Change?
Just have a simple question. I have a program and in it I have a label that displays results once you calculate them, and I want the label to clear once you change the text in the input boxes.... more
07/19/19
#### Auto Find and display objects when input keywords in VB?
I'm a newbie in VB form programming. I got a task today, and it seems so hard for me. Basically, I have 2 forms named Form1 and Form2. Form1 is used for managing info of students, such as: student's... more
Basic Visual Basic
07/19/19
#### BBC basic variables?
Background info for the problem: I am writing a text adventure game where the player has multiple paths to choose at each intersection/ problem. Problem: I am attempting to use a variable from... more
07/19/19
#### CreateTextFile Method in BASIC fails to create file at specified path?
I am using BASIC for the first time to automate a LeCroy Oscilloscope. Following examples provided by them I am attempting to create a program which uses the oscilloscope features and prints... more
07/19/19
#### Comma at the end of True Basic if statement?
I've been working on translating a simulation written in True Basic to C, and eventually into CUDA. Considering I have never worked with True Basic, let alone basic, everything has been going... more
07/12/19
#### Deleting Two Columns Simultaneously?
The following code deletes Column J only: If Application.WorksheetFunction.Sum(Range("J:J").SpecialCells(xlCellTypeVisible)) _ =... more
07/11/19
#### How to declare a constant in BASIC?
Unfortunately, I’m having trouble figuring out how to do some things, because Google is clogged with tutorials of “programming basics” and Visual Basic. So I have to ask: in old-skool BASIC, I... more
|
# XI Workshop on Particle Correlations and Femtoscopy
3-7 November 2015
Centre for Innovation and Technology Transfer Management, Warsaw University of Technology
Europe/Zurich timezone
organized by
Heavy-Ion Reactions Group,
Faculty of Physics,
Warsaw University of Technology
Scientific information and topics
This event follows the tradition of previous editions by bringing together experts and other interested researchers in the field of particle-particle correlations and "femtoscopy" in nuclear and particle physics. The topics covered by the WPCF workshop concern dynamical and thermodynamical properties of emitting sources produced in heavy-ion collisions, including links to phase transitions and equation-of-state properties. Moreover, two- and multi-particle correlation measurements provide tools to reveal the existence of new resonances (both at high and at low energies) and of phenomena such as nuclear clusters and molecules, as well as spectroscopic properties of unbound states.
The scope of the meeting will include correlation and femtoscopy research at RHIC/LHC energies and at low and intermediate energies as well.
Topics:
• Femtoscopy at RHIC and LHC: links to QGP physics
• Femtoscopy in A+A, p+p, p+A and e+e- collisions at relativistic energies
• Femtoscopy at intermediate energies: links to the EoS of asymmetric nuclear matter
• Charge fluctuations and correlations
• Fluctuations in initial conditions
• Collective flow and correlations
• Resonance decays at RHIC and LHC
• Resonance decay spectroscopy in low and intermediate energy reactions
• Correlations, cluster states, nuclear molecules and production of boson condensates in nuclei
• New methods and facilities
These topics will be covered by invited and contributed talks, selected from the received abstracts. Participation and abstract submission by students and postdocs are strongly encouraged. More details on the program will be provided in the second circular and on the conference web site, http://indico.cern.ch/e/wpcf2015.
WPCF 2015 is associated with "NICA-days 2015", which will be held at the same place and time. Joint sessions are also envisaged, anticipating participants' interest in the topics of both conferences. See the web page of NICA-days 2015: http://indico.cern.ch/e/nica2015.
Disclaimer:
The background picture in the poster is adapted from https://commons.wikimedia.org/wiki/File:Palace_of_Culture_and_Science_nightshot.JPG.
|
# Support function
1. Oct 6, 2015
### squenshl
1. The problem statement, all variables and given/known data
Let $S = \left\{(x_1,x_2) \in \mathbb{R}^2: 0 \leq x_1 \leq 1 \; \text{and} \; 0 \leq x_2 \leq 1\right\}$. Find the support function $\mu_S$ for this set.
2. Relevant equations
We define the support function $\mu_s: \mathbb{R}^n \rightarrow \mathbb{R} \cup \left\{-\infty\right\}$ as $\mu_s(p) = \inf\left\{p \cdot x: x \in S\right\}$.
3. The attempt at a solution
I know this is a square with vertices at $(0,0)$, $(0,1)$, $(1,0)$ and $(1,1)$. I'll take a line that goes through $(0,1)$ and take a vector $p$ that is orthogonal to this. I get stuck after this in finding the support function.
2. Oct 6, 2015
### RUber
I think you need to find the maximum size of a vector in S, since the infimum of the dot product of p with an element x in S will be $-|p| \max_{x \in S}(|x|)$.
3. Oct 9, 2015
### squenshl
Thanks. Here we are basically trying to maximise $p_1x_1+p_2x_2$ subject to the constraints $0 \leq x_1 \leq 1$ and $0 \leq x_2 \leq 1.$ The support function is
$$\mu_S(p_1,p_2) = \begin{cases} p_1+p_2, & \text{if} \; p_1, p_2 \geq 0 \\ p_1, & \text{if} \; p_1 \geq 0, p_2 < 0 \\ p_2, & \text{if} \; p_1 < 0, p_2 \geq 0 \\ 0 & \text{otherwise} \end{cases}.$$
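A quick brute-force check of the two conventions (a Python sketch, not part of the original thread): with the infimum definition given in the relevant equations, the unit square yields $\mu_S(p) = \min(0,p_1)+\min(0,p_2)$, while the maximisation carried out in the previous post corresponds to the supremum convention, which reproduces the cases shown there.

```python
import numpy as np

# Sample the unit square S = [0,1]^2 on a grid
xs = np.linspace(0, 1, 101)
X = np.array(np.meshgrid(xs, xs)).reshape(2, -1).T

for p in [np.array([1.0, 2.0]), np.array([-1.0, 2.0]), np.array([-1.0, -2.0])]:
    dots = X @ p
    # infimum convention (thread's definition) vs its closed form,
    # then supremum convention vs the cases in the previous post
    print(p, dots.min(), min(0, p[0]) + min(0, p[1]),
          dots.max(), max(0, p[0]) + max(0, p[1]))
```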
|
Question 25 of 25: The blue ocean shift process raises the probabilities of:
A) a decline in sales
B) financial loss
C) success
D) competitive disadvantage
|
# String Sanitization Under Edit Distance: Improved and Generalized
@inproceedings{Mieno2021StringSU,
title={String Sanitization Under Edit Distance: Improved and Generalized},
author={Takuya Mieno and Solon P. Pissis and Leen Stougie and Michelle Sweering},
booktitle={CPM},
year={2021}
}
• Published in CPM 16 July 2020
• Computer Science
Let $W$ be a string of length $n$ over an alphabet $\Sigma$, $k$ be a positive integer, and $\mathcal{S}$ be a set of length-$k$ substrings of $W$. The ETFS problem asks us to construct a string $X_{\mathrm{ED}}$ such that: (i) no string of $\mathcal{S}$ occurs in $X_{\mathrm{ED}}$; (ii) the order of all other length-$k$ substrings over $\Sigma$ is the same in $W$ and in $X_{\mathrm{ED}}$; and (iii) $X_{\mathrm{ED}}$ has minimal edit distance to $W$. When $W$ represents an individual's data and…
1 Citation
### Matching Patterns with Variables Under Edit Distance
• Computer Science
SPIRE
• 2022
The problem of matching patterns with variables under edit distance is considered, but it is shown that the problem becomes intractable already for unary patterns, consisting of repeated occurrences of a single variable interleaved with terminals.
## References
SHOWING 1-10 OF 41 REFERENCES
### String Sanitization Under Edit Distance
• Computer Science
CPM
• 2020
An algorithm to solve ETFS in $\mathcal{O}(kn^2)$ time is presented, which improves on the state of the art by a factor of $|\Sigma|$; it is also shown that ETFS cannot be solved in $\mathcal{O}(n^{2-\delta})$ time, for any $\delta>0$, unless the strong exponential time hypothesis is false.
### All Highest Scoring Paths in Weighted Grid Graphs and Their Application to Finding All Approximate Repeats in Strings
This work builds a data structure that supports $O(mn \log m)$-time queries about the weight of any of the $O(m^2n)$ best paths from the vertices in column 0 of the graph to all other vertices, and presents a simple $O(n^2 \log n)$-time and $\Theta(n^2)$-space algorithm to find all approximate tandem repeats $xy$ within a string of size $n$.
### A Succinct Four Russians Speedup for Edit Distance Computation and One-against-many Banded Alignment
• Computer Science
CPM
• 2018
This work extends the classic result of Masek and Paterson, which computes the edit distance between two strings in $O(m^2/\log m)$ time, to remove the dependence on $\psi$ even when edits have arbitrary costs from a penalty matrix, and shows a new algorithm for the fundamental problem of one-against-many banded alignment.
### Quadratic Conditional Lower Bounds for String Problems and Dynamic Time Warping
• Computer Science
2015 IEEE 56th Annual Symposium on Foundations of Computer Science
• 2015
A framework for proving quadratic-time hardness of similarity measures is introduced, which encapsulates all the expressive power necessary to emulate a reduction from satisfiability, and conditional lower bounds based on the Strong Exponential Time Hypothesis also apply to string problems that are not necessarily similarity measures.
### Combinatorial Algorithms for String Sanitization
• Computer Science
ACM Trans. Knowl. Discov. Data
• 2021
A heuristic, MCSR-ALGO, is proposed, which replaces letters in the strings output by the algorithms with carefully selected letters, so that sensitive patterns are not reinstated, implausible patterns are not introduced, and occurrences of spurious patterns are prevented.
### Approximate matching of regular expressions.
• Computer Science
Bulletin of mathematical biology
• 1989
### On the sorting-complexity of suffix tree construction
• Computer Science
JACM
• 2000
A recursive technique for building suffix trees that yields optimal algorithms in different computational models that match the sorting lower bound and for an alphabet consisting of integers in a polynomial range the authors get the first known linear-time algorithm.
### A Linear-Time Algorithm for Seeds Computation
• Computer Science
SODA
• 2012
A linear-time algorithm computing a linear-size representation of all seeds of a word; the shortest seed and the number of seeds can easily be derived from this representation, which improves upon a previous O(n log n)-time algorithm.
|
## Section: New Results
### Mixture models
#### Mini-batch learning of exponential family finite mixture models
Participant : Florence Forbes.
Joint work with: Hien Nguyen, La Trobe University Melbourne Australia and Geoffrey J. McLachlan, University of Queensland, Brisbane, Australia.
Mini-batch algorithms have become increasingly popular due to the need to solve optimization problems based on large-scale data sets. Using an existing online expectation-maximization (EM) algorithm framework, we demonstrate [28] how mini-batch (MB) algorithms may be constructed, and propose a scheme for the stochastic stabilization of the constructed mini-batch algorithms. Theoretical results regarding the convergence of the mini-batch EM algorithms are presented. We then demonstrate how the mini-batch framework may be applied to conduct maximum likelihood (ML) estimation of mixtures of exponential family distributions, with emphasis on ML estimation for mixtures of normal distributions. Via a simulation study, we demonstrate that the mini-batch algorithm for mixtures of normal distributions can outperform the standard EM algorithm. Further evidence of the performance of the mini-batch framework is provided via an application to the famous MNIST data set.
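To illustrate the idea, here is a toy 1-D sketch of the online-EM-style recursion on which such mini-batch algorithms build (not the authors' implementation; all data, initializations and step-size choices below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two 1-D Gaussian clusters
data = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(3, 1, 5000)])
rng.shuffle(data)

K, batch = 2, 100
w, mu, var = np.full(K, 1 / K), np.array([-1.0, 1.0]), np.ones(K)
# Running sufficient statistics: s0 ~ E[z], s1 ~ E[z x], s2 ~ E[z x^2]
s0, s1, s2 = w.copy(), w * mu, w * (var + mu**2)

for t, start in enumerate(range(0, len(data), batch), start=1):
    x = data[start:start + batch]
    # E-step on the mini-batch: responsibilities r[n, k]
    logp = (-0.5 * (x[:, None] - mu) ** 2 / var
            - 0.5 * np.log(2 * np.pi * var) + np.log(w))
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # Stochastic-approximation update of the running statistics
    g = t ** -0.6                              # decaying step size
    s0 = (1 - g) * s0 + g * r.mean(axis=0)
    s1 = (1 - g) * s1 + g * (r * x[:, None]).mean(axis=0)
    s2 = (1 - g) * s2 + g * (r * x[:, None] ** 2).mean(axis=0)
    # M-step: parameters from the running statistics
    w, mu, var = s0, s1 / s0, np.maximum(s2 / s0 - (s1 / s0) ** 2, 1e-6)

print(w.round(3), mu.round(3), var.round(3))   # ~ [0.5 0.5] [-2. 3.] [1. 1.]
```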
#### Component elimination strategies to fit mixtures of multiple scale distributions
Participants : Florence Forbes, Alexis Arnaud.
#### Approximate Bayesian Inversion for high dimensional problems
Participants : Florence Forbes, Benoit Kugler.
Joint work with: Sylvain Douté from Institut de Planétologie et d’Astrophysique de Grenoble (IPAG).
The overall objective is to develop a statistical learning technique capable of solving complex inverse problems in settings with specific constraints. More specifically, the challenges are: 1) the large number of observations to be inverted, 2) their large dimension, 3) the need to provide predictions for correlated parameters, and 4) the need to provide a quality index (e.g. uncertainty).
In the context of Bayesian inversion, one can use a regression approach, such as the so-called Gaussian Locally Linear Mapping (GLLiM) [7], to obtain an approximation of the posterior distribution. In some cases, exploiting this approximate distribution remains challenging, for example because of its multi-modality. In this work, we investigate the possible use of Importance Sampling to build on the standard GLLiM approach, improving the approximation induced by the method and better handling the potential existence of multiple solutions. Our approach can also be seen as a way to provide an informed proposal distribution, as required by Importance Sampling techniques. We test our approach on simulated and real data in the context of a photometric model inversion in planetology. Preliminary results have been presented at StatLearn 2019 [76].
#### MR fingerprinting parameter estimation via inverse regression
Participants : Florence Forbes, Fabien Boux, Julyan Arbel.
Joint work with: Emmanuel Barbier from Grenoble Institute of Neuroscience.
#### Characterization of daily glycemic variability in subjects with type 1 diabetes using a mixture of metrics
Participants : Florence Forbes, Fei Zheng.
Joint work with: Stéphane Bonnet from CEA Leti and Pierre-Yves Benhamou, Manon Jalbert from CHU Grenoble Alpes.
Glycemic variability (GV) is an important component of glycemic control for patients with type 1 diabetes: it must be taken into account when assessing treatment efficacy because it determines the quality of glycemic control and the patient's risk of disease complications. In a first study [24], our goal was to describe GV scores in patients with type 1 diabetes receiving pancreatic islet transplantation (PIT) in the TRIMECO trial, and to determine, for each index, thresholds predictive of PIT success.
In a second study, we address the issue of choosing an appropriate measure of GV. Many metrics have been proposed to account for this variability, but none is unanimously accepted among physicians. The inadequacy of existing measurements lies in the fact that they view variability from different aspects, so that no consensus has been reached as to which metrics to use in practice. Moreover, although glycemic variability from one day to another can show very different patterns, few metrics have been dedicated to daily evaluations. In this work [50], [30], a reference (stable-glycemia) statistical model is built based on a combination of daily computed canonical glycemic control metrics, including variability. The metrics are computed for subjects from the TRIMECO islet transplantation trial, selected when their $\beta$-score (composite score for grading success) is greater than 6 after transplantation. Then, for any new daily glycemia recording, its likelihood with respect to this reference model provides a multi-metric score of daily glycemic variability severity. In addition, by determining the likelihood value that best separates daily glycemia with a zero $\beta$-score from that with a score greater than 6, we propose an objective decision rule to classify daily glycemia as "stable" or "unstable". The proposed characterization framework integrates multiple standard metrics and provides a comprehensive daily glycemic variability index, based on which long-term variability evaluations and investigations of the implicit link between variability and $\beta$-score can be carried out. Evaluation in a daily glycemic variability classification task shows that the proposed method is highly concordant with the experience of diabetologists. A multivariate statistical model is therefore proposed to characterize the daily glycemic variability of subjects with type 1 diabetes. The model has the advantage of providing a single variability score that gathers the information power of a number of canonical scores, each too partial to be used individually. A reliable decision rule to classify daily variability measurements as stable or unstable is also provided.
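A minimal sketch of the decision rule described above, assuming a Gaussian reference model over a vector of daily metrics; the metric names, dimensions, values and threshold below are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)

# Hypothetical daily metric vectors (e.g. mean glycemia, SD, MAGE) for
# reference "stable" days (beta-score > 6 in the study's setting)
stable_days = rng.normal([7.0, 1.2, 2.5], [0.5, 0.2, 0.4], size=(300, 3))

# Reference model: multivariate Gaussian fitted to the stable days
ref = multivariate_normal(mean=stable_days.mean(axis=0),
                          cov=np.cov(stable_days, rowvar=False))

def classify_day(metrics, threshold=-8.0):
    """Log-likelihood under the reference model; low values flag instability."""
    ll = ref.logpdf(metrics)
    return ll, ("unstable" if ll < threshold else "stable")

print(classify_day(np.array([7.1, 1.3, 2.4])))   # a typical stable day
print(classify_day(np.array([10.0, 3.5, 6.0])))  # a highly variable day
```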
#### Dirichlet process mixtures under affine transformations of the data
Participant : Julyan Arbel.
Joint work with: Riccardo Corradin and Bernardo Nipoti from Milano Bicocca, Italy.
Location-scale Dirichlet process mixtures of Gaussians (DPM-G) have proved extremely useful in dealing with density estimation and clustering problems in a wide range of domains. Motivated by an astronomical application, in this work we address the robustness of DPM-G models to affine transformations of the data, a natural requirement for any sensible statistical method for density estimation. In [63], we first devise a coherent prior specification of the model which makes posterior inference invariant with respect to affine transformations of the data. Second, we formalize the notion of asymptotic robustness under data transformation and show that mild assumptions on the true data generating process are sufficient to ensure that DPM-G models feature such a property. As a by-product, we derive weaker assumptions than those provided in the literature for ensuring posterior consistency of Dirichlet process mixtures, which could be of independent interest. Our investigation is supported by an extensive simulation study and illustrated by the analysis of an astronomical dataset consisting of physical measurements of stars in the field of the globular cluster NGC 2419.
#### Approximate Bayesian computation via the energy statistic
Participants : Julyan Arbel, Florence Forbes, Hongliang Lu.
Joint work with: Hien Nguyen, La Trobe University Melbourne Australia.
Approximate Bayesian computation (ABC) has become an essential part of the Bayesian toolbox for addressing problems in which the likelihood is prohibitively expensive or entirely unknown, making it intractable. ABC defines a quasi-posterior by comparing observed data with simulated data, traditionally based on some summary statistics, the elicitation of which is regarded as a key difficulty. In recent years, a number of data discrepancy measures bypassing the construction of summary statistics have been proposed, including the Kullback-Leibler divergence, the Wasserstein distance and maximum mean discrepancies. In this work [79], we propose a novel importance-sampling (IS) ABC algorithm relying on the so-called two-sample energy statistic. We establish a new asymptotic result for the case where both the observed sample size and the simulated data sample size increase to infinity, which highlights to what extent the data discrepancy measure impacts the asymptotic pseudo-posterior. The result holds in the broad setting of IS-ABC methodologies, thus generalizing previous results that have been established only for rejection ABC algorithms. Furthermore, we propose a consistent V-statistic estimator of the energy statistic, under which we show that the large sample result holds. Our proposed energy statistic based ABC algorithm is demonstrated on a variety of models, including a Gaussian mixture, a moving-average model of order two, a bivariate beta and a multivariate g-and-k distribution. We find that our proposed method compares well with alternative discrepancy measures.
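For illustration, here is a sketch of ABC with the two-sample energy statistic on a toy Gaussian model — using simple rejection rather than the importance-sampling scheme of the paper, with a hand-picked tolerance and sample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_distance(x, y):
    """Two-sample energy statistic between 1-D samples x and y."""
    xy = np.abs(x[:, None] - y[None, :]).mean()
    xx = np.abs(x[:, None] - x[None, :]).mean()
    yy = np.abs(y[:, None] - y[None, :]).mean()
    return 2 * xy - xx - yy

# Toy model: data ~ N(theta, 1), prior theta ~ N(0, 10)
observed = rng.normal(2.0, 1.0, size=100)

accepted = []
for _ in range(2000):
    theta = rng.normal(0.0, 10.0)                    # draw from the prior
    simulated = rng.normal(theta, 1.0, size=100)     # simulate from the model
    if energy_distance(observed, simulated) < 0.05:  # tolerance epsilon
        accepted.append(theta)

print(len(accepted), np.mean(accepted))  # quasi-posterior centred near 2
```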
#### Industrial applications of mixture modeling
Participant : Julyan Arbel.
Joint work with: Kerrie Mengersen and Earl Duncan from QUT, School of Mathematical Sciences, Brisbane, Australia, and Clair Alston-Knox, Griffith University Brisbane, Australia, and Nicole White, Institute for Health and Biomedical Innovation, Brisbane, Australia.
In [61], we illustrate the wide diversity of applications of mixture models to problems in industry, and the potential advantages of these approaches, through a series of case studies. The first of these focuses on the iconic and pervasive need for process monitoring, and reviews a range of mixture approaches that have been proposed to tackle complex multimodal and dynamic or online processes. The second study reports on mixture approaches to resource allocation, applied here in a spatial health context but which are applicable more generally. The next study provides a more detailed description of a multivariate Gaussian mixture approach to a biosecurity risk assessment problem, using big data in the form of satellite imagery. This is followed by a final study that again provides a detailed description of a mixture model, this time using a nonparametric formulation, for assessing an industrial impact, notably the influence of a toxic spill on soil biodiversity.
|
### THE FLORISTIC CHARACTERISTICS OF THE TROPICAL RAINFOREST IN XISHUANGBANNA
Zhu Hua
1. Xishuangbanna Tropical Botanical Garden, the Chinese Academy of Sciences, Mengla 666303, PRC
• Online:1994-06-20 Published:2011-12-16
Abstract:
The general floristic characteristics of the tropical rainforest of Xishuangbanna are summarized in the present paper. The tropical rainforest is estimated to consist of more than 3,000 species of seed plants pertaining to more than 1,000 genera and about 180 families. Based on a comprehensive analysis of the distribution of taxa in two representative communities of the rainforest, the following deductions are made: the families, genera and species of tropical distribution make up about 80%, 94% and more than 90% of the flora respectively, of which the genera of tropical Asia make up 33%-42% of the total and the species of tropical Asia about 74% of the total. The flora is explicitly tropical in nature and forms a part of the tropical Asian flora. Occurring in the montane habitats of the northern margin of tropical SE Asia, the flora also embodies conspicuous characteristics of the marginal tropics. Xishuangbanna is geographically a transitional area from the true tropics to the subtropics, and an ecotone where the floristic element of Indo-Malaysia from the south, that of S Asia or the S Himalayas from the west, that of Indochina-S China from the southeast and that of S China from the northeast meet and overlap in their areal boundaries. The flora is therefore endowed with the characteristics of a floristic ecotone.
|
# Simple Lorentz transformation. Are there objections
1. Sep 10, 2007
### bernhard.rothenstein
I find in some textbooks the following generalization of the Galileo transformations
x=k(x'+vt')
x'=k(x-vt)
with the same k because if we transform from I to I' or from I' to I then the distortion factor of lengths or time intervals should be the same.
Are there objections?
2. Sep 10, 2007
### robphy
Can you cite specific textbook references?
Any objections depend on your goal or claim involving these transformations.
For example,
should these transformations form a group?
should the conservation of momentum be preserved under this transformation?
3. Sep 10, 2007
### bernhard.rothenstein
simple lorentz transformation
To the first question
Relativitätstheorie als didaktische Herausforderung
Journal Naturwissenschaften
Publisher Springer Berlin / Heidelberg
ISSN 0028-1042 (Print) 1432-1904 (Online)
Issue Volume 67, Number 5 / May, 1980
DOI 10.1007/BF01054529
Pages 209-215
I also have a Hungarian version of it. I have seen it in many other places, but I do not remember where.
The equations
x=k(x'+vt')
x'=k(x-vt)
lead, imposing the conditions x=ct and x'=ct', to
t=kt'(1+v/c)
t'=kt(1-v/c)
and so
k=1/sqrt(1-v^2/c^2)
giving us the LT for the space-time coordinates of the same event.
Is there more to say or ask, if the LT satisfy the conditions you are asking about?
Thanks for your help
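The algebra above is easy to verify symbolically; a minimal sketch (assuming SymPy is available):

```python
import sympy as sp

c, v, k, t, tp = sp.symbols('c v k t tprime', positive=True)

# From x = k(x' + v t') and x' = k(x - v t), with x = c t and x' = c t':
#   t = k t'(1 + v/c)  and  t' = k t (1 - v/c).
# Multiplying the two relations and cancelling t t' leaves a condition on k.
cond = sp.Eq(t * tp, k * tp * (1 + v / c) * k * t * (1 - v / c))
print(sp.solve(cond, k))   # the Lorentz factor, e.g. printed as c/sqrt(c**2 - v**2)
```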
4. Sep 10, 2007
### robphy
So, it looks like "[spatial] symmetry of the observers" and "constancy of the speed of light" lead to equations involving their temporal symmetry for the Doppler effect. This seems like the essence of the Bondi k-calculus (where that k is the Doppler factor, not your "k", which is really $\gamma$ at the end of the day) without the radar experiments or operational definitions... but in a different order, starting with spatial symmetry.
5. Sep 10, 2007
### bernhard.rothenstein
k is gamma
Sorry, that is not my k. It is the notation of Roman Sexl, and I have respected it as I respect that professor of physics at the University of Vienna. But that is not the essence of the problem, to start at the beginning of the day: the question is whether there are objections. As I see, you have none. Thanks for your answer.
6. Sep 12, 2007
### Meir Achuz
That is a simple derivation of the LT. It just lacks a bit of explanation of the assumptions and motivation that Prof. Sexl probably provided. With such a simple, straightforward derivation, there is no need to look for more complicated ones.
7. Sep 12, 2007
### robphy
In my opinion, it's useful to look for alternative derivations...
...particularly those which are pedagogically attractive, possibly building on a particular motivation (e.g. thought experiment, geometrical construction, experimental results, etc.) and on the target audience's preparation and ability (which, of course, varies among different target audiences).
While possibly interesting... for me, purely symbolic-algebraic derivations are not very effective for teaching relativity, although various "equations" may be obtained in a few steps.
8. Sep 12, 2007
### bernhard.rothenstein
Lt
Thanks for your help. My question is whether it is not simpler to avoid thought experiments by simply stating that a distortion in time intervals and lengths takes place, without knowing from the beginning the formulas which account for them.
I think that the outcome of teaching is measured by time invested and understanding achieved. My opinion is: reduce the first, increase the second, even if such an evaluation is not easy to make.
9. Sep 12, 2007
### yogi
What am I missing here? As Rob points out, you are not really deriving the LT, only gamma. And that is a one-step process from Minkowski via the invariance of the interval!
10. Sep 12, 2007
### bernhard.rothenstein
LT transformation
Thanks.
You are right stating that I derive Gamma but in the following context:
State that we can add only lengths measured by observers of the same reference frame, and that a distortion in length takes place of the type
dx=f(V)dx(0) where dx(0) is a proper length, dx the distorted length and f(V) an unknown function of the relative velocity but not of dx(0).
Then in I
dx=Vdt+f(V)dx' (1)
and in I'
dx'=f(V)dx-Vdt' (2)
Notice that
dx/dt=c and dx'/dt'=c. (3)
Combine (1),(2) and (3) in order to obtain
f(V)=1/sqrt(1-V^2/c^2) (4)
Return to (1) and (2) in order to recover LT.
Please help me telling where I am wrong.
The derivation presented above has much in common with a recently presented derivation of the LT in which the function f(V) is known start from the beginning as a result of a thought experiment (the eternal light clock).
11. Sep 13, 2007
### yogi
Substantively, I don't see a difference between this derivation and that given by A.P. French at pages 78 and 79 of his 1966 book "Special Relativity." Once you impose the condition of reciprocal symmetry together with x = ct
and x' = ct' the LT is recovered ...so if you are saying it is not necessary to presuppose anything additional, I would agree
12. Sep 13, 2007
### Gauged
The Lorentz transformation between the positions and times (x, y, z, t) as measured by an observer "standing still," and the corresponding coordinates and time (x′, y′, z′, t′) measured inside a "moving" space ship travelling with velocity u, is
x′ = (x − ut)/√(1 − u²/c²),
y′ = y,
z′ = z,
t′ = (t − ux/c²)/√(1 − u²/c²).
These equations can be compared with those relating measurements in two systems, one of which is rotated relative to the other:
x′ = x cos Θ + y sin Θ,
y′ = y cos Θ − x sin Θ,
z′ = z.
13. Sep 14, 2007
### bernhard.rothenstein
Lt
Thanks for the hint. I am collecting "simple derivations" of the LT.
14. Sep 14, 2007
### bernhard.rothenstein
Lt
I have looked in French. He starts, as many others do, with the "guessed" shape of the transformation. I think that an approach which starts by stating that in SR distortions in lengths and time intervals take place, and that we can add and compare only lengths and time intervals measured in the same inertial reference frame, is better motivated and more transparent. But de gustibus non est disputandum.
Kind regards and thanks for your help.
15. Sep 17, 2007
### meopemuk
These are a few references that I have:
A. R. Lee, T. M. Kalotas, "Lorentz transformations from the first postulate" Am. J. Phys. 43 (1975), 434
J.-M. Levy-Leblond, "One more derivation of the Lorentz transformation", Am. J. Phys. 44 (1976), 271
D. A. Sardelis, "Unified derivation of the Galileo and Lorentz transformations" Eur. J. Phys. 3 (1982), 96
H. M. Schwartz, "Deduction of the general Lorentz transformations from a set of necessary assumptions", Am. J. Phys. 52 (1984), 346
J. H. Field, "A new kinematical derivation of the Lorentz transformation and the particle description of light", Preprint KEK 97-04-145 (1997)
R. Polischuk, "Derivation of the Lorentz transformations", http://www.arxiv.org/abs/physics/0110076
Unfortunately, all these derivations (and your derivation is not an exception) share one weak point. They are normally performed for events associated with some simple physical systems, like non-interacting particles or freely propagating light rays. For example, a prominent role is often played by Newton's first law (which is valid for non-interacting particles only), which is used to deduce the linearity of the transformations. Another example is Einstein's second postulate (the invariance of the speed of light), which can be applied only to events associated with light pulses. There can be no objections against such derivations, and they can be done in a variety of different ways.
The question that worries me is this: how can we be sure that the same (Lorentz) transformation laws will be valid for events in systems of interacting particles? Do you agree that there is a logical jump when Lorentz transformations derived for non-interacting systems are generalized to all possible physical systems, and even said to be fundamental properties of space and time, i.e., completely independent of the physical system that is observed?
Eugene.
16. Sep 17, 2007
### bernhard.rothenstein
Lt
Thank you for your help. Can the problem you state be solved within the limits of SR?
17. Sep 17, 2007
### meopemuk
I don't think so. Einstein's special relativity makes an assumption which I find troublesome. It says that transformations of the space-time coordinates of events are universal and independent of the type of physical system in which the events occur and of the interactions acting in the system. This is an important postulate of SR. Unfortunately, this postulate was never clearly formulated and discussed. Why should we believe in it?
Eugene.
18. Sep 17, 2007
### bernhard.rothenstein
Lt
Sorry, but I have no answer to your question.
Bernhard
|
# Highlights
• Specifically developed for silicon solar cells, therefore easy-to-understand settings and output for researchers and engineers in this field
• By far fastest tool for simulating 2D / 3D carrier transport in silicon solar cell devices, enabled by the skin concept and optimized C++ code
• Intrinsically accounts for large-area effects such as distributed metal resistance, edge recombination or process inhomogeneities, without the need for complicated approaches such as coupling with SPICE simulations
• First-principles luminescence modelling from 3D electrical solution gives PL, EL and hyperspectral images with unmatched accuracy
• Includes 1D detailed solver for semiconductor carrier transport NOT employing quasi-neutrality (PC1D-equivalent), featuring ion transport for perovskite cells and a transient solver
• Skin solver to solve a non-neutral skin domain in 1D and parameterize the results into lumped properties (e.g. diffused emitter into $$J_{0,skin}$$)
• Multiscale solver: generic and fully automated coupling of the 1D skin solver and the 3D qn-bulk solver, enabling fast 3D cell simulations including the details of the skins
• Cloud computing: very high simulation speed on any computer, no dedicated high-performance hardware required; conveniently run large simulations while your computer is shut down
• Support from the software developer, a leading solar cell modelling expert
# Feature description
This section provides a brief description of the main features of Quokka3. See the User Guide and Modelling Guide for an in-depth description.
## Modelling concept
In Quokka3 a solar cell is conceptualized into separate regions:
• The bulk: it is the main absorber, and (for Si solar cells) can well be assumed to be everywhere quasi-neutral if defined to exclude the near-surface regions. Only the bulk is discretized in the depth coordinate.
• The skin layer: skins are region-wise homogeneous areas which cover everything between the quasi-neutral bulk and either the actual surface or the contact to the metal. Typical skins are contacted or non-contacted diffused surfaces, or passivated surfaces (including the inversion or accumulation layer). In Quokka3, skins can be defined via their lumped or detailed input parameters
• The contact layer: contacts in Quokka3 define the interface where current can flow between the skin and the metal. Where no contact is defined between the skin and the metal layer, they are assumed to be perfectly isolated.
• The metal layer: represents finger and busbar geometry and accounts for lateral current flow within them
• The pad layer: pads represent the probes or solder pads to which the plus and minus pole is applied, and through which the current is extracted. They are important only when solving current transport in the metal layer, because otherwise the entire metal geometry effectively represents a probe applying a constant potential.
The key to Quokka3's numerical efficiency, and to its specific usefulness for wafer-based silicon solar cells, is that it always treats the skins as parameterized boundary conditions when solving carrier transport in the bulk. The main lumped parameters describing a skin as a boundary condition are its recombination property $$J_{0,skin}$$ and its sheet resistance $$R_{sheet}$$, which are well established as a sufficient description of a Si solar cell in many cases. Notably, Quokka3 can also model skins in detail and then use a more general parameterization, which is applied in the multiscale approach, ensuring high accuracy also for arbitrary skin properties.
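As a rough illustration of what such a lumped boundary condition amounts to (a sketch with illustrative numbers, not Quokka3 code or its exact internal model; an ideality factor of 1 is assumed, and the lateral $$R_{sheet}$$ term is omitted):

```python
import numpy as np

Vt = 0.02585        # thermal voltage at 300 K [V]
J0_skin = 20e-15    # illustrative passivated-emitter J0 [A/cm^2]

def skin_recombination(delta_mu):
    """Recombination current density drawn by the skin boundary (ideality 1)."""
    return J0_skin * (np.exp(delta_mu / Vt) - 1.0)

for dmu in (0.55, 0.65, 0.70):   # quasi-Fermi level split at the bulk edge [V]
    print(f"{dmu:.2f} V -> {skin_recombination(dmu) * 1e3:.3f} mA/cm^2")
```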
Quokka3 uses a structured, orthogonal and non-equidistant mesh to discretize the solution domain and build the finite-differences expressions. This means that the solution domain is always strictly cuboidal, and that the geometry definition of the device needs to consist of primitive rectangular geometric features aligned to the coordinate axes. The geometric features are arbitrary in number, position and size, resulting in a generic geometry definition within this rectangular restriction. As an exception, a circle is also supported as a primitive shape. In most cases, however, it is recommended to use a square shape of equal area instead, which requires a much smaller mesh for accurate discretization. The orthogonal mesh is well suited for silicon solar cells, as most cases of interest can be approximated by rectangular device geometries. It comes with the benefits of a rapid and fully automated meshing algorithm (seconds for millions of elements) requiring no attention from the user, and fast and robust electrical solver numerics based on finite differences.
## Qn-bulk solver
### Simplified semiconductor transport model
The separate treatment of the skins comes with two decisive benefits compared to the generic approach of fully solving the semiconductor equations in the entire device:
• The volume of the skin is not discretized, resulting in a much smaller mesh for a given geometry
• The remaining bulk problem involves physics of lower complexity. In particular, employing the quasi-neutrality approximation removes the need to solve the Poisson equation (this is not to be confused with the low-injection approximation: the qn-bulk solver correctly models any injection level and does result in a correct non-zero electric field in the bulk).
As a result of the above, the numerical problem is orders of magnitude easier to solve than a full discretization including the details of the skin. By employing specifically developed C++ code for the finite-differences method and PETSc [ref] for the low-level number crunching, the qn-bulk solver handles several million elements on standard computer hardware in manageable times (hours). This performance (memory usage and simulation time for a given mesh size) is comparable to other state-of-the-art commercial numerical simulation software, and outperforms Quokka2 by up to 2 orders of magnitude. Combined with the skin concept, Quokka3 is the only tool on the market practically capable of solving large solar cell geometries, up to full-area solar cells, in 3D.
Being practical for such large geometries, as opposed to the common unit-cell simulations, the qn-bulk solver can include an additional conductive layer on top of the skin layer to represent current transport in the metal. This way, the generally distributed resistive effects of the metallization are fully accounted for in the single 3D solution domain, removing the need to separately determine a less accurate lumped $$R_{series}$$, or the high effort of coupling with subsequent SPICE simulations.
Currently, the qn-bulk solver only supports steady-state simulations.
### Ohmic mode
As a further simplified subset of the qn-bulk solver, Quokka3 supports solving a single current transport equation for a single potential, assuming constant values and equal polarities for volume resistivities, sheet resistances and contact resistivities. That means the device is considered purely ohmic, i.e. doping types (n-type or p-type) are ignored, and consequently diode behaviour, carrier densities, generation and recombination are not accounted for.
The ohmic mode is able to simulate resistance test structures such as the transfer-length method (TLM). It is particularly useful if the test-structure geometry cannot be accurately described by analytical formulas, or if the influence of artefacts such as perimeter regions should be included.
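For context, the analytical TLM evaluation that such simulations complement fits total resistance versus pad spacing to a straight line; a sketch with made-up measurement values:

```python
import numpy as np

# Hypothetical TLM measurement: total resistance vs pad spacing
d = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # spacings [cm]
R = np.array([2.1, 3.2, 4.3, 5.4, 6.5])        # measured resistance [ohm]
W = 1.0                                         # pad width [cm]

# Linear model: R(d) = (R_sheet / W) * d + 2 * R_c
slope, intercept = np.polyfit(d, R, 1)
R_sheet = slope * W                  # sheet resistance [ohm/sq]
R_c = intercept / 2                  # contact resistance [ohm]
L_T = R_c * W / R_sheet              # transfer length, long-contact limit [cm]
rho_c = R_c ** 2 * W ** 2 / R_sheet  # contact resistivity estimate [ohm cm^2]
print(R_sheet, R_c, L_T, rho_c)
```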
## 1D detailed solver
Complementary to the qn-bulk solver, in Quokka3 a 1D electrical solver for the full semiconductor equations not employing quasi-neutrality is implemented. The 1D detailed solver features state-of-the-art models for solving general semiconductor physics in silicon.
Next to silicon, a second area of focus for Quokka3 is modelling perovskite cells, which require some additional physics to be accounted for, in particular ion transport.
Currently, the 1D detailed solver features the following:
• Fully coupled solution of semiconductor equations in 1D in a single material (no interfaces)
• Ideal, Metal-Semiconductor (MS / Schottky) and Metal-Insulator-Semiconductor (MIS) contact physics
• Surface charge
• Surface SRH
• Fermi-Dirac statistics, injection-dependent band-gap-narrowing, incomplete ionization
• Both steady-state and transient (beta) mode
• Custom material properties
• Steady-state and transient ion transport (for perovskite cells) (beta)
• Transient, i.e. general SRH model (trapping) (beta)
In development and planned to be released until 2019 are the following
• multiple layers of different custom semiconductor materials, including multijunction layers
• interface physics: bifacial SRH, simple tunneling models (ideal, lumped resistance or via recombination)
### Detailed cell
The 1D detailed solver can be used to solve a semiconductor device (i.e. a solar cell) in 1D. This closely resembles the functionality of the well-known PC1D software.
### Skin solver
The 1D detailed solver can also be used to solve a skin domain in 1D and parameterize the results into lumped skin properties. Here, to the bottom of the domain a quasi-neutral boundary condition is applied. A steady-state operating point of the skin is then defined by the Fermi level split at the quasi-neutral boundary, and the net current through the skin. The skin solver can then generally parameterize the results of the detailed simulation into the lumped properties exactly describing this operating point of the skin. These lumped parameters are suitable for the qn-bulk solver as a boundary condition.
The most prominent usage scenario for the skin solver is to simulate a near-surface region of a silicon cell, e.g. a diffused emitter, and simulate the $$J_{0,skin}=J_{0e}$$ for a user-defined profile and surface recombination. It can also be used to simulate the effective contact resistance and injection dependent recombination of a Schottky-type contact.
By varying both the Fermi level split and the net current over the relevant range, any skin can be accurately described for any operating point of the solar cell it is part of, using the general parameterization. This works as long as it is valid to describe the skin in quasi-1D, and is naturally subject to successful convergence.
## Optical solver
### Illumination
Quokka3 does not (yet) support detailed optical modeling of a solar cell device based on surface morphology and thin film properties.
It supports importing user-defined generation profiles, and also a simple and rapid spectrally resolved optical model based on the lumped optical input parameters front surface transmission $$T_{ext}$$ and pathlength enhancement $$Z$$, the latter quantifying the device's light-trapping capability. This so-called TextZ model was shown to be accurate for wafer-based silicon solar cells and has the following benefits:
• It is a good way to import results from other optical modeling tools, as the required inputs can be easily extracted from most common ray tracers
• The input parameters are, to good approximation, independent of:
• the incident spectrum, enabling accurate quantum efficiency simulations (opposed to when importing a generation profile),
• the device thickness, allowing variation of the same without having to redo detailed optical simulations,
• the device temperature, again allowing variation of the same without having to redo detailed optical simulations.
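One plausible reading of such a lumped model, sketched below with entirely hypothetical spectral inputs (Quokka3's actual TextZ parameterization may differ in detail), is single-pass Beer–Lambert absorption along an enhanced path $$Z \cdot W$$:

```python
import numpy as np

# Entirely hypothetical spectral inputs
wl    = np.linspace(300e-9, 1200e-9, 50)   # wavelength [m]
phi   = np.full_like(wl, 4e27)             # photon flux density [1/(m^2 s m)]
T_ext = np.full_like(wl, 0.95)             # front surface transmission
alpha = 1e9 * np.exp(-wl / 150e-9)         # toy absorption coefficient [1/m]
Z, W  = 4.0, 180e-6                        # pathlength enhancement, thickness [m]

# Single-pass Beer-Lambert absorption along the enhanced path Z*W
absorbed = T_ext * (1 - np.exp(-alpha * Z * W))
J_gen = 1.602e-19 * np.trapz(phi * absorbed, wl)   # generated current [A/m^2]
print(J_gen * 0.1, "mA/cm^2")
```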
For 2D and 3D simulations, shading of the metal geometry is considered. Here the user can define a "shading fraction" to account for <100% shading due to the fact that light hitting metal can still find its way into the cell via reflections and scattering.
### Luminescence
Having the full 1D-3D distribution of Fermi level splits at hand as a result of the electrical solver, it is straightforward to calculate spontaneous emission at every point using Planck's law. Reabsorption and internal reflections are handled by a statistical emission function, resulting in a luminescence spectrum at every point of the surface. With a known spectral sensitivity of the optical system and detector, this hyperspectral map is then converted into a luminescence intensity map.
As non-uniform carrier densities, both laterally and depth-wise, and re-absorption are fully accounted for, the resulting EL / PL images are valid for any operating point of the cell with minimal simplifications. For example, the relatively high signal in $$J_{sc}$$ luminescence images caused by the built-in potential is correctly simulated.
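The per-point emission step can be sketched via the generalized Planck law, where the spontaneous emission rate scales as $$\alpha(E)\,E^2 / (\exp((E-\Delta\mu)/kT) - 1)$$; the absorption coefficient below is a toy stand-in:

```python
import numpy as np

kT = 0.02585                          # [eV] at 300 K
E = np.linspace(1.0, 1.4, 400)        # photon energy grid [eV]
alpha = np.maximum(E - 1.12, 0) ** 2  # toy absorption coefficient above the Si gap

def emission(delta_mu):
    """Generalized-Planck spontaneous emission spectrum (arbitrary units)."""
    return alpha * E ** 2 / (np.exp((E - delta_mu) / kT) - 1.0)

# Emitted flux grows roughly exponentially with the quasi-Fermi level split:
ratio = np.trapz(emission(0.70), E) / np.trapz(emission(0.65), E)
print(ratio)   # ~ exp(0.05 / kT) in the Boltzmann limit
```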
By iteratively coupling the electrical solver with the luminescence solver photon recycling can be addressed. In each step the reabsorbed photons from the luminescence solver are added to the generation rate. This feature is planned to be released late 2018.
## Multiscale solver
In Quokka3, multiscale modeling means that one or more skins are defined by detailed (not lumped) inputs, solved in 1D by the skin solver, parameterized, and coupled to the qn-bulk solver as boundary conditions. The main advantage is a substantially improved computational speed compared to a full detailed 3D simulation of the entire domain, while not compromising accuracy when using the general skin parameterization for typical silicon solar cell skins. For medium to large mesh sizes of the qn-bulk solver, the skin solver does not add significantly to the overall simulation time, meaning that the capability to solve large geometries can be used fully within multiscale modeling.
Notably, there are three different degrees of complexity for coupling the skin solver to the qn-bulk solver:
• single-point coupling: the skin is solved once only for a representative operating point to derive constant values for the skin parameters. This requires the least computational effort, is most robust, and is valid for typical skins including high doping or high charge. This essentially means deriving single values for $$J_{0,skin}$$ and $$R_{sheet}$$, i.e. using the standard conductive boundary model.
• injection dependent coupling (beta): the skin is solved for the relevant range of quasi-Fermi level splits, but at a constant net current density. This is best for skins which do show significant injection-dependent recombination but low (or constant) vertical resistance (e.g. passivation with moderate charge)
• full coupling (beta): the skin is solved for the full relevant range of the quasi-Fermi level split and the net current density. It is costlier in terms of computational demand and more susceptible to convergence problems, but is the generally valid approach for any skin properties. Work on improving the robustness of this coupling mode is ongoing, with the aim of making it the robust default choice.
In summary, the multiscale solver has the following benefits:
• It's fully automated, meaning the user defines the domain, including the details of the skins, exactly as for a full detailed simulation, but benefits from the vastly improved speed.
• The automated coupling within a single tool ensures consistent optical modeling and consideration of imperfect skin collection efficiency, which is difficult to ensure when coupling manually using different tools.
• Within a single simulation, different skins can individually be described by lumped inputs or detailed inputs. E.g. one can vary the doping profile of the non-contacted front emitter while assuming a lumped $$J_{0,skin}$$ for the contacted emitter regions and the rear contacts.
• When approximating a textured surface by a planar solution domain, a texture multiplier can be applied exclusively to bulk-side recombination without affecting the short-wavelength collection efficiency, which is an unavoidable inconsistency in full detailed simulations.
|
A solution is to use text mode for the symbol (\mbox{\DJ} inside math, or \text{\DJ} if the package amstext or amsmath is loaded):

plt.xlabel(r'$\mu$')
plt.title(r'\DJ')
plt.savefig('test.pdf')

Common symbols. You can use a subset of TeX markup in any Matplotlib text string by placing it inside a pair of dollar signs ($). The full LaTeX option is activated by setting text.usetex: True in your rc settings, either directly:

import matplotlib.pyplot as plt
plt.rc('text', usetex=True)

or by accessing the rcParams:

import matplotlib.pyplot as plt
params = {'text.usetex': True}
plt.rcParams.update(params)

TeX uses the backslash \ for commands and symbols, which can conflict with special characters in Python strings; hence the raw r'...' strings above. For example, passing the title "A$\times$B" in a non-raw string renders as "A imes B" (with "imes" in LaTeX font), because Python interprets \t as a tab; it is not that matplotlib fails to implement that particular symbol.

I have recently drawn up a lot of plots in Python (matplotlib), but then I realized that I couldn't find their LaTeX equivalents. I am specifically looking to generate circle, square, and diamond symbols which are half-filled (left, right, top or bottom).

Below we give a partial list of commonly used mathematical symbols; most other symbols can be inferred from these examples. For example, the title() function, which draws a chart title, accepts math text:

# math text
plt.title(r'$\alpha > \beta$')

To make subscripts and superscripts, use the '_' and '^' symbols. We can also render Greek alphabets and many more symbols in Matplotlib using the TeX format:

import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = np.linspace(0, 3)
y = np.sin(x)
plt.plot(x, y)
plt.title(r'$\beta \rho \lambda \xi$', fontsize=30)

Along with the appropriate declarations in text.latex.preamble, it is possible to use \boldsymbol, e.g. title(r'$\boldsymbol\psi_N$=0.95'), to achieve bold symbols. See the LaTeX WikiBook (Mathematics) and the Detexify app to find any symbol you can think of. Text handling with matplotlib's LaTeX support is slower than matplotlib's very capable mathtext, but is more flexible, since different LaTeX packages (font packages, math packages, etc.) can be used. See the LaTeX WikiBook for more information (especially the section on mathematics).

With mathtext, matplotlib.rcParams['mathtext.fontset'] = 'stix' works fine; the nicest LaTeX-like result comes from setting it to 'cm' instead, i.e. "computer modern", the font of LaTeX. Changing the family via matplotlib.rcParams['font.family'] = 'cmu serif' may also work, though in my case I had a problem with the minus sign.

If the LaTeX symbols are not displayed correctly when executing code in Jupyter/IPython, set:

import matplotlib
matplotlib.rcParams['text.usetex'] = True

We also need LaTeX, dvipng and Ghostscript (version 9.0 or later) to render the LaTeX formulae, with all the installation dependencies added to the PATH. With the IPython Notebook, in a Markdown cell you can enter a LaTeX expression between two dollar signs.
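Putting the pieces together, a minimal end-to-end script (a sketch: it assumes a working LaTeX + dvipng + Ghostscript installation, and 'test.pdf' is just an example output name):

import matplotlib
matplotlib.rcParams['text.usetex'] = True   # hand text rendering to LaTeX
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 3)                       # sample points
y = np.sin(x)

plt.plot(x, y)
plt.xlabel(r'$\mu$')                        # raw strings avoid \-escape issues
plt.title(r'$\alpha > \beta,\ \beta \rho \lambda \xi$')
plt.savefig('test.pdf')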
|
# tri root math
The cube root of the given number. This is a channel where you can learn magic, cube solving, interesting puzzles etc.; you can learn everything from basic to advanced, and it is the perfect platform for knowing how to entertain your friends, family etc. I have here my complete series of "Basic Math" lessons. Algebra I is the most common math course taken on Edgenuity, so it is our most frequently updated: about once every 6 months new questions are added to the unit tests and we update this section first. Some of the lecture answer key pairs include: Polynomials, Factoring, Relations and Matrices.

25 chapters of quick and easy maths shortcut tricks, covering what you need in every competitive exam: multiplication tricks, division tricks, and many more tips and shortcuts that will help you solve math problems easily, quickly and efficiently. For example, to square 15 in your head: step one, determine the distance 15 is away from 10, which is 5; then take the 5, square it and add it to 200, giving 225. Finding the square root is easy for any perfect square under 100, and with a similar trick you can find the square root of any number (perfect square or non-perfect square) in 2 seconds. Learn the cube root math trick with free interactive flashcards; choose from 500 different sets of cube root math trick flashcards on Quizlet. Here are a couple of easy divisibility rules to begin with; rather than whip out the calculator, use these simple shortcuts to do the math in your head: a number is divisible by 2 if the last digit is a multiple of 2 (210), and divisible by 3 if the sum of the digits is divisible by 3 (522, because the digits add up to 9, which is divisible by 3).

Roots are the inverse of powers. The C# Math class has many methods that allow you to perform mathematical tasks on numbers; Math.Sqrt(x) returns the square root of x. In JavaScript, Math.cbrt(x) (syntax: Math.cbrt(x); parameter x, a number) returns the cube root of the given number as a floating point value. Because cbrt() is a static method of Math, you always use it as Math.cbrt(), rather than as a method of a Math object you created (Math is not a constructor). For all $x \geq 0$ we have $\sqrt[3]{x} = x^{1/3}$, so the function can be emulated by a polyfill. The cube root of a number can also be calculated manually by raising the number to (1/3) with the exponentiation operator (^), and in a spreadsheet the POWER() function is useful for both powers and exponents. To calculate the square root in Python we have basically 5 methods or ways; the most common or easiest is the sqrt function, which is inbuilt in the math module: you have to import the math package (module), and sqrt(x) returns the square root of x as a floating point number. However, in math and engineering we frequently have the need to find the square root of a negative number; this involves the symbol i, which stands for the square root of negative one, the "imaginary" number. A free roots calculator finds roots of any function step-by-step; use such a calculator to find the cube root of positive or negative numbers. (Numerical root finders, such as those in the ROOT::Math namespace, take parameters like int maxIterations, the maximum number of iterations; the desired accuracy, e.g. 1e-14; and the low value of the range where the root is supposed to be, e.g. 100.)

The root test doesn't compare a new series to a known benchmark series; it works by looking only at the nature of the series you're trying to figure out. You use the root test to investigate the limit of the nth root of the nth term of your series.

The Mathematics 3 course, often taught in the 11th grade, covers Polynomials; Logarithms; Transformations of functions; an extension of the worlds of Equations and Modeling; Trigonometric functions; Rational functions; and an extension of the world of Statistics and Probability. In Algebra 2, students learned about the trigonometric functions: Sine, Cosine and Tangent are the main functions used in trigonometry and are based on a right-angled triangle. In elementary algebra, the quadratic formula is a formula that provides the solution(s) to a quadratic equation. Root, in mathematics, is a solution to an equation, usually expressed as a number or an algebraic formula.

Example 14. Reduce the equation $\sqrt{3}\,x + y - 8 = 0$ to normal form $x\cos\omega + y\sin\omega = p$, where $p$ is the perpendicular distance from the origin, and find the values of $p$ and $\omega$. From $\sqrt{3}\,x + y = 8$, dividing both sides by $\sqrt{(\sqrt{3})^2 + 1^2} = \sqrt{3 + 1} = \sqrt{4} = 2$ gives $\frac{\sqrt{3}}{2}x + \frac{1}{2}y = 4$, so $\cos\omega = \frac{\sqrt{3}}{2}$, $\sin\omega = \frac{1}{2}$, hence $\omega = 30^\circ$ and $p = 4$.

(In music, since the perfect 11th, i.e. an octave plus perfect fourth, is typically perceived as a dissonance requiring a resolution to a major or minor 10th, chords that expand to the 11th or beyond typically raise the 11th a semitone, thus giving an augmented or sharp 11th, an octave plus a tritone from the root of the chord, and present it in conjunction with the perfect 5th of the chord.)

The English prefix tri-, derived from both Latin and Greek roots, means "three": a triangle is a figure with "three" angles. Math, as one might expect, often uses number prefixes, and the prefix tri- meaning "three" is no exception. Below, find a comprehensive list of most prefixes used in math, a table of number prefixes in English; the cardinal series are derived from cardinal numbers. A list of words that start with tri (words with the prefix tri) can be formed from any letters in tri plus an optional blank or existing letter, and sorted by length or by how common the words are. With all kinds of engaging activities, worksheets help to reinforce the meaning of each prefix, suffix and root word: prefix tri- ("three"), root word "angle" (two lines meeting at a point), new word "triangle" (a closed figure with three angles). The Latin root prim, meaning "first", is the word origin of a good number of English words, such as prime, primitive, and primate; perhaps the easiest way to remember that prim means "first" is through the adjective primary. Start studying Greek and Latin roots: tri, quad/quar, penta/quint.

I have decided to open Tri Valley Math to address a need in an afterschool program that combines math tutoring with Math Club. At our lessons we offer support and enrichment with grade-level math while developing critical and analytical thinking, reasoning, and problem solving, and cultivating love for learning math. You will find a detailed blog post on how I use math tri-folds during guided math groups; these tri-folds make planning for small groups a simple task (place one in an independent math workshop center, or use the 3rd tri-fold as an assessment). Sign in to IXL for Tri-Valley Central School District: students will love earning awards and prizes while improving their skills in math, language arts, and Spanish. We are here to assist you with your math questions; you may speak with a member of our customer support team by calling 1-800-876-1799 (phone support is available Monday-Friday, 9:00AM-10:00PM ET). If you are having problems entering the answers into your online assignment, you will need to get assistance from your school.
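To tie the programming remarks above together, a small Python sketch (Python chosen to match the sqrt discussion; the inputs are arbitrary):

import math
import cmath

print(math.sqrt(25))     # 5.0 -- square root from the math module
print(25 ** 0.5)         # 5.0 -- the exponentiation operator instead
print(27 ** (1 / 3))     # ~3.0 -- cube root as x ** (1/3), for x >= 0

# math.sqrt raises ValueError for negative inputs; the imaginary unit i
# (spelled 1j in Python) is needed instead, via the cmath module:
print(cmath.sqrt(-4))    # 2j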
|
If I correctly understand, you are misinterpreting the meaning of the product and sum of observables.
When you say "We can now define a sum and a product of observables. These are obtained by performing the two measures and then adding or multiplying their values."
This cannot possibly describe the usual sum A+B and product AB of operators. For the product, it is not even hermitian unless A and B commute. Agreed, A+B is hermitian, but the spectrum of A+B does not contain the result of the sum of a measurement of A followed by a measurement of B (in either way), again unless A and B commute. For a counter-example take $A=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$ and $B=\begin{pmatrix}0&1\\1&0\end{pmatrix}$.
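A quick numerical check of this counter-example (a sketch assuming numpy):

import numpy as np

A = np.array([[1, 0], [0, -1]])   # measurement outcomes +1, -1
B = np.array([[0, 1], [1, 0]])    # measurement outcomes +1, -1

# "Measure A, then B, then add" can only give -2, 0 or +2, yet:
print(np.linalg.eigvalsh(A + B))  # [-1.414..., 1.414...], i.e. +/- sqrt(2)

# And AB is not hermitian, since A and B do not commute:
print(np.allclose(A @ B, (A @ B).conj().T))  # False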
I hope I correctly understood your question.
|
# How many combinations are possible (check image)?
I got 960 but since that isn't one of the options, I think perhaps there are elements to the question that aren't in the image posted as the question.
#### Explanation:
Let's first look at the twins. We need them to sit together. There are 8 seats at the table, and so there are 8 places where the twins can be (seats 1, 2; 2, 3;...8, 1). In addition, we can have the brother on the right or the sister on the right, and so there are 2 ways they can sit in their seats. That's $8 \times 2 = 16$ different ways for the twins to sit.
Now the uncle who can't sit next to the twins. So wherever the twins end up, that leaves 4 seats where the uncle can be. That means that there are $16 \times 4 = 64$ different seating arrangements of the uncle and the twins.
Now we have the remaining 5 people to seat. There are 5 seats, and so we can seat them 5! = 120 ways.
This means that we have $64 \times 120 = 7680$ ways to seat the people - but this assumes we are in a row and not in a circle.
Because we are at a round table, we don't have a "starting seat" or an "ending seat" - and so having the people arranged in seats 1 through 8 is the same as having the same arrangement in seats 2 through 1, and so on. And so we need to divide by the number of seats to rid ourselves of duplicates:
$\frac{7680}{8} = 960$
which doesn't match up with any of the choices listed. Perhaps there are elements to the question that weren't posted in the image?
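A brute-force count confirms the $960$ (a sketch; the labels below are placeholders for the eight people):

from itertools import permutations

people = ['twin1', 'twin2', 'uncle', 'p4', 'p5', 'p6', 'p7', 'p8']

def ok(seating):
    n = len(seating)
    def adjacent(a, b):
        i, j = seating.index(a), seating.index(b)
        return (i - j) % n in (1, n - 1)
    # twins together, uncle next to neither twin
    return adjacent('twin1', 'twin2') and \
           not adjacent('uncle', 'twin1') and \
           not adjacent('uncle', 'twin2')

count = sum(ok(p) for p in permutations(people))
print(count, count // 8)   # 7680 960, dividing out the 8 rotations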
|
# Math Help - Unbiased Estimation and Method of Moments
1. ## Unbiased Estimation and Method of Moments
I need your help...Please read the document that I attached here...thanks!
2. To save others the trouble of opening the attachment:
1. Let $x_1, x_2, \ldots, x_n$ be a sample from a Bernoulli distribution with parameter p.
$P[X = x] = p^x (1 - p)^{1-x} \, I_{\{0, 1\}}(x)$
a. Derive the method of moments estimator of p.
b. Verify if your method of moments estimator of p is unbiased for p.
2. Let $x_1, x_2, \ldots, x_n$ be a sample from a Gamma distribution with parameters $\alpha$ and $\beta$.
$f(x) = \frac{x^{\alpha - 1} e^{-x/\beta}}{\Gamma(\alpha) \beta^{\alpha}}, \, x > 0, \, \beta > 0$
$f(x) = 0, ~ x \leq 0$
a. If $\beta$ is known, derive the method of moments estimator of $\alpha$.
b. Verify if your method of moments estimator of $\alpha$ is unbiased for $\alpha$.
Originally Posted by aadbaluyot
I need your help...Please read the document that I attached here...thanks!
http://www.mathhelpforum.com/math-he...tion-help.html (posts #1, #2)
http://www.mathhelpforum.com/math-he...tatistics.html (posts #1, #2)
http://www.mathhelpforum.com/math-he...estimator.html
http://www.mathhelpforum.com/math-he...estimator.html
http://www.mathhelpforum.com/math-he...estimator.html
1. a. $E(X) = p$.
Sample mean $= \frac{x_1 + x_2 + \cdots + x_n}{n}$.
So use $p = \hat{p} = \frac{x_1 + x_2 + \cdots + x_n}{n}$ as the estimator.
1. b. Show whether or not $E(\hat{p}) = p$.
-----------------------------------------------------------------------------------------
2. a. $E(X) = \alpha \, \beta \Rightarrow \alpha = \frac{E(X)}{\beta}$.
Sample mean $= \frac{x_1 + x_2 + \cdots + x_n}{n}$.
So use $\alpha = \hat{\alpha} = \frac{x_1 + x_2 + \cdots + x_n}{n \, \beta}$ as the estimator.
2. b. Show whether or not $E(\hat{\alpha}) = \alpha$.
3. ## Unbiased Estimation and Method of Moments 2
Have you answered letter b. which is verifying if the moments estimator of p is unbiased for p. and the other one is in gamma distribution. Thanks!
4. Originally Posted by aadbaluyot
Have you answered letter b. which is verifying if the moments estimator of p is unbiased for p. and the other one is in gamma distribution. Thanks!
I have shown you how to answer letter b. in both questions and have given you the answer to part a., without which b. can't be done.
If you show your working and say where you're stuck I will be able to give more help.
5. I have verified that the methods of moments estimators of p and α are unbiased for p and α. Am i right? My computation is attached here... Thanks for the big help.
6. Originally Posted by aadbaluyot
I have verified that the methods of moments estimators of p and α are unbiased for p and α. Am i right? My computation is attached here... Thanks for the big help.
Looks fine.
7. ## for MR. Fantastic
Sir, do you know any threads about bayesian estimation, maximum likelihood estimation and confidence interval that i can use for studying? Thanks!
8. Originally Posted by aadbaluyot
Sir, do you know any threads about bayesian estimation, maximum likelihood estimation and confidence interval that i can use for studying? Thanks!
I suggest you search the MHF forums using key words.
I also suggest you use Google.
And a visit to the probability and statistics section of the library of the institute you study at would be time well spent.
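A quick Monte Carlo sanity check of both estimators (a sketch assuming numpy; the parameter values and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 20000

# Bernoulli(p): the method-of-moments estimator is the sample mean.
p = 0.3
p_hat = rng.binomial(1, p, size=(trials, n)).mean(axis=1)
print(p_hat.mean())   # close to 0.3, consistent with E[p_hat] = p

# Gamma(alpha, beta) with beta known: alpha_hat = (sample mean) / beta.
alpha, beta = 2.0, 1.5
a_hat = rng.gamma(alpha, beta, size=(trials, n)).mean(axis=1) / beta
print(a_hat.mean())   # close to 2.0, consistent with E[alpha_hat] = alpha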
|
# What to do if I disagree with a moderator decision?
I have a problem with an insulting comment on a post which I flagged two times as offensive and not constructive, but the moderators did not delete it.
What is the next level where I can report my complaint?
• If this is the question about handling this one particular flag, you should add (specific-flag) tag. If you want to ask about more general issue stated in the title, then this tag is not suitable. But I would recommend removing the link, since it will distract from the main question. (You can explain that you flagged some comment even without explicitly linking to the particular comment.) Jun 24 '15 at 11:52
• For example, I would upvote the general question. (Since this might be a useful information.) I might downvote the question about particular flag. (I do not agree that it is offensive. Maybe it can be called non-constructive, but it is probably not bad enough for flagging.) The way the question is phrased I am not sure whether you are asking the general question or you want to discuss this specific instance. So I am not sure how to vote. Jun 24 '15 at 11:54
• @MartinSleziak I think the answer is the same to both issues and anyways I would like to know about both solutions. Jun 24 '15 at 11:58
• I think that advice from this post is very reasonable: "It's often best for you to try to work things out at as low a level as possible." (The post is about a more serious issue than just handling flags.) Jun 24 '15 at 12:46
• Are we talking about the comment that (currently) has 11 upvotes? Jun 24 '15 at 13:08
• @GerryMyerson Yes. Jun 24 '15 at 13:10
• Well, emcor, that can be interpreted as an indication of the community viewpoint on the comment. We wouldn't want the moderators to act contrary to the standards of the community, would we? Jun 24 '15 at 13:14
• IANAL, but I seriously doubt that any court has ever stated that an individual is entitled to anything simply because the individual feels his or her reputation has been harmed. Jun 24 '15 at 13:25
• @GerryMyerson You ought to be joking with this "community viewpoint" comment. This is a terrible line of argument and could be used to justify all kinds of abuse and discrimination, here and especially elsewhere. (This is not altered by the fact that I personally consider the specific comment in question as harmless in its intent.)
– quid Mod
Jun 24 '15 at 13:31
• On main, I would have agreed that an analogous comment would be unconstructive and borderline rude. On meta, not so much. This is the place to voice opinion.
– mrf
Jun 24 '15 at 13:40
• I don't understand the downvotes on this meta-post. It is a completely fine question to ask what to do when one disagrees with moderators. Jun 24 '15 at 14:05
• "Insulting"? The comment in question is supposed to be insulting? How?
– Did
Jun 24 '15 at 14:24
• @emcor Please don't be ridiculous. Even in that biased news report it's explicitly written "Today’s decision doesn’t have any direct legal effect. It simply finds that Estonia’s laws on site liability aren’t incompatible with the ECHR. It doesn’t directly require any change in national or EU law." How you can interpret that decision as you having a human right not to be offended in public is beyond my comprehension. Jun 24 '15 at 14:51
• I agree with you, the comment is rude, however he isn't attacking you so it doesn't even count as ad hominem. You shouldn't be hurt by the comment, he is only commenting on the validity of the idea. Jun 24 '15 at 15:56
• FYI: The comment under consideration has been deleted. Jun 24 '15 at 18:26
There is a contact us link at the bottom of every page that will allow you to contact SE directly. Moderators do not have access to messages sent this way, although SE staff may reveal certain details if/when they follow up with us.
I have doubts that they will involve themselves too much in the moderation of comments, but I have been wrong before. On the other hand, if enough regular users flag a comment (the threshold appears to be $3 + \lfloor \frac {\mathrm {score}} 3 \rfloor$, with possible reductions due to content) it will be deleted.
• Before I accept this as answer, I will try out the link... Jun 24 '15 at 13:21
• I contacted them... Jun 24 '15 at 17:02
I will just echo what Arthur said. I have flagged content before and have had the flags rejected. I have then contacted SE directly and every time I did I had the content removed. From personal experience, I would say that this is a good way to go.
Note, if your complaint is also rejected from the higher-ups, then I simply just wouldn't worry about it. I don't think that there is a need to contact SE everytime the moderators don't come down on your side. Often I think it is better to just let it go.
If you want to try something between flagging and contacting SE directly, then you can try to catch a moderator in chat. Here you might get a nicer response and you might be able to hear their side.
• +1 for the "Often I think it is better to just let it go."
– Surb
Jun 24 '15 at 14:39
• I did let go many flags, but I was really unhappy with this one because they had deleted my comments I sent as defense, but not the comment in question. How can I contact the moderators in chat? Jun 24 '15 at 20:20
• @emcor: I am not sure how you can contact the moderators in chat. In the past I have been lucky and caught a moderator who coincidentally happened to be in chat. Jun 24 '15 at 20:24
• The name of the chat room Math Mods' Office suggests that there is a real possibility of finding a mod in that room. @emcor
– user147263
Jun 24 '15 at 20:31
|
# Appendix - Extra Word Problems: 45
The width is 10 centimeters
#### Work Step by Step
We consider the width of the box to be $x$ centimeters; three times the width is then $3x$ cm. Since the length ($34$ cm) is $4$ centimeters more than three times the width, we can write and solve the equation: $3x + 4 = 34$, so $3x = 34 - 4 = 30$, giving $x = \frac{30}{3} = 10$. Therefore, the width is 10 centimeters.
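A one-line check of this solution (a sketch using sympy, if it is available):

from sympy import symbols, solve

x = symbols('x')
print(solve(3*x + 4 - 34, x))   # [10], confirming a width of 10 cm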
|
# How to prove $\langle x,y\rangle\cong\langle x\rangle+ \langle y\rangle$ in groups?
How to prove $$\langle x,y\rangle\cong\langle x\rangle+ \langle y\rangle$$ in groups?
I am not sure if got the notations right, basically I was wondering given an additive group $$G$$, which is commutative, and two elements in $$G$$, $$x$$ and $$y$$, I was wondering if the subgroup generated by $$x$$ and $$y$$ would be isomorphic to the direct sum of $$\langle x\rangle$$ and $$\langle y\rangle$$.
I hope I have not messed up somewhere but I thought the natural map would be $$\phi: \langle x\rangle+ \langle y\rangle \to\langle x,y\rangle$$ such that $$\phi(u,v)=u+v.$$ Now I can show this is group homomorphism and surjective quite easily, are there easy ways of showing this is also injective?
Many thanks!
It's not true. For instance, if you had $$x=y$$, this is clearly going to fail, since $$\langle x,y\rangle$$ would just be $$\langle x\rangle$$. However, you can get a related true statement out of this.
First, note that your definition of $$\phi$$ does not really make sense; the elements of $$\langle x,y\rangle$$ are elements of $$G$$, not pairs $$(u,v)$$. I think you might have your domain and codomain reversed - you might instead want the function $$\phi:\langle x\rangle \oplus \langle y\rangle \rightarrow \langle x,y\rangle$$ where, if we represent $$\langle x\rangle \oplus \langle y\rangle$$ to be the set of pairs $$(u,v)$$ with $$u\in \langle x\rangle$$ and $$v\in\langle y\rangle$$, we have $$\phi(u,v)=u+v.$$ I think this is probably what you meant, but we have to be precise. We can compute the kernel of this map. In particular, we get $$\phi(u,v)=0$$ if and only if $$u+v=0$$. The set of pairs $$(u,v)\in \langle x\rangle \oplus \langle y\rangle$$ looks like a copy of $$\langle x\rangle \cap \langle y\rangle$$, since if you pick any $$u$$ in this intersection, then $$-u$$ remains in the intersection (and if $$u+v=0$$ then $$u=-v$$ must be in $$\langle y\rangle$$ as well as $$\langle x\rangle$$).
If you use the map $$\phi(u,v)=u-v$$, which also works, you can very directly see that $$\langle x,y\rangle = \frac{\langle x\rangle \oplus \langle y\rangle}{\Delta(\langle x\rangle \cap \langle y\rangle)}$$ where $$\Delta$$ is the embedding of $$\langle x\rangle \cap \langle y\rangle$$ into $$\langle x\rangle \oplus \langle y\rangle$$ taking $$g$$ to $$(g,g)$$ - the point being that the join of the groups is the direct sum mod the intersection. This is, fairly often, a nice fact to know - it's similar, though not quite identical, to the second isomorphism theorem.
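For a concrete illustration of this (an example of my own, not from the question): take $G = \mathbb Z$, $x = 2$, $y = 3$. Then $\langle x,y\rangle = \mathbb Z$ since $\gcd(2,3)=1$, while $\langle x\rangle \oplus \langle y\rangle \cong \mathbb Z^2$, so the naive isomorphism fails. The quotient formula does hold: $\langle x\rangle \cap \langle y\rangle = \langle 6\rangle$, and under the identification $\langle 2\rangle \oplus \langle 3\rangle \cong \mathbb Z^2$ sending $(2a,3b)\mapsto(a,b)$, the diagonal copy $\Delta(\langle 6\rangle)$ becomes the subgroup generated by $(3,2)$; since $\gcd(3,2)=1$ this is a direct summand, so $\mathbb Z^2/\langle(3,2)\rangle \cong \mathbb Z = \langle 2,3\rangle$, as claimed.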
The element $(u,v)$ is not an element of $\langle x,y\rangle$... Instead, an arbitrary element of $\langle x,y \rangle$ is of the form $ax + by$, where $a, b \in \mathbb Z$. You could then try to define $\phi$ by $\phi(ax + by) = (ax, by)$, but note that this is well-defined only when $\langle x\rangle \cap \langle y\rangle = \{0\}$: otherwise the same element has several representations of the form $ax + by$ (as the $x = y$ example above shows), and no such isomorphism exists in general.
|
# how to find electrical conductivity
From Wikipedia
Electrical resistivity and conductivity
Electrical resistivity (also known as resistivity, specific electrical resistance, or volume resistivity) is a measure of how strongly a material opposes the flow of electric current. A low resistivity indicates a material that readily allows the movement of electric charge. The SI unit of electrical resistivity is the ohm-metre [Ω·m]. It is commonly represented by the Greek letter ρ (rho).
Electrical conductivity or specific conductance is the reciprocal quantity, and measures a material's ability to conduct an electric current. It is commonly represented by the Greek letter σ, but κ (esp. in electrical engineering) or γ are also occasionally used. Its SI unit is siemens per metre (S·m−1) and the CGSE unit is inverse second (s−1):

$\sigma = \frac{1}{\rho}.$
## Definitions
Electrical resistivity ρ (Greek: rho) is defined by

$\rho = \frac{E}{J}$

where

ρ is the static resistivity (measured in ohm-metres, Ω·m);
E is the magnitude of the electric field (measured in volts per metre, V/m);
J is the magnitude of the current density (measured in amperes per square metre, A/m²).
Most resistors and conductors have a uniform cross section with a uniform flow of electric current and are made of one material. In this case, the above definition of ρ leads to:

$\rho = R \frac{A}{\ell},$

where

R is the electrical resistance of a uniform specimen of the material (measured in ohms, Ω);
$\ell$ is the length of the piece of material (measured in metres, m);
A is the cross-sectional area of the specimen (measured in square metres, m²).
## Explanation
The reason resistivity has the dimension units of ohm-metres can be seen by transposing the definition to make resistance the subject:
$R = \rho \frac{\ell}{A}$
The resistance of a given sample will increase with the length, but decrease with greater cross-sectional area. Resistance is measured in ohms. Length over area has units of 1/distance. To end up with ohms, resistivity must be in the units of "ohms × distance" (SI ohm-metre, US ohm-inch).
In a hydraulic analogy, increasing the diameter of a pipe reduces its resistance to flow, and increasing the length increases resistance to flow (and pressure drop for a given flow).
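As a quick numerical illustration of $R = \rho \frac{\ell}{A}$ (the copper value and the wire dimensions below are assumed for the example):

rho = 1.72e-8    # resistivity of copper at 20 C, ohm*m (typical value)
length = 10.0    # wire length, m
area = 1.0e-6    # cross-section, m^2 (i.e. 1 mm^2)

print(rho * length / area)   # ~0.172 ohm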
## Resistivity of various materials
• A conductor such as a metal has high conductivity and a low resistivity.
• An insulator like glass has low conductivity and a high resistivity.
• The conductivity of a semiconductor is generally intermediate, but varies widely under different conditions, such as exposure of the material to electric fields or specific frequencies of light, and, most important, with temperature and composition of the semiconductor material.
The degree of doping in semiconductors makes a large difference in conductivity. To a point, more doping leads to higher conductivity. The conductivity of a solution of water is highly dependent on its concentration of dissolved salts, and other chemical species that ionize in the solution. Electrical conductivity of water samples is used as an indicator of how salt-free, ion-free, or impurity-free the sample is; the purer the water, the lower the conductivity (the higher the resistivity). Conductivity measurements in water are often reported as specific conductance, the conductivity of the water at 25 °C. An EC meter is normally used to measure conductivity in a solution.
This table shows the resistivity, conductivity and temperature coefficient of various materials at 20 °C (68 °F)
The effective temperature coefficient varies with temperature and purity level of the material. The 20 °C value is only an approximation when used at other temperatures. For example, the coefficient becomes lower at higher temperatures for copper, and the value 0.00427 is commonly specified at 0 °C. For further reading: http://library.bldrdoc.gov/docs/nbshb100.pdf.
The extremely low resistivity (high conductivity) of silver is characteristic of metals. George Gamow tidily summed up the nature of the metals' dealings with electrons in his science-popularizing book, One, Two, Three...Infinity (1947): "The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free."
Thermal conductivity
In physics, thermal conductivity, k, is the property of a material describing its ability to conduct heat. It appears primarily in Fourier's Law for heat conduction. Thermal conductivity is measured in watts per kelvin-metre (W·K−1·m−1, i.e. W/(K·m)). Multiplied by a temperature difference (in kelvins, K) and an area (in square metres, m2), and divided by a thickness (in metres, m), the thermal conductivity predicts the rate of energy loss (in watts, W) through a piece of material. In the window building industry, "thermal conductivity" is expressed as the U-factor (see http://www.energystar.gov/index.cfm?c=windows_doors.pr_ind_tested), which measures the rate of heat transfer and tells you how well the window insulates. U-factor values generally range from 0.15 to 1.25 and are measured in Btu per hour - square foot - degree Fahrenheit (i.e. Btu/(h·ft²·°F)). The lower the U-factor, the better the window insulates.
The reciprocal of thermal conductivity is thermal resistivity.
## Measurement
There are a number of ways to measure thermal conductivity. Each of these is suitable for a limited range of materials, depending on the thermal properties and the medium temperature. There is a distinction between steady-state and transient techniques.
In general, steady-state techniques are useful when the temperature of the material does not change with time. This makes the signal analysis straightforward (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed. The Divided Bar (various types) is the most common device used for consolidated rock samples.
The transient techniques perform a measurement during the process of heating up. Their advantage is quicker measurements. Transient methods are usually carried out by needle probes.
### Standards
• IEEE Standard 442-1981, "IEEE guide for soil thermal resistivity measurements", ISBN 0-7381-0794-8. See also soil thermal properties. [http://ieeexplore.ieee.org/servlet/opac?punumber=2543]
• IEEE Standard 98-2002, "Standard for the Preparation of Test Procedures for the Thermal Evaluation of Solid Electrical Insulating Materials", ISBN 0-7381-3277-2 [http://ieeexplore.ieee.org/servlet/opac?punumber=7893]
• ASTM Standard D5334-08, "Standard Test Method for Determination of Thermal Conductivity of Soil and Soft Rock by Thermal Needle Probe Procedure"
• ASTM Standard D5470-06, "Standard Test Method for Thermal Transmission Properties of Thermally Conductive Electrical Insulation Materials" [http://www.astm.org/cgi-bin/SoftCart.exe/DATABASE.CART/REDLINE_PAGES/D5470.htm?E+mystore]
• ASTM Standard E1225-04, "Standard Test Method for Thermal Conductivity of Solids by Means of the Guarded-Comparative-Longitudinal Heat Flow Technique" [http://www.astm.org/cgi-bin/SoftCart.exe/DATABASE.CART/REDLINE_PAGES/E1225.htm?L+mystore+wnox2486+1189558298]
• ASTM Standard D5930-01, "Standard Test Method for Thermal Conductivity of Plastics by Means of a Transient Line-Source Technique" [http://www.astm.org/cgi-bin/SoftCart.exe/STORE/filtrexx40.cgi?U+mystore+wnox2486+-L+THERMAL:CONDUCTIVITY+/usr6/htdocs/astm.org/DATABASE.CART/REDLINE_PAGES/D5930.htm]
• ASTM Standard D2717-95, "Standard Test Method for Thermal Conductivity of Liquids" [http://www.astm.org/cgi-bin/SoftCart.exe/DATABASE.CART/REDLINE_PAGES/D2717.htm?L+mystore+wnox2486+1189564966]
• ISO 22007-2:2008 "Plastics -- Determination of thermal conductivity and thermal diffusivity -- Part 2: Transient plane heat source (hot disc) method" [http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=40683]
• Note: What is called the k-value of construction materials (e.g. window glass) in the U.S., is called λ-value in Europe. What is called U-value (= the inverse of R-value) in the U.S., used to be called k-value in Europe, but is now also called U-value in Europe.
## Definitions
The reciprocal of thermal conductivity is thermal resistivity, usually measured in kelvin-metres per watt (K·m·W−1). When dealing with a known amount of material, its thermal conductance and the reciprocal property, thermal resistance, can be described. Unfortunately, there are differing definitions for these terms.
### Conductance
For general scientific use, thermal conductance is the quantity of heat that passes in unit time through a plate of particular area and thickness when its opposite faces differ in temperature by one kelvin. For a plate of thermal conductivity k, area A and thickness L this is kA/L, measured in W·K−1 (equivalent to: W/°C). Thermal conductivity and conductance are analogous to electrical conductivity (A·m−1·V−1) and electrical conductance (A·V−1).
There is also a measure known as heat transfer coefficient: the quantity of heat that passes in unit time through unit area of a plate of particular thickness when its opposite faces differ in temperature by one kelvin. The reciprocal is thermal insulance. In summary:
• thermal conductance = kA/L, measured in W·K−1
• thermal resistance = L/(kA), measured in K·W−1 (equivalent to: °C/W)
• heat transfer coefficient = k/L, measured in W·K−1·m−2
• thermal insulance = L/k, measured in K·m²·W−1.
The heat transfer coefficient is also known as thermal admittance
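For instance, plugging assumed values for a single glass pane into these four definitions (the k, A and L below are illustrative, not from the text):

k = 1.0      # thermal conductivity of glass, W/(K*m), approximate
A = 1.0      # plate area, m^2
L = 0.005    # plate thickness, m

print(k * A / L)    # thermal conductance, W/K             -> 200.0
print(L / (k * A))  # thermal resistance, K/W              -> 0.005
print(k / L)        # heat transfer coefficient, W/(K*m^2) -> 200.0
print(L / k)        # thermal insulance, K*m^2/W           -> 0.005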
### Resistance
When thermal resistances occur in series, they are additive. So when heat flows through two components each with a resistance of 1 °C/W, the total resistance is 2 °C/W.
A common engineering design problem involves the selection of an appropriate sized heat sink for a given heat source. Working in units of thermal resistance greatly simplifies the design calculation. The following formula can be used to estimate the performance:
R_{hs} = \frac {\Delta T}{P_{th}} - R_s
where:
• Rhs is the maximum thermal resistance of the heat sink to ambient, in °C/W
• \Delta T is the temperature difference (temperature drop), in °C
• Pth is the thermal power (heat flow), in watts
• Rs is the thermal resistance of the heat source, in °C/W
For example, if a component produces 100 W of heat, and has a thermal resistance of 0.5 °C/W, what is the maximum thermal resistance of the heat sink?
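With an assumed allowable temperature rise, the calculation becomes (the 60 °C figure is hypothetical, chosen only to make the arithmetic concrete):

dT = 60.0     # allowable temperature difference, C (assumed)
P_th = 100.0  # thermal power, W (from the example above)
R_s = 0.5     # thermal resistance of the heat source, C/W

R_hs = dT / P_th - R_s
print(R_hs)   # ~0.1 C/W: the heat sink must be at least this good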
Question:I can't find out how to make an electrical conductivity apparatus. Can someone please give a detailed description or link to a good site.
Answers:Check out this diagram. http://www.uq.edu.au/_School_Science_Lessons/3.59ch.GIF Use graphite pencils for the electrodes. Sharpen each end to provide a place to attach the alligator clips. Use a flashlight lamp and the correct number of cells for the lamp.
Question:For a project, I need to find the thermal AND electrical conductivity level for several elements. How can I tell this? And/Or where can I look to find it? I have had trouble finding it online and its not in my textbook. Also, how to I find the level of reactivity? Lastly, the Ion Charge?? Please Help!!
Answers: Just search online for "resistivity table", that is more useful. Conductivity is just 1/resistivity. And search for "thermal conductivity table" to get the thermal numbers. Here are the ones I have:

resistivity Ag 15.9e-9 Ω·m
resistivity Cu 17.2e-9 Ω·m (or 17.2e-6 Ω·mm)
resistivity Au 22.14e-9 Ω·m
resistivity Al 28.2e-9 Ω·m
resistivity brass 35e-9 Ω·m
resistivity W 56e-9 Ω·m
resistivity Zn 68e-9 Ω·m
resistivity Fe 100e-9 Ω·m
resistivity Pt 105e-9 Ω·m
resistivity Nichrome 150e-8 Ω·m

Thermal conductivity, all in W/mK:

Silver 429
Copper 401
Gold 310
Aluminum 250
Beryllium 218
Magnesium 156
Zinc (Zn) 116
Brass 109
Nickel 91
Iron 80
Platinum 70
Tin (Sn) 67
Steel 46
Lead 35
Antimony 18.5
Stainless Steel 16
Mercury 8
Question:Describe one method you could choose to determine the electrical conductivity of a length of 25mm diameter aluminium bar.
Answers: Put my Fluke 87 on MHOs and measure it, then divide by 1000.
Question:I understand that quartz is used in the manufacture of semiconductors. How is this mineral able to conduct electricity? I think it is both covalently and ionically bonded. I am new to this stuff and cannot see if it is solid how electricity is able to be conducted
Answers: I answered the other one; let me delve into it more... First, quartz can actually conduct, if you've ever owned a quartz watch or heard of a cesium clock... If you pass DC voltage into a crystal, the crystal... well, it gets pissed and vibrates... the output current from the crystal has an AC signature at a known frequency. You can then amplify it and pass it through frequency dividers to get 1 Hz, or 5 MHz, etc. In wafers, they don't work the same way; the Si doesn't conduct... it's the "board" in which conductive materials are placed to form the logic circuit, but it's all on the surface. Hope this clears it up: chemicals like germanium and carbon do all the work; then, when the wafer has all its channels done, it becomes a "back end wafer" and metals are deposited on it to finish up at the very top.
|
I_love_Tanya_Romanova's blog
By I_love_Tanya_Romanova, 5 years ago, translation
Hello everyone!
I want to invite you to participate in the June Clash at HackerEarth. The contest is scheduled for June 20. Contest duration is 24 hours, so there should be a comfortable time window for every timezone :)
There will be five tasks in the problemset. Four of them are standard algorithmic problems with partial solutions allowed: you get points for every test that your solution passes. The last task is an approximate problem: your score for this task depends on how good your solution is compared to the current best solution.
shef_2318 is the author of this problemset. He already prepared January Lunchtime 2015 and a few interesting problems for CodeChef Long contests; he was also the author of April Clash and co-author of May Clash.
I worked on this contest as a tester. As usual, I would like to say that I find this problemset interesting. I hope that some problems will not be too hard for beginners (don't give up, and show your best with partial scoring), while other tasks are challenging enough to attract more experienced contestants. I was glad to work with shef_2318 again :) I also want to thank chandan111 for technical help and for doing his best to fix all issues and improve the HackerEarth platform.
There will be quite a lot of different contests over the weekend, and besides interesting problems, I have another reason for you to participate in this one:
1. $100 Amazon gift card + HackerEarth T-shirt
2. $80 Amazon gift card + HackerEarth T-shirt
3. \$50 Amazon gift card + HackerEarth T-shirt
4. HackerEarth T-shirt
5. HackerEarth T-shirt
I hope everything will run smoothly this time. Good luck to everybody — I hope to see you at the scoreboard :)
Upd. The contest has ended :) Thanks to everyone for participating :) Congratulations to the winners:
3) anta
4) SoMin Mun
5) Kmcode
» 5 years ago: Could you schedule it on Sunday? As you might know, there is the IPSC on Saturday.
» 5 years ago: What are some ideas that can be applied to the approximation problem? I tried a lot, but nothing gets high points.
» » 5 years ago: You may check the codes of other contestants now. Also, I hope that some of them will later write brief explanations for us :) I believe that most of the relatively easy (and at the same time not-so-bad) ideas are related to local optimizations. I haven't read the recent codes by top contestants yet :) But at first glance it seems so. While testing the problem, I tried the following very simple approach: generate some random approximation and run a hill climb by flipping a single cell (picked randomly). Even with unoptimized code and without any additional heuristics it scores below 15k points, so it would probably be enough to get #7 in the contest without any additional effort.
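A minimal sketch of the local search described above, assuming a 0/1 grid and some problem-specific scoring function `score` (the grid representation and the scoring function are assumptions, not the actual contest problem):

```python
import random

def hill_climb(grid, score, iterations=100_000):
    """Repeatedly flip a random cell; keep the flip only if the score does not drop."""
    best = score(grid)
    rows, cols = len(grid), len(grid[0])
    for _ in range(iterations):
        r, c = random.randrange(rows), random.randrange(cols)
        grid[r][c] ^= 1              # flip a single 0/1 cell
        candidate = score(grid)
        if candidate >= best:        # accept improving (or equal) moves
            best = candidate
        else:
            grid[r][c] ^= 1          # revert the flip
    return grid, best

# Usage sketch: start from a random grid; a few random restarts help escape poor starts.
# grid = [[random.randint(0, 1) for _ in range(30)] for _ in range(30)]
# grid, best = hill_climb(grid, score=my_score_function)  # my_score_function is hypothetical
```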
» 5 years ago: Thanks to everyone for participating :) It's nice to see a lot of high-rated coders in the standings. Also thanks for a lot of action at the top of the standings in the last few hours (and even the last few minutes); it was interesting to watch. I hope you managed to understand the statement of Good points :) Short editorials for the standard tasks and codes by the author and tester have been added now. They will be extended with some more details later; I hope that some sort of editorial for the approximate task will also be ready a bit later. Feel free to ask any questions and discuss the problems.
» » 5 years ago: Where is the editorial? I couldn't find it. UPD: No need to answer; I've found it now.
» » 5 years ago: Thank you for such a nice contest. I have a request: please write editorials according to your own solutions; it helps the understanding process. In the editorial of DigIT you used a 3-D dp of [position][remainder][flag], while the solution uses a 4-D dp, and I have no clue what the extra state represents.
» » » 5 years ago: Thanks for pointing it out. That part came from an old draft; I have fixed it now. The missing parameter stores the sum of digits.
» » » » 5 years ago: There is a typo: the state represents [position][remainder][sum][flag], not [position][sum][remainder][flag].
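For readers unfamiliar with this kind of state, here is a hedged sketch of a digit dp with exactly those four dimensions. The counted property (numbers in [0, N] whose digit sum is divisible by a modulus) is an illustrative assumption, not the actual DigIT statement:

```python
from functools import lru_cache

def count_up_to(n: int, mod: int) -> int:
    """Count x in [0, n] whose digit sum is divisible by mod.

    State: (position, remainder, sum, tight flag), the same four dimensions
    as the [position][remainder][sum][flag] dp discussed above. For this toy
    property the explicit sum is redundant (the remainder suffices), but it
    is kept to mirror the 4-D layout.
    """
    digits = list(map(int, str(n)))

    @lru_cache(maxsize=None)
    def dp(pos, rem, total, tight):
        if pos == len(digits):
            return 1 if rem == 0 else 0
        limit = digits[pos] if tight else 9
        return sum(dp(pos + 1, (rem + d) % mod, total + d, tight and d == limit)
                   for d in range(limit + 1))

    return dp(0, 0, 0, True)

print(count_up_to(99, 3))  # 34: exactly the multiples of 3 in [0, 99]
```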
» 5 years ago: Can anyone explain, in the setter's solution, how taking the magic constant to be 5000 gives a good speed-up? If A is 1, B is 10^13 and K is 5000, we have to check all numbers from 5000 to 10^13 in steps of 5000, and there are 2 * 10^9 such numbers. Won't this time out?
» 5 years ago: Thank you so much for a quality contest, guys. The participation was great, and the problems were pretty interesting as well. The moment when I managed to understand the idea behind Good points was amazing. Tricked us all! :D @I_love_Tanya_Romanova: I've forwarded your comments/feedback regarding the comments to the Dev team, by the way. I'm sure it will be taken care of in future contests. :)
|
# How do I use the mathematical symbol ≬ (U+226C, BETWEEN)?
I'm looking for neat symbols in query expressions, like "x≤3" instead of "x<=3" or "x .le. 3".
EDIT: My motivation: I have to deal with a very long list of SQL-like conditions, all squeezed into the scheme (variable, operator, value). I'd like to see this list in its most comprehensible and natural form. Replacing the verbose operators with common mathematical symbols is essential, easy to implement, and mostly trivial: = for "equals", ≤ for "le", ∍ for "contains", ... Reordering or changing the appearance of variable and value would definitely also help, but would be overkill and out of scope. My question here is just about the special symbol ≬, which I've never seen before in my life.
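A minimal sketch of the kind of substitution just described, assuming the conditions arrive as (variable, operator, value) triples; the operator names and the rendering function are illustrative, not part of any real query language:

```python
# Illustrative mapping from verbose operator names to mathematical symbols.
SYMBOLS = {"eq": "=", "le": "≤", "ge": "≥", "lt": "<", "gt": ">", "contains": "∍"}

def pretty(variable, operator, value):
    """Render a (variable, operator, value) condition with a compact symbol."""
    return f"{variable} {SYMBOLS.get(operator, operator)} {value}"

print(pretty("x", "le", 3))  # -> x ≤ 3
```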
In my attempt to simplify the clumsy expression
"x between [2, 4]" (meaning 2≤x≤4)
I've just found the promising Unicode symbol ≬, meaning "between". This unknown symbol appears among the other well-known binary operators, but I don't know whether it is appropriate in my context or how to use it correctly. Is it acceptable, misleading, or even wrong to write:
"x≬[2, 4]" (meaning 2≤x≤4)
I'd be happy to use it, provided it at least resembles a valid mathematical notation.
Are there any better (more common) notations? My query language supports only lists, no intervals or sets. "x∊[2, 4]" currently means (x=2 ∨ x=4). As a compromise, I would change that into "x=[2, 4]" (ouch!) and then use "x∊[2, 4]" for 2≤x≤4. Any other ideas?
• $x\in[2,4]$ should mean $2\le x\le 4$; $x\in\{2,4\}$ should mean $x=2$ or $x=4$ – J. W. Tanner Feb 18 '20 at 21:02
• You seem to be asking about some particular query language, rather than common mathematical usage. If so, you'd be better at a website dealing with that language. – saulspatz Feb 18 '20 at 21:05
• @saulspatz No, I'm just curious about the usage of ≬, because I've never seen that symbol before. The query language is just an illustration, why and how I'd like to use it—provided, it's correct. – Ralph Feb 18 '20 at 21:19
• I have never ever seen that symbol used in mathematics and would not expect anybody to understand it without explanation. – Nate Eldredge Feb 18 '20 at 22:29
• @saulspatz This is a Unicode symbol, and is in the group of "mathematical symbols". Whatever it means, someone in the Unicode consortium must've thought that it is useful for something. (On the other hand, as of recently Unicode has expanded so much, adding symbols such as 🤑, 👽, 👅,👩🍳,🎅... - that maybe the real answer is "who cares" 😉.) – Stinking Bishop Feb 18 '20 at 22:39
|
Moderate
# Group of Given Order: A Group of Order 18 determined by Subgroups
ABSALG-1BSHON
Let $G$ be a group of order $18$ with $9$ subgroups of order $2$ and one cyclic subgroup of order $9$.
$G$ is isomorphic to which of the following?
A. A cyclic group.
B. $S_3\times (\mathbb{Z}/3\mathbb{Z})$
C. $D_9$ (dihedral group with $18$ elements)
D. $A_3\times (\mathbb{Z}/6\mathbb{Z})$
|
# Best solutions for (La)TeX on Kindle
I have read Compiling documents online, LaTeX options for kindle?, and some similar Q&A's, but I have more specific question.
What are possible good solutions for compiling TeX/LaTeX documents from a Kindle (3G)? It should probably be done online. However, the Kindle has some limitations (e.g. a bad keyboard, restrictions on the extensions of loadable files). So which site should be chosen? Or maybe there exists another solution?
Edit (2013-04-08 00:05 CET): The existing answer gives only general ideas. I hope that there are people with some experience in using mobile devices for TeX & Co. exercises.
It's not really clear: you want to enter a TeX file on the Kindle, then somehow compile it (via an online compiler like ShareLaTeX) and then get the .pdf back to the Kindle? I don't think this is possible, but I would be happy to learn I'm wrong. – m0nhawk Apr 22 '13 at 6:12
@m0nhawk I am afraid that compiling on the Kindle is impossible. I hope that there are web pages where I can compile, change the extension, and download the result onto the Kindle. – Przemysław Scherwentke Apr 22 '13 at 6:29
@PrzemysławScherwentke Older Kindle devices are based on Linux; now they are using Android, I think. The Kindle 3G is Linux-based, and it should be possible to get root access and install new programs. So at least theoretically it should be possible, but I don't have a Kindle to test it on. – michal.h21 Apr 22 '13 at 9:08
## 1 Answer
Using this online compiler, the only strict problem I had was the Kindle's refusal to open PDFs online, as you found. It would be fairly simple for someone running a similar site to automate sending an email with the PDF as attachment rather than linking to the attachment direct, and you can set up your Amazon account to deliver PDF attachments to the Kindle. I don't know much about the way Kindles are set up, but if it were possible to bypass that restriction (which, given that Kindles can handle PDFs, and can download other types of files, feels like it should be possible) that would fit the bill.
Having said that, even the almost negligible amount of code I wrote on that site on my Kindle was incredibly laborious; I used my Kindle for internet for several weeks when my laptop broke a few years ago, but the quantity of backslashes and curly brackets needed for LaTeX make Kindles singularly unsuited to the job, in my opinion. Even if you were only editing documents the amount of time it takes to scroll up and down would be maddening.
Yes, indeed. I am trying to obtain here any piece of information on how to deal with the Kindle's restrictions and how to make using the Kindle for the purposes of this site less maddening. – Przemysław Scherwentke Apr 23 '13 at 21:35
|
# DOT calculus evaluation rules
The DOT calculus is an extension of the lambda calculus which nicely supports polymorphic types as values, like any other value. It seems to lay a foundation for sound evaluation rules for dependent object types.
However, I do not understand how the evaluation works. Page 4 of the DOT paper gives the following evaluation rules ($\lambda$ represented using \):
let x = v in e[x y] -> let x = v in e[[z := y]t] if v = \(z :T)t
let x = y in t -> [x := y]t
let x = let y = s in t in u -> let y = s in let x = t in u
e[t] -> e[u] if t -> u
where e ::= [ ] | let x = [ ] in t | let x = v in e
I have three questions.
1. If we follow the rules in this order of priority: if the y assigned to x is not a value (a lambda), it seems that x is immediately replaced by y in t. But this seems to contradict the fact that they suppose it's call-by-need, that is, that we evaluate lets only once.
2. If we say that the rules can be applied in any order, the complexity of evaluation can dramatically increase (e.g. if t = $\lambda f. f x x x$), so is it really out-of-order evaluation?
3. Are the rules of the lambda calculus implicitly added, or can they be recovered from this calculus?
The rules can be applied in any order. The first three rules are mutually exclusive, while the last rule corresponds to evaluating the lets from outside in. Obviously, the last rule trivially overlaps with all the others via the first context, [ ], but this is unimportant. Using the second context let x = [ ] in t does overlap non-trivially with the third rule and corresponds to the decision between un-nesting a let and then evaluating outside in, or evaluating a nested let outside in and un-nesting it later. It's relatively easy to see that this ambiguity makes no difference. The last context let x = v in e looks like it might overlap with the first rule, but there are no reduction rules for x y, so whenever the first rule applies the fourth rule does not (except via the trivial context).
$(\lambda f.f x x x)$ is not a term in this calculus. The term that would correspond to it is $(\lambda f.\mathsf{let}\ f_1 = f x\ \mathsf{in\ let}\ f_2 = f_1 x\ \mathsf{in}\ f_2 x)$. (Or you could consider various ways of nesting the lets; it will just be undone by the second evaluation rule.) This is what it means to be in ANF (administrative normal form or A-normal form), which is mentioned but only very briefly explained: "That is, every intermediate value is abstracted out in a let binding." However, it doesn't need to be explained, because the syntax of terms enforces it.
When the syntax description states "$x$, $y$, $z$ Variable" it means the meta-variables $x$, $y$, and $z$ (and variations like $x'$ or $y_1$) represent variables in the syntax. Similarly, $v$ and variations always represent a value, and $s$, $t$, and $u$ always represent a term. This means that the rule let x = y in t -> [x:=y]t only applies if $y$ is a variable. That rule does not allow, say, let x = y z in t -> [x:=(y z)]t. This is the formalization of the comment by the authors that "[r]eduction uses only variable/variable renamings instead of full substitution." Similarly, ANF is enforced because in the syntax of terms the only applications allowed are variables applied to variables (i.e. $x\, y$). $(x\, y)\, z$ is not a term. If such expression were intended to be allowed, the syntax would have had something like $s\, t$.
So, it should be mostly clear by now, but the answer to your last question is that there are no "implicit" rules. Arbitrary lambda terms are not terms in this syntax, but they can be encoded into terms via translation to ANF which simply involves let binding all subterms so that all applications are variables applied to variables. This translation can easily be formalized: \begin{align} \mathcal{A}(x) & = x \\ \mathcal{A}(\lambda x.M) & = \lambda x.\mathcal{A}(M) \\ \mathcal{A}(M N) & = \mathsf{let}\ x_1 = \mathcal{A}(M)\ \mathsf{in\ let}\ x_2 = \mathcal{A}(N)\ \mathsf{in}\ x_1\, x_2 \\ & \text{(where }x_1\text{ and }x_2\text{ are fresh)} \end{align}
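A hedged sketch of this translation as code; the tuple encoding of terms and the fresh-name scheme are assumptions for illustration, not part of the DOT paper:

```python
import itertools

fresh = (f"x{i}" for i in itertools.count(1))  # supply of fresh variable names

# Terms encoded as tuples: ("var", name) | ("lam", param, body) | ("app", fn, arg)
def anf(term):
    kind = term[0]
    if kind == "var":
        return term
    if kind == "lam":
        _, x, body = term
        return ("lam", x, anf(body))
    # Application: let-bind both subterms so that only variables are ever
    # applied, exactly as in the A(M N) clause above. Let-binding a bare
    # variable is harmless; it is immediately undone by the second rule.
    _, m, n = term
    x1, x2 = next(fresh), next(fresh)
    return ("let", x1, anf(m),
            ("let", x2, anf(n),
             ("app", ("var", x1), ("var", x2))))

# (x y) z is not a term of the calculus; its ANF translation is:
print(anf(("app", ("app", ("var", "x"), ("var", "y")), ("var", "z"))))
```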
|
# PASS THE CALIFORNIA ELECTRICIAN CERTIFICATION EXAM!
### IF YOU NEED YOUR CALIFORNIA ELECTRICAL CERTIFICATION CARD, ORDER THIS COURSE!
GENERAL ELECTRICIAN CERTIFICATION – If you possess over 8000 hrs of COMMERCIAL or INDUSTRIAL electrical experience, you are qualified to take the California General Electrician Certification Exam
RESIDENTIAL ELECTRICIAN CERTIFICATION – If you possess over 4800 hrs of RESIDENTIAL electrical experience, you qualify to take the California Residential Electrician Certification Exam.
### OUR STUDENTS PASS, we guarantee it! Our students enjoy a 98%+ pass rate!
Consultants are available at 866-685-8564 – Mon-Fri 7 am to 10 pm PST to answer any of your questions.
### Course Breakdown – This course covers the CA General Electrician exam & CA Residential Electrician Exam
Study and test at home! Don’t waste time commuting to a classroom when you can study conveniently in your home or office. Our program is a proven, reliable, and affordable way to pass the California Electrician Certification exam.
#### Each student has access to the following online study guides as well as online study questions and timed full-length simulated exams.
Basic Electrical Theory – eBook
##### Includes
• AC Current
• Simple AC Current Calculations
• AC Phase
• Phase Rotation
• Single-Phase Power Systems
• Three-Phase Power Systems
• Three-Phase Y and Delta Configurations
• Measurements of AC Magnitude
• Power in Resistive and Reactive AC Circuits
• Calculating Power Factor
• Practical Power Factor Correction
• Kirchhoff’s law.
### Supplemental Study – eBook
##### Includes
• Ohms Law Tutorial
• Prints & Specifications Tutorial
• Wire Color Codes
• Electronic Symbols
• Electrical Diagrams
• Digital Multimeters
### Navigating the 2017 NEC – eBook
##### Includes
• How to quickly reference the NEC
• Code reference speed drills
• Layout and structure of the codebook
• How to use the Index
• TLC System for answering test questions
• Keyword Identification – How to identify the subject of a question
• Example questions are broken down for your review
• Practice questions with answers fully mapped out
### An introduction to the NFPA 70E – eBook
##### Includes
• Purpose of NFPA 70E
• NFPA 70E Brief Overview
### An introduction to the Cal OSHA Safety Guide – eBook
##### Includes
• How to quickly reference the Cal OSHA Safety Guide
• Layout and structure of the codebook
• Cal Osha Brief Overview
### ONLINE STUDY CENTER – Study Questions and Practice Exams
#### STUDY MODE | FLASH CARD MODE | CHAPTER TESTS | SIMULATED FINAL EXAMS
Access Study Questions and Simulated Exams From Any Device With Internet Access! Online features include 1300+ practice questions, virtual flashcards, chapter exams, and unlimited simulated final exams.
These questions provide the framework for you to practice locating the answers to exam questions through the use of the 2017 NEC Codebook. The process of constantly utilizing the codebook to answer practice questions will build up the speed and code knowledge you need to successfully pass your electrical exam.
Four Modes of study!
• STUDY MODE: This mode covers all California electrical exam questions and answers within our course. (Organized as Question – Answer – Code Location.)
##### Includes
• GENERAL TRADE KNOWLEDGE QUESTIONS (General electrical questions not in the code)
• CALCULATION QUESTIONS (Load Calculations, Conduit Fill, etc.)
• 2017 NEC CODE QUESTIONS
• FLASHCARD MODE: Online Virtual Flashcards provide randomized access to our entire question set, an ideal setting to practice exam questions. (Organized as Question – Answer – Code Location)
• CHAPTER TESTS: Provide real-time feedback on each section of our study materials, enabling you to concentrate on sections that need improvement.
• SIMULATED FINAL EXAMS: Simulates the taking of your actual PSI Electrician exam.
##### Simulated Finals Are:
• FULL LENGTH (Exact same number of questions)
• TIMED (Exact same amount of time allowed)
• RANDOMIZED (Questions are randomized; all exams are unique.)
• PROPERLY BALANCED (Balanced according to the Candidate Information Bulletin guaranteeing a realistic exam simulation.)
Take as Many Simulated Final Exams as you Want! (These exams help you gain the knowledge and confidence needed to pass your electrical exam effortlessly.)
ONE YEAR ONLINE ACCESS: Full access membership to our online study center. Experience the convenience of learning at home, or in the office, saving valuable travel time to an in-person course.
Our program is a reliable, affordable way to prepare for the California Electrician Exam.
THIS PROGRAM REQUIRES YOU TO HAVE A COPY OF THE 2017 (NEC) NATIONAL ELECTRIC CODE. IF YOU DO NOT OWN A COPY OF THE CODEBOOK, YOU SHOULD ACQUIRE ONE IMMEDIATELY
Common exam questions answered here! Please check with your State Board for current information. Licensing requirements change, and our site may not have been updated to reflect those changes.
#### QUESTION: Who is required to have a California certified electrician license?
ANSWER: Any electrician working for a C10 Electrical contractor who uses the tools and makes connections of 100 volt-amps or higher must be certified.
Rules for who is required to be certified are found in Labor Code section 108.2 (b), Labor Code section 108 (c)
#### QUESTION: I am a C-10 Contractor. Do I have to be certified?
No, you do not need to be certified if you are a sole owner and hold the C10 license under your name, and you are performing the work directly under that license.
Yes, you must be certified if you work for another C-10 contractor and are working and paid as an employee.
For example, if you are paid to your social security number vs. being paid to your contractor’s license number, you must also be certified.
#### QUESTION: What are the qualifications required for taking the exam?
Full-length formal apprenticeship training *
On the job experience **
See CCR Title 8 Regulations § 291.1 Eligibility for Certification. for complete details.
*APPRENTICESHIP TRAINING: You must have completed a formal apprenticeship program approved by the California Apprenticeship Council, the federal Bureau of Apprenticeship Training, or an approved State apprenticeship council authorized by the federal Bureau of Apprenticeship Training, in the classification for which certification is sought.
** ON THE JOB EXPERIENCE:
• General Electrician: “The master license” must have over 8000 Hrs (4 yrs) of work experience with a C10 contractor maintaining, installing, or constructing residential and commercial or industrial electrical systems covered by the NEC.
“Anyone holding general electrician certification can perform work as a General Electrician; Residential Electrician; Voice Data Video Technician; or Non-Residential Lighting Technician.”
• Residential Electrician: Must have 4800 Hrs (2+ yrs) of work experience with a C10 contractor maintaining, installing, or constructing residential electrical systems covered by the NEC.
• Fire Life Safety Technician: Must have 4000 Hrs (2 yrs) of work experience with a C10 contractor maintaining, installing, or constructing electrical systems covered by Article 760 of the NEC. Additionally, you must have NICET certification in fire alarm systems at Level II or above.
#### QUESTION: Who administers the exam, and how do I schedule it?
ANSWER: The exam is administered by PSI, a third-party testing center. You can contact PSI to schedule your exam at (800) 733-9267, 4:30 am to 7 pm Mon-Fri, or 6 am to 2:30 pm Saturday (Pacific Time).
#### QUESTION: Why should someone study this course if the certification exam is an open-book test?
ANSWER: The electrical certification test is a very challenging exam and usually requires a month or more of preparation to pass. Every contractor we have interviewed holding both a C-10 license and the Electrical Certification card said passing their C10 Contractors exam was easy compared to passing their California Electrical Certification exam.
The 2017 NEC is a large and complex publication containing 874 pages of code. It is challenging to locate each correct article and answer in the time allowed during the exam without thorough preparation.
Our course is designed to familiarize you with the 2017 NEC. You will know precisely how to find the correct answers in the code.
We prepare you for exam success by providing extensive knowledge of the code’s structure and thoroughly covering each subject appearing on your CA Electrical Certification Exam.
#### QUESTION: Why is a copy of the 2017 NEC required by this course?
ANSWER: The California Electrical Certification Exam is based on the 2017 NEC. This course is an excellent study companion to the 2017 NEC. This course makes frequent references to code sections and guides you through the exam topics.
The Ca Electrical Certification Exam is an open-book code exam. To pass, you must continuously reference the 2017 National Electrical Code.
The following codebooks are provided to you by the testing center.
• The NFPA 70 – National Electrical Code, 2017 Edition.
• The NFPA 70E – Standard for Electrical Safety in the Workplace
• The CAL/OSHA – Pocket Guide for the Construction Industry, updated 2015
#### QUESTION: How do I apply to take the exam?
ANSWER: You are required to fill out and submit the Application for Electrician Examination & Certification. Please print and fill it out as accurately as possible. Your signature certifies that all listed hours of experience are accurate.
You are also required to provide a copy of the SSA Form 7050 Employment History Report with your application.
“NOTE: Do not order this report online. It can take up to 6 months for you to receive it. Instead, walk into your local Social Security office and pick it up in-person over the counter.”
#### QUESTION: Where do I go to take my exam?
ANSWER: PSI maintains 22 testing locations across California. Also, out of state testing locations are found in Oregon and Nevada.
To see a complete listing of all locations, reference pages 2-4 of the Electrical Certification Candidate Information Bulletin (CIB)
#### GENERAL INFORMATION ABOUT THE EXAM:
The California General Electrician exam is an open-book code exam, but you may not bring your own reference materials: the required codebooks are supplied at the testing center, and you will not be permitted entry with outside materials.
The Exam – the California General Electrician exam is a 100-question (for General Electrician), four-choice multiple-choice exam. Some questions may require mathematical computation.
All exam questions are written to provide only one BEST answer and NOT written as trick questions.
To cancel or reschedule your exam, contact PSI at least two days before your exam date.
### California Certification Exam Applications
If you are not certified and are taking the Ca Electrician Journeyman exam, you must submit both the Application for Electrician Examination & Certification and the SSA Form 7050 Employment History Report (see above).
If you previously passed your Electrician Certification Exam but failed to complete your continuing education before your renewal deadline, use the RENEWAL APPLICATION FOR ELECTRICIAN CERTIFICATION form DLSE-ECF6 (10/2015)
### Free California Electrical Certification Test
Pass our 25-question Free California Electrician Exam, and learn where you stand! These free journeyman exam practice questions are similar to those on the actual California electrical certification exam.
Use your codebook to answer the following CA Electrician Certification Exam questions.
• PASS – If you correctly answer twenty or more of these free Ca electrician exam practice questions, great job!
• FAIL – If you answer fewer than twenty correctly, order this course and learn to pass your CA Electrician Exam.
Use the 2017 (NEC) National Electrical Code) to solve these CA electrical certification exam practice questions.
1. What is the demand factor applied in calculating feeder or service conductor load for a commercial kitchen having four electrical ovens?
2. What is the minimum size THWN branch circuit conductor required for a 20 HP, 480-volt, 3-phase induction type squirrel cage motor with nameplate FLI of 26.5 amps?
3. For a one-family dwelling with a 200 ampere, 120/240-volt, single-phase main service panel, supplied with size 2/0 AWG THW copper ungrounded service-entrance conductors in rigid metal conduit (RMC), what is the minimum allowed size of bonding jumper for the service-entrance conduit?
4. With portable generators rated 15 kW or less manufactured before __________, listed cord sets or adapters incorporating listed GFCI protection are allowed to be used to meet the GFCI requirement.
5. What is the minimum size THW copper branch-circuit conductor required by the NEC for a 3-phase, continuous-duty, AC motor drawing 70 amperes per phase, when all terminations are rated for 75°C?
6. Lighting and trolley busway must be installed __________ ft. or more above the floor or working platform, unless a cover is provided that is identified for the purpose.
7. Electrical nonmetallic tubing (ENT) shall be securely fastened in place within __________ of each cabinet, device box, fitting, junction box, or outlet box where it terminates.
8. What is the minimum size for a copper grounding electrode conductor attached to the concrete-encased steel reinforcing bars used as a grounding electrode, when the ungrounded service-entrance conductors for a residence are size 3/0 AWG copper conductors?
9. The material selected for a grounding electrode conductor should be protected from corrosion or resist any __________ condition extant at the installation.
10. If the sum of PV system voltages of two monopole subarrays in a bipolar PV system exceeds the rating of the conductors and connected equipment, they must be __________.
11. A location used to house four or more persons incapable of self-preservation due to age; mental limitations, mental illness or chemical dependency; or physical limitation due to accident or illness on a(n) __________ basis is a limited care facility.
12. Grounding electrode conductor or bonding jumper connections to a grounding electrode must be accessible. A buried or encased connection in a __________, buried, or driven grounding electrode is not required to be accessible.
13. What is the minimum rating for a service disconnecting means when supplied with a 120/240-volt, 3-wire, single-phase service for a one-family dwelling?
14. During lightning events, limiting the length of the primary protector grounding conductors for communications circuits assists in reducing voltage between the building's communications systems and __________.
15. General-use dimmer switches shall be used only to control permanently installed __________ luminaires (lighting fixtures).
16. On a metal wireway cross-section, what is the maximum percent that may be occupied by conductors, splices, and taps at any point?
17. What is the minimum volume, in cubic inches, required of a two (2) gang device box that is to contain two (2) size 14/2 AWG with ground NM cables and two (2) size 12/2 AWG with ground NM cables connected to a duplex receptacle and a single-pole switch, and which also contains four (4) cable clamps?
18. What is the minimum allowable trade size intermediate metal conduit (IMC) required to contain a total of ten (10) copper THW conductors in a 20-foot run, five (5) size 1 AWG and five (5) size 3 AWG?
19. A test of the complete legally required standby system when installed must be conducted and witnessed by the __________.
20. The minimum required size 75°C rated copper conductors for a demand load of 200 amperes when installed in an area with an expected ambient temperature of 120°F is __________.
21. Which insulation gives conductors a greater ampacity when used in a dry location rather than a wet location?
22. If a pole is over 8 ft. in height, a __________ terminal must be accessible from a handhole.
23. Conductors, based on the __________°C temperature rating and ampacity as given in Table 310.60(C)(67) through Table 310.60(C)(86), may be terminated unless otherwise identified.
24. For equipment rated 1200 amperes or more and over 6 feet wide that contains overcurrent devices, there shall be one entrance to and egress from the required working space not less than __________ inches wide and __________ feet high at each end of the working space.
25. For required disconnecting means for a fluorescent luminaire, which of the following statements is true?
### 2017 NEC SOFTCOVER Required Codebook
If you don’t own a copy of the 2017 NEC (National Electrical Code), that’s ok. We have you covered. Add the codebook, and you are good to go.
THIS PROGRAM REQUIRES YOU TO HAVE A COPY OF THE 2017 (NEC). IF YOU DO NOT OWN A COPY OF THIS CODEBOOK, ACQUIRE ONE IMMEDIATELY.
# 2017 NEC SOFTCOVER
$128.65
SKU: 2017 NEC CB
The 2017 edition of this trusted Code presents the latest comprehensive regulations for electrical wiring, overcurrent protection, grounding, and equipment installation. Major additions reflect the continuing growth in renewable power technology. Other NEC revisions protect the public and workers from deadly hazards.
Work with the latest requirements governing public and private buildings, homes, structures, outdoor yards and lots; utility equipment; installations connecting to the power grid; and consumer-owned power generation systems and equipment.
The 2017 NEC is better aligned with the safe work practices in NFPA 70E: Standard for Electrical Safety in the Workplace. (Softbound, 888 pp., 2017)
ISBN-10: 1455912778
ISBN-13: 978-1455912773
|
# 3. The firm-level link between productivity dispersion and wage inequality: A symptom of low job mobility?
In many OECD countries, there are large and increasing productivity differences between firms, even within narrowly defined industries (Andrews, Criscuolo and Gal, 2016[1]; Syverson, 2011[2]). At the same time, and as shown in Chapter 2, in these countries, differences in average wages between firms have also increased, explaining more than half of the overall increases in wage inequality. To some extent, such increases in between-firm wage differences reflect the sorting of workers with higher education and more experience into firms paying higher wages. But differences in wages between firms are large even for workers with similar characteristics, suggesting the existence of firm wage premia. Chapter 2 already suggested that increased dispersion in firm wage premia accounts for around two-thirds of increased between-firm wage inequality. This raises the question of the structural and policy determinants of the link between productivity and firm-level wage premia, with possibly large implications for wage inequality and the allocation of workers across firms.
A link between productivity and firm wage premia arises because workers are not perfectly mobile between firms. With limited job mobility, high-productivity firms need to pay high wages to attract workers while low-productivity firms may afford to pay low wages to workers who have limited outside job options. Job mobility, in the sense of voluntary job-to-job transitions rather than overall job churn, may be limited because there are costs for workers to search for jobs and for firms to hire workers due to labour market frictions (e.g. imperfect information on job opportunities or costs related to changing jobs), or because workers have preferences over non-wage characteristics of jobs, such as geographical location or working time flexibility (Manning, 2020[3]). At any given level of productivity dispersion, promoting job mobility would not only reduce wage premia dispersion between firms but also allow high-productivity firms to expand employment, thereby promoting the efficient allocation of labour and raising aggregate productivity.2
This chapter analyses firm-level pass-through of productivity to wage premia for 13 OECD countries over the period 1995-2017 to better understand the challenges for labour and product market policies that aim to raise aggregate productivity growth while pursuing equity goals. First, the chapter develops a conceptual framework to illustrate the channels shaping the link between productivity and wages at the firm level. Second, it analyses empirically the relevance of different channels using linked employer-employee data complemented with firm-level data. The empirical results suggest that the link between productivity and wages at the firm level is to an important extent shaped by the structure of labour and product markets, as well as wage-setting institutions:
• Policies that promote voluntary job mobility reduce wage dispersion between firms at any given level of productivity dispersion. Low rates of job-to-job mobility (a measure of voluntary worker transitions between jobs) and high employer concentration raise the pass-through of firm-level productivity to wages by giving firms some degree of monopsony power on wage-setting. Raising job-to-job mobility from the 20th percentile of countries covered by the analysis (corresponding roughly to Greece) to the 80th percentile (corresponding roughly to Sweden) would reduce overall wage inequality by about 15%. To put this reduction in perspective, the median increase in wage inequality across countries over the period 1995-2015 was around 10% (Chapter 2).3
• Policies that promote product market competition amplify the effect of productivity dispersion on wage dispersion between firms. With strong product market competition, a given difference in productivity between firms implies a larger difference in output and employment between them. At any given level of job mobility, high-productivity firms need to pay high wages relative to low-productivity firms to attain their desired level of employment. However, the upward effect of product market competition on the pass-through of productivity to wage premia may partially or fully be offset if it raises opportunities for job mobility, including through the market entry of new firms.
• More centralised collective bargaining (e.g. sector-level bargaining) and higher minimum wages reduce productivity pass-through and wage premia dispersion between firms, but risk reducing employment if wage floors are set too high. With limited job mobility, low wages in low-productivity firms may partly reflect monopsonistic wage-setting by employers so that raising wage floors through more centralised collective bargaining or higher minimum wages may not necessarily reduce employment. However, setting wage floors in excess of workers’ productivity risks reducing employment. This risk could be reduced by combining centralised collective bargaining with sufficient scope for further negotiation at the firm level, and focusing minimum wage increases on areas and groups for which initial levels of wages are low.
The results in this chapter have a number of implications for public policies aimed at promoting productivity growth while limiting wage inequality, especially in the wake of the COVID-19 crisis that may require significant reallocation of workers from distressed firms to those with better growth prospects (Barrero, Bloom and Davis, 2020[4]). The main implication is that policies promoting job mobility, notably by eliminating unnecessary labour market frictions, can complement policies that aim directly at closing productivity gaps between firms, including via the enhancement of skills and innovation capabilities of lagging firms (Nicoletti, von Rueden and Andrews, 2020[5]; Gal et al., 2019[6]). Promoting job mobility would reduce wage dispersion between firms at any given level of productivity dispersion while also raising the efficiency of labour allocation, and thereby productivity, average wages and employment.
The results further imply that particular care should be taken in reforming wage-setting institutions in countries where job mobility is low, such as a number of Southern European countries. In these countries, a closer alignment of productivity and wages through more decentralised collective bargaining would likely promote employment but may also raise wage dispersion between firms. The possible adverse effects on wage dispersion can be mitigated by combining sector-level bargaining with bargaining at the firm-level through so-called organised decentralisation rather than simply replacing sector-level by firm-level bargaining (OECD, 2019[7]). For example, sector-level agreements could include opt-out clauses or leave more scope for further negotiation at the firm-level. Another way of limiting possible adverse effects of decentralisation on wage dispersion would be to complement decentralisation with increases in, or the introduction of, statutory minimum wages where they are currently low or non-existent.
The remainder of the chapter is organised as follows. Section 3.2 provides a number of stylised facts on the dispersion of firm wage premia across countries, industries and regions. Section 3.3 proposes a conceptual framework to analyse the link between productivity and wages across firms and describes the empirical approach. Section 3.4 presents the results on firm-level productivity-wage pass-through, as well as the structural and policy factors shaping it. Section 3.5 concludes by drawing out the policy implications emerging from the empirical analysis.
In order to situate the analysis in this chapter in the overall context of this Volume, it is useful to resort to a simple decomposition (Figure 3.1). Overall wage inequality can be decomposed into a between-firm and within-firm element. Within-firm wage inequality is largely determined by differences in worker characteristics such as gender, skill and experience. The between-firm element can be decomposed further into differences in workforce composition, and differences in firm wage premia that are independent of workforce composition. Firm wage premia can be obtained by estimating average firm wages while netting out the effect of average workforce characteristics, such as gender, skill and experience (see Chapter 2 for details). This chapter focuses on the link between productivity and firm wage premia, as well as the policies and structural factors shaping it, including competition in labour and product markets, as well as wage setting institutions.
Firm wage premia, i.e. the part of wages that is determined by firms rather than workers’ individual characteristics, are estimated using linked employer-employee data as in Chapter 2 by purging firms’ average wages from the individual characteristics of their workers, i.e. typically occupation, education, age, gender and working-time status.4 Using these estimated wage premia, Chapter 2 shows that in most countries, dispersion in firm wage premia accounts for around one-third of overall wage inequality.5
To analyse the role of productivity dispersion in wage dispersion between firms, this chapter focuses on wage premia differentials within industries. Wage premia differentials between industries are small relative to differentials between firms within the same industry.6 On average across countries, around 75% of dispersion in firm wage premia is explained by wage differences between firms within the same industry (Figure 3.2).7 The contribution of between-industry wage premia dispersion is likely to increase relative to the within-industry component when using more detailed industry disaggregations. For example, evidence for the United States suggests that at a higher level of industry disaggregation (4-digit instead of 2-digit) the contribution of the between-industry component may account for a significantly higher share of overall wage premia dispersion (Haltiwanger and Spletzer, 2020[8]).
Wage premia dispersion is typically larger in countries with larger productivity dispersion, suggesting that wage premia dispersion may at least partly be related to productivity dispersion (Figure 3.3). In labour markets with frictions that limit job mobility, firms partly pass on productivity differentials to wages of workers with similar characteristics. Higher-productivity firms need to offer higher wages to attract workers from lower-productivity firms which can, in turn, offer lower wages without losing all workers. In other words, higher productivity is partly reflected in higher wages and partly in higher employment.
A positive link between firm-level productivity and wage premia arises as the consequence of labour market frictions, but may also depend on competition in product markets as well as institutional features of the wage-setting process (Manning, 2020[3]).
In perfectly competitive labour markets where workers move from a job in one firm to a job in another one as soon as there are differences in wage premia between them (i.e. there are no barriers to job mobility) productivity differences translate into differences in employment without generating wage differences. Firms adjust employment until the marginal products of labour are equalised across them and wages equal the marginal products of labour. All firms pay identical wages, i.e. they are “wage-takers”, but high-productivity firms employ more workers than low-productivity ones. By contrast, in labour markets where job mobility is limited (i.e. labour supply to the firm is upward-sloping) productivity differences translate into differences in both employment and wages. High-productivity firms demand more labour than low-productivity ones but barriers to the mobility of workers prevent marginal products of labour from equalising across them. Irrespective of whether firms set wages equal to their respective marginal products of labour, or whether they exploit the wage-setting power stemming from the upward-sloping labour supply curve and set wages below marginal products, wages are higher in high-productivity firms.
Limited job mobility may reflect information frictions, pecuniary or non-pecuniary costs to job switching, or individual preferences for non-wage job characteristics (such as working conditions or commuting time). Models of labour market monopsony typically exploit one or a combination of these microeconomic drivers of limited job mobility to generate a surplus from a job match (“rent”) that firms may partially share with workers. The common mechanism underlying pass-through of productivity to wages in all of these models is an upward-sloping labour supply curve to the individual firm (Manning, 2020[3]).8 A flatter labour supply curve increases the average level of wages by limiting the scope for employers to mark down wages relative to marginal productivity, and reduces the link between productivity and wages between firms by limiting the dispersion of marginal labour productivity. In other words, higher productivity pass-through can be viewed as undesirable since it reflects barriers to job mobility and misallocation of labour across firms.
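To make the markdown logic concrete, a standard textbook monopsony relation (not a formula from this chapter) links the wage to the firm-level labour supply elasticity $\varepsilon$:

$$w = \frac{\varepsilon}{1+\varepsilon}\, MRPL$$

where $MRPL$ is the marginal revenue product of labour. As $\varepsilon \to \infty$ (a flat labour supply curve, i.e. highly mobile workers), the markdown vanishes and $w \to MRPL$; a low elasticity, i.e. limited job mobility, widens the gap between wages and marginal productivity.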
An alternative view, which does not rely on the wage-setting power of firms resulting from an upward-sloping labour supply curve, is that firms and workers bargain over the distribution of rents. In search and matching models with wage bargaining, workers and firms bargain over rents that arise from barriers to job mobility (Pissarides, 2000[9]). Importantly, these different models raise the question whether firm-level productivity-wage pass-through should be viewed as a symptom of low job mobility and a measure of misallocation of workers across firms, or as the potentially efficient sharing of rents between firms and workers (Box 3.1).9
Given the importance of labour market frictions, firm-level productivity-wage pass-through is expected to be large when labour market frictions are large, which is likely to be reflected in low rates of voluntary job mobility.10 To some extent, voluntary job mobility can be influenced by policies that reduce the cost of job switching for workers, including in the areas of occupational licensing and non-compete clauses; job-search assistance and training; as well as residential mobility and telework. A more competitive product market environment may also raise pass-through (Annex A). In such an environment, firms pass on a large share of productivity gains to product prices and gain a larger share of the market than in an environment with more limited product market competition, which induces a larger adjustment in employment and thus a larger adjustment in wages. Finally, pass-through will tend to be larger the more wage setting takes place at the firm-level (or worker level) rather than at the industry or national levels. Wage-setting institutions such as collectively agreed industry-level wage floors or national minimum wages may constrain firms’ wage-setting choices and thereby weaken the link between firm-level wages and productivity.
While productivity pass-through is partly determined by market-level variables such as job mobility, product market competition and wage institutions, it may vary even within the same firm. Such within-firm differences could reflect monopsonistic wage discrimination as firms set lower wages for workers with fewer opportunities (e.g. women, low-skilled workers); differences in demand for different groups of workers across low- and high-productivity firms, e.g. due to complementarities between technology and skills; or differences in bargaining power.
Ideally, firm-level productivity-wage pass-through is analysed empirically using worker-level linked employer-employee data. The worker-level approach relates worker-level wages to firm-level productivity (see Box 3.2 for the technical details). Its main advantage is that it can provide granular insights into firm-level pass-through, including differences between different groups of workers such as low-skilled and high-skilled workers or men and women. Worker-level data can also be used to construct measures of local labour market concentration to analyse the extent to which the degree of productivity-wage pass-through depends on the number of potential employers. The drawback of the individual-level approach based on worker-level data is that it is only feasible where productivity is available in linked employer-employee data, which is currently only the case in nine of the countries for which data were collected for this study, making it difficult to systematically relate the degree of pass-through to industry and country characteristics.
In the absence of matched employer-employee data with information on productivity at the firm level for a large number of countries and the impossibility of pooling the worker level information across countries due to confidentiality issues, the analysis resorts to an industry-level approach to analyse the cross-industry and cross-country pattern of productivity-wage pass-through. The industry-level approach relates between-firm dispersion in wage premia within industries to between-firm dispersion in productivity. Its main advantage is that it can be applied to countries for which productivity is not available in the linked employer-employee data by computing between-firm dispersion in productivity from external data sources, namely representative firm-level data through the OECD MultiProd database (Berlingieri et al., 2017[11]). The significant variation across countries, industries and over time makes this approach ideal for analysing the structural and institutional determinants of firm-level productivity-wage pass-through. The industry-level empirical analysis is conducted on 13 OECD countries over the period 2001-15 and covers 22 industries for which high-quality data on productivity dispersion are available.
The empirical analysis considers structural and institutional characteristics that relate to job mobility, product market competition, as well as wage-setting institutions (Annex Table 3.B.1). Job mobility is proxied by the share of annual job-to-job transitions in total employment.11 The idea is that in a near perfectly competitive labour market without frictions the elasticity of labour supply is high, so that employed workers can be expected to voluntarily move between jobs as soon as they receive a job offer with a marginally higher wage. The advantage of the rate of job-to-job transitions as a measure of the elasticity of labour supply is that it is likely to exclude most involuntary job transitions, which typically involve transitions into non-employment. Product market competition is proxied by import competition (defined as the share of imported value added in domestic demand) which, in contrast to indicators of product market regulation, is available at the country-industry level of disaggregation, and is unlikely to be correlated with labour market competition. The role of collective bargaining is analysed by focusing on the level of decentralisation in collective bargaining systems, i.e. largely decentralised systems based on firm-level bargaining or more centralised systems with a stronger emphasis on sector or national level bargaining (OECD, 2019[7]).12 The minimum wage is expressed by the ratio of the statutory minimum wage to the median wage of full-time workers.
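As an illustration of the industry-level specification just described, a hedged sketch follows; the file name, column names and the use of statsmodels are assumptions for illustration, not the chapter's actual code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per country-industry-year panel cell; column names are illustrative.
df = pd.read_csv("industry_panel.csv")

# Between-firm wage-premia dispersion regressed on productivity dispersion,
# interacted with job mobility, import competition and bargaining centralisation,
# with country, industry and year fixed effects.
model = smf.ols(
    "wage_premia_dispersion ~ productivity_dispersion"
    " + productivity_dispersion:job_to_job_rate"
    " + productivity_dispersion:import_competition"
    " + productivity_dispersion:centralised_bargaining"
    " + C(country) + C(industry) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})

print(model.summary())
```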
Using the industry-level approach, the elasticity of firm-level wage premia to productivity is estimated to be around 0.15 on average across countries (Figure 3.4). This is in the range of estimates of firm-level productivity-wage pass-through in previous research (Card et al., 2018[12]). The country-by-country estimates based on the individual-level approach suggest that there is significant variation in pass-through across countries, with the pass-through elasticity ranging from 0.08 in the Netherlands to 0.22 in Hungary. Thus the average estimate of productivity pass-through across countries is likely to depend on country composition.
Across firms within the same industry, productivity-wage pass-through tends to be higher for high-skilled workers than low-skilled workers and higher for men than women (Figure 3.5). Differences in pass-through across different groups of workers imply that productivity-wage pass-through affects both wage inequality between firms and inequality within them. With homogeneous pass-through across different groups of workers, larger productivity dispersion only raises between-firm wage inequality. It may additionally raise within-firm wage inequality if pass-through is larger for high-skilled workers and men who typically earn higher wages to begin with. In other words, larger pass-through for high-skilled workers and men provides an explanation for the empirical fact documented in Chapter 2 that within-firm and between-firm wage inequality tend to go together.
The role of labour market frictions is analysed by relating productivity-wage pass-through to (i) the share of job-to-job transitions in employment as a proxy of voluntary job mobility, or (ii) to local labour market concentration as a proxy of employers’ wage-setting power (monopsony). The results suggest that productivity-wage pass-through increases with the degree of labour market frictions as measured by a low rate of job-to-job transitions (Figure 3.6, Panel A). As workers do not easily move from one job to another, low-productivity employers can afford paying low wages relative to high-productivity ones. Conversely, high-productivity employers need to raise wages well above low-productivity ones to poach workers from them. The negative relation between job mobility and productivity pass-through is robust to the use of alternative measures of job mobility (Annex Table 3.B.3, Column 6), as well as to controlling for interactions of productivity with trade in value added and collective bargaining (Annex Table 3.B.2, Column 10).13 The effect of raising job mobility on overall wage inequality through the pass-through channel is quantitatively significant: raising job mobility from the average of countries with low job mobility to the average of those with high mobility – roughly equivalent to an increase from the 20th percentile of job mobility (Greece) to the 80th percentile (Sweden) – would reduce overall wage inequality by about 15%. To put this reduction in perspective, the median increase in wage inequality across countries over the period 1995-2015 was around 10% (see Chapter 2).14
The importance of job mobility for productivity pass-through is confirmed in a variety of sensitivity checks (Annex Table 3.B.3). A first issue with the rate of job-to-job transitions as a measure of job mobility is that it may be positively correlated with the business cycle so that it may pick up the effects of low unemployment rather than job-to-job mobility. However, while the estimated coefficient on the interaction between productivity and unemployment is indeed highly significant, the rate of job-to-job transitions continues to be negatively related to productivity pass-through (Annex Table 3.B.3, Column 2). Similarly, controlling for the employment rate does not significantly change the estimated pass-through coefficient (Annex Table 3.B.3, Column 3). Another issue with the rate of job-to-job transitions is that it may be endogenous to the wage structure. For a given level of productivity dispersion, a more compressed wage structure may reduce incentives for job-to-job mobility. To reduce the risk of endogeneity, an alternative mobility measure is constructed as the product of average job mobility in all other industries in the same country and average job mobility in the same industry in all other countries. The advantage of this measure is that it can reasonably be considered as exogenous to wage-setting in a specific industry and country. The negative relation between industry labour market frictions and productivity pass-through at the firm level is robust to using this transformed variable as an instrument (Annex Table 3.B.3, Column 5).15
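A hedged sketch of how such a leave-one-out instrument can be constructed with pandas (the column names and the simple-average aggregation are assumptions):

```python
import pandas as pd

df = pd.read_csv("mobility_panel.csv")  # columns: country, industry, mobility

# Average mobility in all *other* industries of the same country.
country_sum = df.groupby("country")["mobility"].transform("sum")
country_n = df.groupby("country")["mobility"].transform("count")
other_industries = (country_sum - df["mobility"]) / (country_n - 1)

# Average mobility in the *same* industry in all other countries.
industry_sum = df.groupby("industry")["mobility"].transform("sum")
industry_n = df.groupby("industry")["mobility"].transform("count")
other_countries = (industry_sum - df["mobility"]) / (industry_n - 1)

# The instrument is the product of the two leave-one-out averages, which is
# plausibly exogenous to wage-setting in any given country-industry cell.
df["mobility_iv"] = other_industries * other_countries
```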
Evidence from Portuguese LinkEED data with information on firm-productivity suggests that wages are lower and the degree of wage-productivity pass-through is generally higher in local labour markets where employment is highly concentrated in a small number of employers than elsewhere (Box 3.4). This is consistent with previous studies suggesting that local labour market concentration reduces the elasticity of labour supply as job opportunities in other firms decline (Azar, Marinescu and Steinbaum, 2019[16]). On average, as described in Figure 3.7, the empirical model suggests that wage premia are about 6% lower in firms in highly concentrated labour markets (i.e. at the 75th percentile of the distribution of local labour market concentration) than in less concentrated ones (i.e. those at the 25th percentile). Importantly, however, while wage premia appear to be lower, productivity-wage pass-through appears to be significantly larger in highly concentrated labour markets. The most productive firms pay about 55% higher wage premia than the least productive firms in highly concentrated labour markets. By comparison, in less concentrated labour markets, this pay difference is significantly lower at around 45%. This is likely to reflect the fact that when workers have limited job options outside of their current employer, as is the case in highly concentrated labour markets, low-productivity firms can afford paying lower wages relative to high-productivity ones and nonetheless attract (or retain) a sufficient number of workers. The results account for the role of unobserved factors that affect wages and local labour market concentration and are robust to different definitions of local labour market concentration. In future work of the OECD LinkEED project, this analysis will be extended to a number of other countries for which the necessary data are available.
Pass-through of productivity to wage premia is larger in industries that face stronger import competition as measured by the share of imported value added in final domestic demand (Figure 3.6, Panel B). In a competitive environment, a given change in productivity induces a larger adjustment in employment and thus a larger adjustment in wages, as firms that pass on the productivity gain to product prices gain a larger share of the market than they would in an environment with limited product market competition. According to the empirical estimates, productivity pass-through at the firm level is about 13 percentage points larger in countries and industries with an above-median share of imported value added in final domestic demand than in those with a below-median share (22% compared with 9%). Measures that proxy domestic competition, such as industry concentration, are generally not statistically significant, which could reflect the fact that stronger product market competition may also raise competition for workers, including through the market entry of new firms (Annex Table 3.B.2).16
The decentralisation of collective bargaining tends to increase the pass-through of firm-level productivity to wages (Figure 3.6, Panel C).17 Collective bargaining systems characterised by a predominance of industry-level bargaining (labelled “centralised”) focus on industry-wide productivity in wage setting, whereas systems based on a predominance of firm-level bargaining (labelled “fully or largely decentralised”) allow for larger differentiation of wages according to firm-specific productivity.18 Country-specific evidence on the decentralisation of collective bargaining in Germany supports the cross-country evidence on the positive link between decentralisation and productivity-wage pass-through at the firm level. In Germany, wage setting at the firm level has become more flexible over the past three decades, partly driven by the increased scope within sector-level agreements for bargaining at the firm level and partly by declining collective bargaining coverage, which has tended to raise the pass-through of firm-level productivity to wages (Box 3.5).
Statutory minimum wages (relative to the median wage) also tend to reduce productivity pass-through at the firm level (Figure 3.6, Panel C). A key argument for the use of minimum wages is to contain the wage-setting power of employers in imperfectly competitive labour markets and ensure fair wages for workers, particularly those with limited skills or a weak bargaining position.19 The results suggest that the impact of minimum wages on overall wage dispersion, as documented for example in OECD (2018[24]), is partly driven by a reduction in wage dispersion between firms for a given level of productivity dispersion. The compression of the wage distribution may have adverse effects on the efficiency of labour allocation, but recent evidence for Germany and Israel suggests that this is not necessarily the case: higher minimum wages may force low-productivity firms to raise productivity or exit the market, thereby reducing productivity dispersion (Drucker, Mazirov and Neumark, 2019[25]; Dustmann et al., 2021[26]).
While wage differences between firms originating from productivity-wage pass-through provide incentives for workers to move from lower-productivity to higher-productivity firms, they also raise overall wage inequality (Criscuolo et al., 2020[27]). The results in this chapter suggest that the extent of firm-level productivity-wage pass-through is shaped by the degree of competition in labour and product markets, as well as the nature of wage-setting institutions. Conditional on productivity dispersion, wage dispersion between firms increases with frictions in the labour market and is amplified by strong product market competition and decentralised collective bargaining. The key policy question raised by these empirical results is how to promote productivity-enhancing reallocation without widening pay differences between firms, especially in a context of potentially large shifts in labour demand across firms and industries in the wake of the COVID-19 crisis.
The main policy implication emerging from this chapter is that facilitating voluntary job mobility would not only raise productivity growth by easing reallocation from low- to high-productivity firms but may also limit wage dispersion between firms by weakening its link with productivity dispersion. In the absence of complementary measures to facilitate job mobility and strengthen competition in labour markets, trade and competition-friendly product market reforms, as well as the gradual decentralisation of collective bargaining in countries with a strong tradition of sector-level bargaining, risk increasing overall inequality by widening wage dispersion between firms. Policies that would facilitate job mobility and strengthen competition in labour markets include:
• Limiting legal and contractual barriers to job mobility can promote competition between employers for workers and strengthen worker incentives for taking up new opportunities. Opportunities for job mobility tend to be more limited in more concentrated local labour markets (Naidu, Posner and Weyl, 2018[28]; OECD, 2019[29]) and where the importance of non-compete clauses, no-poaching agreements, and occupational licensing requirements is greater (Bambalaite, Nicoletti and von Rueden, 2020[30]; Kleiner and Xu, 2020[31]; Lipsitz and Starr, 2019[32]).
• Strengthening adult learning and taking a more comprehensive approach to activation that goes beyond promoting access to employment would help workers find better jobs in other firms. For instance, public employment services in the form of job-search assistance, training and career counselling could be made available to workers in jobs that are supported by job retention schemes that were used on a massive scale in most OECD countries to curb job losses as a result of the COVID-19 crisis (OECD, 2020[33]; OECD, 2020[34]). More generally, public employment services could be made available to all workers who would like to progress in their careers but face significant barriers in moving to better jobs, including people in non-standard forms of work, as well as people who are currently employed but lack relevant skills or live in lagging regions. This would require a more active role of public employment services in advising workers on adult learning opportunities, as well as collecting information on skill requirements of prospective employers.
• Mobility across geographical areas could be fostered by reforming housing policies, including by redesigning land-use and planning policies that raise house price differences across locations, reducing transaction taxes on selling and buying a home, and relaxing overly strict rental regulations (Causa and Pichelmann, 2020[35]). Social cash and in-kind expenditure on housing could also support residential mobility by raising the affordability of housing for low-income households, especially if such expenditure is designed in such a way that benefits are fully portable across geographical areas.
• An expansion of telework could partly compensate for limited geographical mobility. A significant fraction of jobs can potentially be conducted remotely – between one-quarter and one-third of all jobs according to some estimates (Dingel and Neiman, 2020[36]; Boeri, Caiumi and Paccagnella, 2020[37]; OECD, 2020[38]) – potentially raising job opportunities for workers and reducing the costs of moving from one job to another. Promoting telework will require strengthening digital infrastructure to increase network access and speed for all workers, as well as digital adoption by firms; enhancing workers’ ICT skills through training; and raising employers’ management capabilities through the diffusion of managerial best practices (Nicoletti, von Rueden and Andrews, 2020[5]; OECD, 2020[38]).
Significant barriers to job mobility are likely to remain even after addressing the policy distortions that contribute to labour market frictions. Workers differ in their preferences for jobs in different firms, industries and geographical areas, as well as in their ability to perform them, and firms differ in terms of non-wage working conditions and skill requirements, which creates inherent barriers to job mobility. Moreover, raising job mobility may not be the most effective policy to address within-firm wage inequality, which is likely to mainly reflect differences in individual worker characteristics such as skills or gender. Skills policies that allow all workers to acquire and update relevant skills over the life cycle, and policies that raise women’s opportunities to work in high-productivity firms, including through flexible work schedules and telework, will need to complement policies to raise job mobility. Tax and benefit systems can also protect workers from poverty and financial hardship when, despite measures to promote mobility, skills and working-time flexibility, their job opportunities remain limited.
In principle, wage-setting institutions in the form of minimum wages and collective bargaining could help to contain the wage-setting power of firms in labour markets with limited job mobility, thereby reducing pay differences between firms. In areas and occupations where wages are well below workers’ productivity, this could even raise employment by increasing labour market participation among people who are unwilling to work at current wages. However, there is a risk that wage floors are set at levels in excess of workers’ productivity, which would reduce employment. This risk could be reduced by combining centralised collective bargaining with sufficient scope for further negotiation at the firm level, and by focusing minimum wage increases on areas and groups for which initial wage levels are low. Ongoing research based on a comparison between Norway and the United States further suggests that wage compression between firms does not necessarily reduce the efficiency of labour allocation between firms (Hijzen, Zwysen and Lillehagen, 2021[39]). The key to achieving high productivity through an efficient allocation of labour is to complement wage-setting institutions that constrain the ability of firms to pay different wages for similar workers with measures that promote innovation in low-productivity firms and strengthen job mobility.
## References
[43] Abowd, J. et al. (2012), “Persistent inter‐industry wage differences: rent sharing and opportunity costs”, IZA Journal of Labor Economics, Vol. 1/1, p. 7, http://dx.doi.org/10.1186/2193-8997-1-7.
[1] Andrews, D., C. Criscuolo and P. Gal (2016), The Best versus the Rest: The Global Productivity Slowdown, Divergence across Firms and the Role of Public Policy, OECD Publishing, Paris, https://doi.org/10.1787/24139424 (accessed on 26 June 2019).
[16] Azar, J., I. Marinescu and M. Steinbaum (2019), “Measuring Labor Market Power Two Ways”, AEA Papers and Proceedings, Vol. 109, pp. 317-321, http://dx.doi.org/10.1257/pandp.20191068.
[17] Azar, J., I. Marinescu and M. Steinbaum (2017), Labor Market Concentration, National Bureau of Economic Research, Cambridge, MA, http://dx.doi.org/10.3386/w24147.
[30] Bambalaite, I., G. Nicoletti and C. von Rueden (2020), “Occupational entry regulations and their effects on productivity in services: Firm-level evidence”, OECD Economics Department Working Papers, No. 1605, OECD Publishing, Paris, https://dx.doi.org/10.1787/c8b88d8b-en.
[4] Barrero, J., N. Bloom and S. Davis (2020), COVID-19 Is Also a Reallocation Shock, National Bureau of Economic Research, Cambridge, MA, http://dx.doi.org/10.3386/w27137.
[41] Barth, E. et al. (2016), “It’s Where You Work: Increases in the Dispersion of Earnings across Establishments and Individuals in the United States”, Journal of Labor Economics, Vol. 34/S2, pp. S67-S97, http://dx.doi.org/10.1086/684045.
[20] Bassanini, A., C. Batut and E. Caroli (2019), “Labor Market Concentration and Stayers’ Wages: Evidence from France”, SSRN Electronic Journal, http://dx.doi.org/10.2139/ssrn.3506243.
[11] Berlingieri, G. et al. (2017), “The Multiprod project: A comprehensive overview”, OECD Science, Technology and Industry Working Papers, No. 2017/04, OECD Publishing, Paris, https://doi.org/10.1787/2069b6a3-en.
[37] Boeri, T., A. Caiumi and M. Paccagnella (2020), “Mitigating the work-safety trade-off?”, Covid Economics: Vetted and Real-Time Papers, Vol. 1/2, pp. 60-66, https://cepr.org/sites/default/files/news/CovidEconomics2.pdf.
[14] Bøler, E., B. Javorcik and K. Ulltveit-Moe (2018), “Working across time zones: Exporters and the gender wage gap”, Journal of International Economics, Vol. 111, pp. 122-133, http://dx.doi.org/10.1016/j.jinteco.2017.12.008.
[44] Burdett, K. and D. Mortensen (1998), “Wage Differentials, Employer Size, and Unemployment”, International Economic Review, Vol. 39/2, p. 257, http://dx.doi.org/10.2307/2527292.
[12] Card, D. et al. (2018), “Firms and Labor Market Inequality: Evidence and Some Theory”, Journal of Labor Economics, Vol. 36/S1, pp. S13-S70, http://dx.doi.org/10.1086/694153.
[22] Card, D., J. Heining and P. Kline (2013), “Workplace Heterogeneity and the Rise of West German Wage Inequality”, The Quarterly Journal of Economics, Vol. 128/3, pp. 967-1015, http://dx.doi.org/10.1093/qje/qjt006.
[23] Carlsson, M., J. Messina and O. Skans (2016), “Wage Adjustment and Productivity Shocks”, The Economic Journal, Vol. 126/595, pp. 1739-1773, http://dx.doi.org/10.1111/ecoj.12214.
[40] Causa, O., N. Luu and M. Abendschein (2021), Labour market transitions across OECD countries: stylised facts.
[35] Causa, O. and J. Pichelmann (2020), “Should I stay or should I go? Housing and residential mobility across OECD countries”, OECD Economics Department Working Papers, No. 1626, OECD Publishing, Paris, https://dx.doi.org/10.1787/d91329c2-en.
[27] Criscuolo, C. et al. (2020), “Workforce composition, productivity and pay: The role of firms in wage inequality”, OECD Economics Department Working Papers, No. 1603, OECD Publishing, Paris, https://doi.org/10.1787/52ab4e26-en.
[36] Dingel, J. and B. Neiman (2020), How Many Jobs Can be Done at Home?, National Bureau of Economic Research, Cambridge, MA, http://dx.doi.org/10.3386/w26948.
[25] Drucker, L., K. Mazirov and D. Neumark (2019), Who Pays for and Who Benefits from Minimum Wage Increases? Evidence from Israeli Tax Data on Business Owners and Workers, National Bureau of Economic Research, Cambridge, MA, http://dx.doi.org/10.3386/w26571.
[26] Dustmann, C. et al. (2021), “Reallocation Effects of the Minimum Wage”, The Quarterly Journal of Economics, http://dx.doi.org/10.1093/qje/qjab028.
[6] Gal, P. et al. (2019), “Digitalisation and productivity: In search of the holy grail – Firm-level empirical evidence from EU countries”, OECD Economics Department Working Papers, No. 1533, OECD Publishing, Paris, https://dx.doi.org/10.1787/5080f4b6-en.
[21] Gürtzgen, N. (2009), “Wage Insurance within German Firms: Do Institutions Matter?”, SSRN Electronic Journal, http://dx.doi.org/10.2139/ssrn.1494302.
[8] Haltiwanger, J. and J. Spletzer (2020), Between Firm Changes in Earnings Inequality: The Dominant Role of Industry Effects, National Bureau of Economic Research, Cambridge, MA, http://dx.doi.org/10.3386/w26786.
[39] Hijzen, A., W. Zwysen and M. Lillehagen (2021), “Job mobility, reallocation and wage growth: A tale of two countries”, OECD Social, Employment and Migration Working Papers, No. 254, OECD Publishing, Paris, https://dx.doi.org/10.1787/807becdf-en.
[45] Jean, S. and G. Nicoletti (2015), “Product market regulation and wage premia in Europe and North America: An empirical investigation”, International Economics, Vol. 144, pp. 1-28, http://dx.doi.org/10.1016/J.INTECO.2015.04.005.
[31] Kleiner, M. and M. Xu (2020), “Occupational Licensing and Labor Market Fluidity”, NBER Working Paper No. 27568, http://dx.doi.org/10.3386/w27568.
[32] Lipsitz, M. and E. Starr (2019), “Low-Wage Workers and the Enforceability of Non-Compete Agreements”, SSRN Electronic Journal, http://dx.doi.org/10.2139/ssrn.3452240.
[3] Manning, A. (2020), “Monopsony in Labor Markets: A Review”, ILR Review, Vol. 74/1, pp. 3-26, http://dx.doi.org/10.1177/0019793920922499.
[47] Manning, A. (2013), Monopsony in Motion, Princeton University Press, http://dx.doi.org/10.2307/j.ctt5hhpvk.
[42] Manning, A. (2011), “Imperfect Competition in the Labor Market”, in Handbook of Labor Economics, Elsevier, http://dx.doi.org/10.1016/S0169-7218(11)02409-9.
[46] Manning, A. (1995), “How Do We Know That Real Wages Are Too High?”, The Quarterly Journal of Economics, Vol. 110/4, pp. 1111-1125, http://dx.doi.org/10.2307/2946650.
[19] Marinescu, I., I. Ouss and L. Pape (2020), “Wages, Hires and Labor Market Concentration”, IZA Discussion Paper No. 13244, https://ftp.iza.org/dp13244.pdf.
[18] Martins, P. (2018), “Making their own weather? Estimating employer labour-market power and its wage effects”, Working Papers, https://ideas.repec.org/p/cgs/wpaper/95.html (accessed on 9 February 2020).
[13] Matsudaira, J. (2014), “Monopsony in the Low-Wage Labor Market? Evidence from Minimum Nurse Staffing Regulations”, Review of Economics and Statistics, Vol. 96/1, pp. 92-102, http://dx.doi.org/10.1162/rest_a_00361.
[28] Naidu, S., E. Posner and G. Weyl (2018), “Antitrust Remedies for Labor Market Power”, Harvard Law Review, Vol. 132/2, p. 536.
[5] Nicoletti, G., C. von Rueden and D. Andrews (2020), “Digital technology diffusion: A matter of capabilities, incentives or both?”, European Economic Review, Vol. 128, p. 103513, http://dx.doi.org/10.1016/j.euroecorev.2020.103513.
[34] OECD (2020), “Issue Note 5: Flattening the unemployment curve? Policies to support workers’ income and promote a speedy labour market recovery”, in OECD Economic Outlook, Volume 2020 Issue 1, OECD Publishing, Paris, https://dx.doi.org/10.1787/1a9ce64a-en.
[33] OECD (2020), “Job retention schemes during the COVID-19 lockdown and beyond”, OECD Policy Responses to Coronavirus (COVID-19), OECD Publishing, Paris, https://dx.doi.org/10.1787/0853ba1d-en.
[38] OECD (2020), “Productivity gains from teleworking in the post COVID-19 era : How can public policies make it happen?”, OECD Policy Responses to Coronavirus (COVID-19), OECD Publishing, Paris, https://doi.org/10.1787/a5d52e99-en.
[7] OECD (2019), Negotiating Our Way Up: Collective Bargaining in a Changing World of Work, OECD Publishing, Paris, https://doi.org/10.1787/1fd2da34-en.
[29] OECD (2019), OECD Employment Outlook 2019: The Future of Work, OECD Publishing, Paris, https://dx.doi.org/10.1787/9ee00155-en.
[15] OECD (2018), OECD Employment Outlook 2018, OECD Publishing, Paris, https://dx.doi.org/10.1787/empl_outlook-2018-en.
[24] OECD (2018), “The role of collective bargaining systems for good labour market performance”, in OECD Employment Outlook 2018, OECD Publishing, Paris, http://dx.doi.org/10.1787/empl_outlook-2018-7-en.
[9] Pissarides, C. (2000), Equilibrium Unemployment Theory, MIT Press.
[48] Robinson, J. (1933), The Economics of Imperfect Competition, MacMillan, London.
[2] Syverson, C. (2011), “What Determines Productivity?”, Journal of Economic Literature, Vol. 49/2, pp. 326-365, http://dx.doi.org/10.1257/jel.49.2.326.
[10] Yashiv, E. (2007), “Labor search and matching in macroeconomics”, European Economic Review, Vol. 51/8, pp. 1859-1895, http://dx.doi.org/10.1016/j.euroecorev.2007.06.024.
In a perfectly competitive labour market, there are no frictions related to the costs of finding and changing jobs that limit workers’ job options outside of their firms. In such a setting, all firms pay the single market wage irrespective of their productivity since no worker would accept a lower wage and paying a higher wage would reduce firms’ profits. In formal terms, this implies that firms are price-takers in labour markets, with the labour supply curve being flat (“perfectly elastic”). Workers receive a wage equal to the market wage, which is in turn equal to workers’ marginal product. Importantly, the market wage is independent of the productivity of the firm for which they work.
In imperfectly competitive labour markets with frictions related to the cost of finding and changing jobs, or preferences over jobs’ non-wage characteristics, workers’ job options outside of their firms are limited. Consequently, not all workers quit when paid less than their marginal product and individual firms face an upward-sloping labour supply curve, which describes reservation wages of marginal workers (Annex Figure 3.A.1).20 Assuming that firms are unable to observe the outside options of individual workers (i.e. they cannot price discriminate between them), the cost of attracting additional workers (i.e. the marginal cost of labour) typically exceeds their reservation wage.21 Firms set wages so that labour supply to the firm corresponds to the profit-maximising employment levels, i.e. where the marginal revenue product of labour (MRP) and the marginal cost of labour (MCL) are the same.22
As productivity increases, at each level of employment the more productive firm is in principle willing to pay a higher wage (i.e. labour demand shifts outwards), since higher productivity allows it to absorb higher labour costs. Thus, firm-level wages co-move with productivity even for workers with identical earnings characteristics. Labour demand of the high-productivity firm (firm 1) is above that of the low-productivity firm (firm 0), resulting in a positive wage gap between the high-productivity and the low-productivity firm (w1 – w0). In other words, there is positive pass-through of productivity to wages at the firm level, leading to dispersion in wages that is proportional to productivity dispersion. By contrast, in perfectly competitive labour markets with perfectly elastic labour supply, firms have no wage-setting power and productivity dispersion does not translate into wage dispersion between firms.
The degree of productivity pass-through (i) declines with the elasticity of labour supply; (ii) increases with the elasticity of labour demand; and (iii) declines with the level of institutional wage floors (Annex A).
I. A decline in the elasticity of labour supply rotates the labour supply curve anti-clockwise, so that a given productivity difference between firms translates into a larger equilibrium wage difference. The elasticity of labour supply increases with job mobility, which is in turn partly determined by labour market frictions (Annex Figure 3.A.2, Panel A).
II. An increase in the labour demand elasticity rotates the labour demand curve anti-clockwise, so that a given productivity difference between firms – as measured by the vertical distance in the labour demand curve – translates into a larger difference in firm wage premia (Annex Figure 3.A.2, Panel B). The elasticity of labour demand increases with competition in product markets.
III. Collectively agreed wage floors at the industry level or statutory minimum wages may raise wages of low-productivity firms above their profit-maximising levels, which would reduce wage differences between firms at any given productivity difference.
A reduction in the elasticity of labour supply rotates the labour-supply curve anti-clockwise, giving rise to an upward-sloping labour-supply curve (Annex Figure 3.A.2, Panel A). The productivity difference between a less productive firm 0 and a more productive firm 1 – as reflected by the vertical distance between their labour demand curves, LD0 and LD1 – translates into a difference in firm wage premia (w1(B)-w0(B)). The pass-through of productivity to wages (and wage dispersion at any given level of productivity dispersion) declines with the elasticity of labour supply: it is smaller, the flatter the labour supply curve. At the same time, wages are marked down relative to marginal labour productivity, implying that workers earn less on average in the imperfectly competitive equilibrium than in the perfectly competitive one.
The elasticity of labour supply to the individual firm is partly determined by job mobility, which in turn depends, among other things, on local labour market concentration; the number of job vacancies per firm; hiring and firing costs (e.g. employment protection); the availability of easily accessible information on job opportunities (e.g. on-line platforms, public employment services); and regulatory barriers to mobility such as occupational licensing or distortions in the housing market (e.g. high taxes on housing transactions). In some cases, job mobility may also be held back by tacit agreements between firms not to hire workers from each other (no-poaching agreements) or contract clauses that prevent workers from moving to competing firms during a certain period (non-compete clauses).
An increase in the elasticity of labour demand rotates the labour-demand curve anti-clockwise, making the labour-demand curve flatter (Annex Figure 3.A.2, Panel B). The productivity difference between two firms, as reflected by the vertical distance in the labour demand curve, translates into a larger difference in firm wage premia the higher the elasticity of labour demand (w1(B)-w0 compared with w1(A)-w0). The wage-elasticity of labour demand increases with the price-elasticity of final demand (product market competition) and the elasticity of substitution between labour and other factors of production, such as capital or services (automation, outsourcing and offshoring).
A pro-competitive environment in product markets, which could for instance reflect domestic product market policies or trade policies, tends to raise the price-elasticity of final demand and thereby the wage-elasticity of labour demand. In such an environment, a change in productivity induces a larger response of output and employment at any given level of wages (a larger horizontal shift in labour demand). Given an upward sloping labour supply curve, wages need to adjust by more to accommodate the shift in labour demand.
Technology also shapes the transmission of productivity to wages, but this channel is likely to be less relevant in practice. Automation and offshoring increase the ease with which labour can be substituted by capital or imported intermediate inputs and hence increase the sensitivity of firm employment to wages. In imperfectly competitive labour markets, this tends to mitigate the effects of productivity dispersion on wage dispersion by reducing the labour intensity of production in more productive firms. Given the second-order role of technology via this channel in the present framework, it is not analysed empirically.
Collectively agreed wage floors at the industry level or statutory minimum wages may raise wages of low-productivity firms above their profit-maximising levels ($w_0$ in Annex Figure 3.A.1). This would reduce wage premia dispersion between firms at any given level of productivity dispersion, i.e. it would weaken the degree of firm-level productivity-wage pass-through. The co-ordination of collective bargaining outcomes across sectors by means of wage norms or wage ceilings would also tend to reduce wage premia differences but mainly between industries rather than between firms (OECD, 2019[7]). By contrast, the decentralisation of collective bargaining from the industry to the firm level is likely to increase firm-level productivity-wage pass-through relative to either industry-level or national-level collective bargaining.
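To make the three mechanisms above concrete, the following minimal sketch solves the static monopsony model numerically (the linear functional forms and all parameter values are illustrative assumptions, not the specification behind the chapter’s estimates):

```python
def monopsony_wage(p, w_bar=1.0, s=1.0, d=1.0, w_floor=None):
    """Equilibrium wage of a firm with productivity p.

    Inverse labour supply:   w(L)   = w_bar + L/s   (flatter, i.e. more elastic, as s grows)
    Marginal cost of labour: MCL(L) = w_bar + 2L/s
    Labour demand:           MRP(L) = p - d*L       (flatter, i.e. more elastic, as d falls)
    Setting MRP = MCL gives L* = (p - w_bar)/(d + 2/s) and w* = w_bar + L*/s.
    """
    L = (p - w_bar) / (d + 2.0 / s)
    w = w_bar + L / s
    if w_floor is not None:
        w = max(w, w_floor)  # a binding wage floor lifts low-productivity wages
    return w                 # (the employment response to the floor is ignored here)

p0, p1 = 2.0, 3.0            # low- and high-productivity firm

for s in (0.5, 1.0, 4.0):    # (I) pass-through falls as labour supply gets more elastic
    gap = monopsony_wage(p1, s=s) - monopsony_wage(p0, s=s)
    print(f"supply slope s={s}: wage gap w1-w0 = {gap:.3f}")

for d in (2.0, 1.0, 0.5):    # (II) pass-through rises as labour demand gets more elastic
    gap = monopsony_wage(p1, d=d) - monopsony_wage(p0, d=d)
    print(f"demand slope d={d}: wage gap w1-w0 = {gap:.3f}")

# (III) a wage floor compresses the gap at given productivity dispersion
gap_floor = monopsony_wage(p1, w_floor=1.4) - monopsony_wage(p0, w_floor=1.4)
print(f"with wage floor: wage gap w1-w0 = {gap_floor:.3f}")
```

With these forms the equilibrium wage is $w^* = \bar{w} + (p - \bar{w})/(sd + 2)$, so the wage gap between the two firms shrinks as supply becomes more elastic (larger s), widens as demand becomes more elastic (smaller d), and is compressed by a binding wage floor, mirroring points I-III above.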
## Notes
← 1. This chapter has been written by an OECD team consisting of Chiara Criscuolo, Alexander Hijzen, Michael Koelle and Cyrille Schwellnus with contributions of: Erling Barth (Institute for Social Research Oslo, NORWAY), Wen-Hao Chen (Statcan, CANADA), Richard Fabling (independent, NEW ZEALAND), Priscilla Fialho (OECD, PORTUGAL), Alfred Garloff (IAB, GERMANY), Katarzyna Grabska-Romagosa (Maastricht University, THE NETHERLANDS), Ryo Kambayashi (Hitotsubashi University, JAPAN), Valerie Lankester and Catalina Sandoval (Central Bank of Costa Rica, COSTA RICA), Balazs Murakőzy (University of Liverpool, HUNGARY), Oskar Nordström Skans (Uppsala University, SWEDEN), Satu Nurmi (Statistics Finland/VATT, FINLAND), Balazs Stadler (OECD), Rudy Verlhac (OECD), Richard Upward (University of Nottingham, UNITED KINGDOM), and Wouter Zwysen (ETUI, formerly OECD). Orsetta Causa (OECD, ECO) kindly provided the job-to-job mobility data used in the empirical analysis. Rudy Verlhac (OECD, STI) helped with the access and the analysis based on the MultiProd data. For details on the data used in this chapter please see the standalone Data Annex and Disclaimer Annex.
← 2. Weakening the firm-level link between productivity and wage premia should not be viewed as a policy objective per se but as the consequence of policies that remove distortions in the economy that reduce job mobility.
← 3. To the extent that job mobility may have direct effects on productivity dispersion between firms, the overall downward effect of higher job mobility on wage inequality may be larger or smaller. It may be larger if higher job mobility forces low-productivity firms out of business, but it may be smaller if increased sorting of high-skilled workers into high-technology firms raises productivity in the technologically most advanced firms.
← 4. In formal terms, firm premia are recovered as the estimated firm effects in the equation $\ln w_{ijt} = x_{it}\beta + z_{jt} + \epsilon_{ijt}$, where $w_{ijt}$ denotes the wage of worker $i$ in firm $j$ at time $t$; $x_{it}$ denotes a vector of observable worker characteristics; $\beta$ denotes the estimated return to these characteristics; $z_{jt}$ denotes the fixed effect of firm $j$ in year $t$; and $\epsilon_{ijt}$ denotes the error term (Barth et al., 2016[41]).
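As a minimal illustration of this estimation (simulated data; the dummy-variable OLS approach and time-invariant firm effects are simplifying assumptions for exposition, not the chapter’s actual procedure):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_firms, n_obs = 10, 1000

# Simulated matched employer-employee data with one observable characteristic.
df = pd.DataFrame({
    "firm":   rng.integers(n_firms, size=n_obs),
    "tenure": rng.uniform(0, 20, size=n_obs),        # plays the role of x_it
})
true_premia = rng.normal(0, 0.2, n_firms)            # plays the role of z_j
df["log_wage"] = 2.0 + 0.02 * df["tenure"] \
    + true_premia[df["firm"]] + rng.normal(0, 0.1, n_obs)

# OLS of log wages on observables plus firm dummies; the estimated dummy
# coefficients are the firm wage premia (relative to firm 0).
X = np.column_stack([
    np.ones(n_obs),
    df["tenure"].to_numpy(),
    pd.get_dummies(df["firm"], drop_first=True).to_numpy(dtype=float),
])
beta, *_ = np.linalg.lstsq(X, df["log_wage"].to_numpy(), rcond=None)
premia_hat = np.concatenate([[0.0], beta[2:]])
print(np.corrcoef(premia_hat, true_premia - true_premia[0])[0, 1])  # close to 1
```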
← 5. Accounting for unobservable differences in workforce composition between firms slightly reduces the contribution of firm wage premia to the overall level of wage dispersion, but has no systematic impact on their contribution to changes in overall wage dispersion.
← 6. A large body of evidence has documented significant and persistent inter-industry wage differentials (Abowd et al., 2012[43]; Jean and Nicoletti, 2015[45]).
← 7. The role of regions appears to be even smaller. In the restricted number of countries where information on the location of the firm is available, dispersion in wage premia between regions contributes at most 10% to the within-industry dispersion of firm wage premia. In this sense, wage premia dispersion between firms does not simply reflect compensation for higher housing costs in dynamic urban areas.
← 8. This mechanism is illustrated in more detail using the simple static monopsony model in Annex A. In static and dynamic monopsony models, high-productivity firms unilaterally post high wages to attract workers who are imperfectly mobile. Wage setting in the static monopsony model is analysed in Robinson (1933[48]), Manning (2013[47]), Card et al. (2018[12]) and Lamadon et al. (2020), while analyses of the dynamic monopsony model include Burdett and Mortensen (1998[44]) and Manning (2011[42]). An alternative micro-foundation for an upward-sloping labour supply curve is provided by efficiency wage models, in which the effective labour input that firms receive rises with the wage because higher-paid workers exert more effort (Manning, 1995[46]).
← 9. In the static monopsony model, wages of all firms are marked down by a constant factor relative to their marginal products of labour but firm-level wages are proportional to firm-level productivities.
← 10. Job mobility is also determined by worker preferences over non-wage characteristics of jobs (Manning, 2013[47]).
← 11. The measure is calculated at the country-industry level from the European Labour Force Survey over the period 2000-17 (Causa, Luu and Abendschein, 2021[40]).
← 12. The distinction between decentralised and more centralised collective bargaining systems is based on the OECD taxonomy of collective bargaining systems, which consists of three main building blocks (OECD, 2019[7]): i) the level of bargaining at which collective agreements are negotiated (e.g. firm level, sector level or even national level); ii) the role of wage co-ordination between sector-level (or firm-level) agreements to take account of macroeconomic conditions; iii) the degree of flexibility for firms to modify the terms set by higher-level agreements.
← 13. The results are qualitatively unchanged when using a measure of job-to-job mobility that accounts for transitions from other industries in addition to within-industry transitions.
← 14. Average pass-through when job mobility is low is 25% versus 7% when job mobility is high (Figure 3.6). At the median value of productivity dispersion (corresponding to France, where the variance of log productivity was 0.68 in the last available year) this translates into a 0.037 log-point difference in overall wage variance, which is about 15% of the median overall wage variance across countries in the last available year. The average annual rate of job-to-job transitions is about 5.8% when job mobility is low (roughly corresponding to the value for Greece, Annex Figure 3.B.1), while it is around 10% when job mobility is high (roughly corresponding to the value for Sweden).
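A back-of-the-envelope check of this figure, under our assumption (not stated in the text) that the between-firm wage variance contributed by this channel scales with the squared pass-through coefficient times the productivity variance: $(0.25^2 - 0.07^2) \times 0.68 \approx 0.039$ log points, broadly in line with the 0.037 reported.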
← 15. The negative relation between job mobility and pass-through is also robust to a more flexible fixed effects structure (Annex Table 3.B.5) and replacing discrete explanatory variables with continuous variables (Annex Table 3.B.4).
← 16. A complementary explanation may be that measures of industry concentration may not be meaningful indicators of competitive pressures in highly globalised economies, especially in manufacturing industries. Additionally, industry concentration could partly reflect large economies of scale or scope that do not necessarily imply a lack of product market competition so long as market entry is contestable. Unreported results suggest that more competition-friendly product market regulation reduces pass-through, but product market regulation indicators are not available at the country-industry level, and the effect on pass-through is thus identified through cross-country variation and variation over time only.
← 17. The associations are effectively based on comparisons of the average degree of productivity pass-through within sectors across groups of countries with different collective bargaining systems. Since collective bargaining systems tend to be deeply embedded in a country’s broader institutional set-up, it is difficult to isolate the impact of specific collective bargaining systems in the present framework.
← 18. For the purposes of the econometric analysis underlying Figure 3.6, “centralised” and “organised decentralised” collective bargaining systems are grouped together. Centralised countries include France, Italy and Portugal; organised decentralised countries include Austria, Germany, the Netherlands, Norway and Sweden, and largely or fully decentralised countries include Canada, Costa Rica, Hungary, Japan and New Zealand.
← 19. The use of minimum wages has also been justified on the grounds that they i) promote work incentives by making work pay; ii) boost tax revenue and/or tax compliance by limiting the scope for wage under-reporting; and iii) anchor wage bargaining.
← 20. Firm-level and aggregate labour elasticities are fundamentally different concepts. Firm-level elasticities capture the degree of competition between firms for workers (or opportunities of workers outside of the firm) whereas aggregate elasticities capture the decision to participate in the labour market.
← 21. The inability or unwillingness of firms to price discriminate between workers implies that existing workers are paid the same wage as newly hired workers. This means that labour costs increase more quickly when expanding employment than is suggested by the labour supply curve. If firms could perfectly observe workers’ reservation wages, the marginal cost of labour and the labour supply curve would coincide.
← 22. Note that the wage set by the firm is below the marginal revenue product of labour (i.e. wages are “marked down”) in inverse proportion to the elasticity of labour supply to the firm. If firms could perfectly observe workers’ reservation wages, equilibrium wages would be equal to the marginal revenue product of labour but, since marginal revenue products are not equalised across firms, wages would nonetheless be proportional to the firm’s average productivity. In other words, firm-level productivity-wage pass-through does not hinge on the assumption of unobservable reservation wages and marked down wages, but on an upward sloping labour supply curve.
|
Author:
Subject: Geometry
Material Type: Lesson Plan
Level: Middle School, Grade 6
Provider: Pearson
Tags: 6th Grade Mathematics, Measurement, Parallelograms, Triangles
Language: English
Media Formats: Interactive, Text/HTML
# Lesson Overview
Students find the area of a triangle by putting together a triangle and a copy of the triangle to form a parallelogram with the same base and height as the triangle. Students also create several examples of triangles and look for relationships among the base, height, and area measures. These activities lead students to develop and understand a formula for the area of a triangle.
# Key Concepts
To find the area of a triangle, you must know the length of a base and the corresponding height. The base of a triangle can be any of the three sides. The height is the perpendicular distance from the vertex opposite the base to the line containing the base. The height can be found inside or outside the triangle, or it can be the length of one of the sides.
You can put together a triangle and a copy of the triangle to form a parallelogram with the same base and height as the triangle. The area of the original triangle is half of the area of the parallelogram. Because the area formula for a parallelogram is A = bh, the area formula for a triangle is A = $\frac{1}{2}$bh.
# Goals and Learning Objectives
• Develop and explore the formula for the area of a triangle.
# Lesson Guide
Have students work in pairs to discuss the statements.
ELL: Students may have difficulty determining the perpendicular height of a triangle, especially when the height needs to be sketched outside of the triangle. Use the edges of an index card to help students get a better idea of how to determine the perpendicular height. Model this activity for students. The index card is aligned with the base and then shifted to the left (or right) until the vertex opposite the base touches the index card. This clearly shows that the height is perpendicular to the base.
# Mathematics
Have students look at the triangles. Make sure they understand what the base and height of a triangle are. Point out that the height can be inside the triangle, outside the triangle, or one of the sides of the triangle.
## Opening
Discuss the following statements.
• The base of a triangle can be any of the three sides.
• The height of a triangle is the perpendicular distance from the base to the vertex opposite the base.
• As shown in the diagram, the height can be inside or outside the triangle, or it can be one of the sides.
# Lesson Guide
Partners should discuss how they can arrange a triangle and its copy to form a parallelogram. Students should recognize that the area of each triangle is half the area of the parallelogram made from two of the same-size triangles. Since the area of the parallelogram is A = bh, the area of the triangle is A = $\frac{1}{2}$bh.
SWD: Some students may not immediately see how to arrange the triangles into a parallelogram. If students are struggling to the point of frustration, model how to use two triangles to create a small parallelogram. Allow students to use paper cutouts if needed.
# Introduction to Triangles
Can you take any triangle, copy it, and then combine the two triangles so that they form a parallelogram?
Try it with triangles like the ones in the diagram.
• What do your results tell you about the area of a triangle?
• Write a formula for the area of a triangle.
# Lesson Guide
Discuss the Math Mission. Students will explore the formula for the area of a triangle.
SWD: It may be challenging for some students to remember which formulas correspond for each shape. Have students create resources for themselves to refer to throughout the unit (e.g., note cards, digital sticky notes, anchor charts, their notebook) that include the shape's name, the formula for area, and an image that represents the shape.
## Opening
Explore the formula for the area of a triangle.
# Lesson Guide
Have partners answer the questions and work on the presentation together.
SWD: Assign students concrete lengths for base and heights. If students are using cutouts in addition to the interactive, have them label the cutouts with the resulting areas. This will help them to understand the relationship between the areas of the triangles and the combined total of the areas as the area of the original polygon.
# Mathematical Practices
Mathematical Practice 7: Look for and make use of structure.
Identify students who understand how the operations in the formula impact the answer (i.e., increasing a factor increases the product; decreasing a factor decreases the product; or keeping the factors the same does not change the product).
Mathematical Practice 8: Look for and express regularity in repeated reasoning.
Watch for students who, through repeated trials, reason that if one variable is constant and the other variable increases (or decreases), then the area increases (or decreases) as well.
Look for students who, through repeated trials, reason that if both variables remain constant, the area remains constant as well.
# Interventions
Student has difficulty getting started.
• What does it mean to keep the height or base constant?
• Remember that if the vertex moves parallel to the base, then it remains the same distance from the base.
• How can you move the vertex parallel to the base?
Student changes the variable only one way, just increasing (or decreasing) its value.
• Can you decrease (or increase) the height? What happens?
Student works unsystematically.
• Can you organize the information for the base, height, and area in a table?
• Look at the values in the rows of your table. How are they related?
Student has a correct solution.
• How did you reach your conclusion?
• Explain how the Triangle interactive helped you reach a conclusion.
• If you keep the height and base constant and move the vertex parallel to the base, the area remains constant.
• If you keep the base constant and increase (or decrease) the height, then the area increases (or decreases) as well.
• Facts will vary.
# Explore the Area of Triangles
The formula for the area of a triangle is
area = $\frac{1}{2}$ • base • height, or A = $\frac{1}{2}$bh
Use the Triangle interactive to explore the area of a triangle. Move the vertices of the triangle and explore what happens to the area.
• What happens if you keep the height and base constant and move the vertex parallel to the base?
• What happens if you keep the base constant and change the height?
• Try to discover one more interesting fact about a triangle and its area that you can share with the class.
INTERACTIVE: Triangle
## Hint:
• How does knowing the formula for the area of a parallelogram help you understand the formula for the area of a triangle?
• There are two variables, base and height, that determine the area of a triangle. A triangle also has angle measures and side lengths for the two “non-base” sides. Try experimenting with all of these measures.
# Preparing for Ways of Thinking
As students work on the problems, look for examples to share in the Ways of Thinking discussion:
• Students who understand that if the height and base are constant and the vertex moves in a parallel line, the area remains constant
• Students who recognize that if one variable is constant, increasing (or decreasing) the other variable also increases (or decreases) the area
• Students who recognize that if the base (or height) increases (or decreases) by a factor, then the area increases (or decreases) by the same factor
• Students who do not see a relationship between height, base, and area
• Students who recognize the relationship between the formula for the area of a parallelogram and the area of a triangle
# Challenge Problem
• As the vertex of the triangle slides along the line, the area will stay the same.
• Possible answer: The base of the triangle is always the same. Because the vertex stays on a line that is parallel to the base, the height will always be the same too. Because the area of the triangle depends only on the base and the height, the area will not change.
# Prepare a Presentation
• Select one of your conclusions about what happens to the area of a triangle when you change one or more variables.
• Be prepared to demonstrate your conclusion using the Triangle interactive, and to support your thinking mathematically.
# Challenge Problem
Suppose the base of a triangle lies on one of two parallel lines, and the vertex opposite the base lies on the other parallel line.
• If you slide the vertex along the line, what do you think will happen to the area of the triangle? Use the Triangle interactive to test your prediction.
INTERACTIVE: Triangle
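One way to verify the prediction outside the interactive is a quick numerical check (a minimal Python sketch; the specific coordinates are just an example): compute the area with the shoelace formula while the apex slides along a line parallel to the base.

```python
def triangle_area(p1, p2, p3):
    """Shoelace formula: area of a triangle from its vertex coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

base_a, base_b = (0, 0), (6, 0)   # base of length 6 on the line y = 0
for slide in range(-3, 4):
    apex = (slide, 4)             # apex stays on the parallel line y = 4
    print(apex, triangle_area(base_a, base_b, apex))
# Every position of the apex gives area 12 = (1/2) * 6 * 4.
```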
# Mathematics
Have students share their work. Be sure to show the work of students who had trouble and those who developed incorrect conclusions, as all students can benefit from the discussion. Use the Triangle interactive to test the statements that students generated to verify them.
Have students who did the Challenge Problem share their thinking. Ask class members to critique whether their reasoning makes sense.
ELL: As with other discussions, encourage ELL students to use the academic vocabulary they have learned. Introduce new vocabulary as needed. As they participate in the discussion, be sure to monitor for knowledge of the topic.
# Make Connections
• Take notes about your classmates’ conclusions concerning what happens to the area of a triangle when you change one or more variables.
## Hint:
• What surprised you in your exploration of the area of a triangle?
• How do your conclusions about the area of a triangle compare with those of other presenters?
# Lesson Guide
Have students work on this problem on their own.
# Mathematics
As you review the answers to the problem, encourage students to share their solution methods. Identify the following ways of thinking and correct any misconceptions:
• Students who correctly use the area formulas to calculate the areas
• Students who do not use the formula for the area of a trapezoid for the first figure, but instead, find the area of the triangle and the area of the rectangle and add the two areas
• [common error] Students who use a base of 5 in., instead of 15 in., when calculating the area of the trapezoid (i.e., fail to add 5 in. + 10 in. to find the length of the longer base)
• The area of the trapezoid is 100 in².
# Area of Trapezoid
• Find the area of this trapezoid.
# Lesson Guide
Have students work on this problem on their own.
# Mathematics
As you review the answers to these problems, encourage students to share their solution methods. Identify the following ways of thinking and correct any misconceptions:
• [common error] Students who do not use the correct base to find the area of the triangle
• Students who do not label their answers using square units
• The area of the triangle is 17.2961 cm².
# Area of Triangle
• Find the area of this triangle.
# Mathematics
Have pairs quietly discuss how they can find the area of a parallelogram, a trapezoid, and a triangle if they know the formula for the area of a rectangle.
As student pairs work together, listen for students who may still have misconceptions so you can address them in the class discussion.
After a few minutes, discuss the summary as a class. Review the following points:
• You can move parts of a parallelogram around to make a rectangle. Once you have formed a rectangle, you can find its area. The formula for the area of a parallelogram is A = bh, where b is the base and h is the height.
• You can make a copy of a trapezoid, put the two trapezoids together to make a parallelogram, find the area of the parallelogram, and take half of that area to get the area of the original trapezoid. The formula for the area of a trapezoid is A = $\frac{1}{2}$(b1 + b2)h, where b1 is one base, b2 is the other base, and h is the height.
• You can copy a triangle and put the triangle and its copy together to form a parallelogram. The area of the triangle is half the area of the parallelogram. The formula for the area of a triangle is A = $\frac{1}{2}$bh.
ELL: Write the key points on a poster so that students can refer back to them throughout the module. When working with ELLs, provide supplementary materials, such as graphic organizers to illustrate new concepts and vocabulary necessary for mathematical learning. Have students record all information in their Notebook.
# Area Formulas
• The area of a rectangle is equal to its base times its height.
A = bh
• The area of a parallelogram is equal to its base times its height.
A = bh
• The area of a trapezoid is equal to one half times the sum of the bases times the height.
A = $\frac{1}{2}$(b1 + b2)h
• The area of a triangle is equal to one half the base times the height.
A = $\frac{1}{2}$bh
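For quickly checking student answers, the four formulas can be written directly as code (a minimal Python sketch; the function names are our own):

```python
def rectangle_area(base, height):
    return base * height

def parallelogram_area(base, height):
    return base * height

def trapezoid_area(base1, base2, height):
    return 0.5 * (base1 + base2) * height

def triangle_area(base, height):
    return 0.5 * base * height

# A triangle with base 6 and height 4 has half the area of the
# parallelogram formed from two copies of it:
print(parallelogram_area(6, 4))  # 24.0
print(triangle_area(6, 4))       # 12.0
```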
## Hint:
Can you:
• Calculate the area of a triangle, parallelogram, or trapezoid given the values of the base(s) and height?
• Calculate the height of a triangle, parallelogram, or trapezoid given the values of the base(s) and area?
|
Yes, log_simplify:
sage: e = log(x*y) ; e
log(x*y)
sage: f = e.log_expand() ; f
log(x) + log(y)
sage: f.log_simplify()
log(x*y)
Note that, given a Sage object e, you can see all the methods that can be applied to it by typing e.<TAB_BUTTON>, and the methods whose names start with blah by typing e.blah<TAB_BUTTON>. In particular, you could have discovered both methods by typing:
sage: e.log<TAB_BUTTON>
|
# The two nearest harmonics of a tube closed at one end and open at the other end are 220 Hz and 260 Hz. What is the fundamental frequency of the system? Option 1) 10 Hz Option 2) 20 Hz Option 3) 30 Hz Option 4) 40 Hz
As discussed,
The frequencies supported by a pipe closed at one end and open at the other are -
$\nu = (2n-1)\cdot \frac{V}{4l}$
$n= 1,2,3.......$
- wherein
$V=$ velocity of sound wave
$l=$ length of pipe
$n=$ harmonic index (only odd multiples of the fundamental are supported)
The two nearest harmonics of an organ pipe closed at one end therefore differ by twice its fundamental frequency $\nu_1=\frac{V}{4l}$
$\therefore 260-220=2\nu_1$
$\nu_1=20Hz$
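A quick numerical check of this reasoning (an illustrative sketch; the variable names are ours): both given frequencies must be odd multiples of the fundamental of a closed pipe.

```python
f1, f2 = 220, 260                # two nearest harmonics (Hz)
fundamental = (f2 - f1) / 2      # adjacent odd harmonics differ by 2 * fundamental
print(fundamental)               # 20.0 Hz
print(f1 / fundamental, f2 / fundamental)  # 11.0 and 13.0 -- both odd, as required
```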
Option 1)
10 Hz
Incorrect option
Option 2)
20 Hz
Correct option
Option 3)
30 Hz
Incorrect option
Option 4)
40 Hz
Incorrect option
|
# $n$-product-periodic groups

We call an (infinite) group $G$ $n$-product-periodic for an integer $n \geq 3$ if $\prod_{i=1}^{n} G \cong G$ but for all integers $k$ with $2 \leq k \leq n-1$ we have $\prod_{i=1}^{k} G \not\cong G$.

Is there an integer $n \geq 3$ such that an $n$-product-periodic group exists, and is there an integer $m \geq 3$ such that no $m$-product-periodic group exists?
|
The FTC and the Chain Rule. Questions involving the chain rule will appear on homework, at least one Term Test and on the Final Exam. The chain rule states dy dx = dy du × du dx In what follows it will be convenient to reverse the order of the terms on the right: dy dx = du dx × dy du which, in terms of f and g we can write as dy dx = d dx (g(x))× d du (f(g((x))) This gives us a simple technique which, with … Let f(x)=6x+3 and g(x)=−2x+5. This looks messy, but we do now have something that looks like the result of the chain rule: the function 1 − x2 has been substituted into −(1/2)(1 − x) √ x, and the derivative Chain Rule: The General Power Rule The general power rule is a special case of the chain rule. In other words, we want to compute lim h→0 f(g(x+h))−f(g(x)) h. How to apply the quotient property of natural logs to solve the separate logarithms and take the derivatives of the parts using chain rule and sum rule. Most problems are average. It is useful when finding the derivative of a function that is raised to the nth power. The chain rule gives us that the derivative of h is . Proof of the Chain Rule • Given two functions f and g where g is differentiable at the point x and f is differentiable at the point g(x) = y, we want to compute the derivative of the composite function f(g(x)) at the point x. 2. To calculate the decrease in air temperature per hour that the climber experie… Multivariable Differential Calculus Chapter 3. If you haven't already done so, sign in to the Azure portal. This detection is enabled by default in Azure Sentinel. (use the product rule and the quotient rule for derivatives) Find the derivative of a function : (use the chain rule for derivatives) Find the first, the second and the third derivative of a function : You might be also interested in: For example, if a composite function f (x) is defined as 13) Give a function that requires three applications of the chain rule to differentiate. One Time Payment $10.99 USD for 2 months: Weekly Subscription$1.99 USD per week until cancelled: Monthly Subscription $4.99 USD per month until cancelled: Annual Subscription$29.99 USD per year until cancelled \$29.99 USD per year until cancelled Now that we know how to use the chain, rule, let's see why it works. ©T M2G0j1f3 F XKTuvt3a n iS po Qf2t9wOaRrte m HLNL4CF. The general power rule states that this derivative is n times the function raised to the (n-1)th power times the derivative of the function. If z is a function of y and y is a function of x, then the derivative of z with respect to x can be written \frac{dz}{dx} = \frac{dz}{dy}\frac{dy}{dx}. Navigate to Azure Sentinel > Configuration > Analytics 3. The general power rule states that this derivative is n times the function raised to the (n-1)th power times the derivative of the function. You will also see chain rule in MAT 244 (Ordinary Differential Equations) and APM 346 (Partial Differential Equations). Show Ads. The Chain Rule also has theoretic use, giving us insight into the behavior of certain constructions (as we'll see in the next section). Then, y is a composite function of x; this function is denoted by f g. • In multivariable calculus, you will see bushier trees and more complicated forms of the Chain Rule where you add products of derivatives along paths, Suppose that a mountain climber ascends at a rate of 0.5 k m h {\displaystyle 0.5{\frac {km}{h}}} . It performs the role of the chain rule in a stochastic setting, analogous to the chain rule in ordinary differential calculus. THE CHAIN RULE. 
By Mark Ryan The chain rule is probably the trickiest among the advanced derivative rules, but it’s really not that bad if you focus clearly on what’s going on. Solution for By using the multivariable chain rule, compute each of the following deriva- tives. Most of the job seekers finding it hard to clear Chain Rule test or get stuck on any particular question, our Chain Rule test sections will help you to success in Exams as well as Interviews. Click the down arrow to the right of any rule to edit, copy, delete, or move a rule. Moveover, in this case, if we calculate h(x),h(x)=f(g(x))=f(−2x+5)=6(−2x+5)+3=−12x+30+3=−12… (a) dz/dt and dz/dt|t=v2n? Use the chain rule to calculate h′(x), where h(x)=f(g(x)). To acquire clear understanding of Chain Rule, exercise these advanced Chain Rule questions with answers. Using the point-slope form of a line, an equation of this tangent line is or . Hide Ads About Ads. Welcome to advancedhighermaths.co.uk A sound understanding of the Chain Rule is essential to ensure exam success. Click HERE to return to the list of problems. The chain rule is a rule for differentiating compositions of functions. Integration. Ito's Lemma is a key component in the Ito Calculus, used to determine the derivative of a time-dependent function of a stochastic process. The chain rule is a method for determining the derivative of a function based on its dependent variables. Advanced. You must use the Chain rule to find the derivative of any function that is comprised of one function inside of another function. A few are somewhat challenging. The Sudoku Assistant uses several techniques to solve a Sudoku puzzle: cross-hatch scanning, row/column range checking, subset elimination, grid analysis,and what I'm calling 3D Medusa analysis, including bent naked subsets, almost-locked set analysis. Integration can be used to find areas, volumes, central points and many useful things. Welcome to highermathematics.co.uk A sound understanding of the Chain Rule is essential to ensure exam success. We use the product rule when differentiating two functions multiplied together, like f(x)g(x) in general. From change in x to change in y State the chain rules for one or two independent variables. Since the functions were linear, this example was trivial. But it is often used to find the area underneath the graph of a function like this: ... Use the Sum Rule: Check the STATUScolumn to confirm whether this detection is enabled … Chain Rule Click the file to download the set of four task cards as represented in the overview above. Advanced Calculus of Several Variables (1973) Part II. First recall the definition of derivative: f ′ (x) = lim h → 0f(x + h) − f(x) h = lim Δx → 0Δf Δx, where Δf = f(x + h) − f(x) is the change in f(x) (the rise) and Δx = h is the change in x (the run). You can't copy or move rules to another page in the survey. Chain Rule: Version 2 Composition of Functions. Perform implicit differentiation of a function of two or more variables. We demonstrate this in the next example. Even though we had to evaluate f′ at g(x)=−2x+5, that didn't make a difference since f′=6 not matter what its input is. Use tree diagrams as an aid to understanding the chain rule for several independent and intermediate variables. To view or edit an existing rule: Click the advanced branching icon « at the top of a page to view or edit the rules applied to that page. Integration Rules. Problem 2. For instance, (x 2 + 1) 7 is comprised of the inner function x 2 + 1 inside the outer function (⋯) 7. 
(Section 3.6: Chain Rule, continued.) We can think of y as a function of u which, in turn, is a function of x, and the chain rule then lets us combine several rates of change to find another rate of change. Suppose that a mountain climber ascends at a rate of 0.5 km/h, and that the temperature is lower at higher elevations: suppose the rate by which it decreases is 6 °C per kilometer. To calculate the decrease in air temperature per hour that the climber experiences, chain the two rates together:

dT/dt = (dT/dh) × (dh/dt) = (−6 °C/km) × (0.5 km/h) = −3 °C/h.

More generally, if z is a function of y and y is a function of x, then the derivative of z with respect to x can be written dz/dx = (dz/dy)(dy/dx). In multivariable calculus you will state the chain rules for one or two independent variables, use tree diagrams as an aid to understanding the chain rule for several independent and intermediate variables, and perform implicit differentiation of functions of two or more variables: the trees get bushier, and you add products of derivatives along paths. You will also see the chain rule again in MAT 244 (Ordinary Differential Equations) and APM 346 (Partial Differential Equations). In a stochastic setting the same role is played by Itô's lemma, a key component of the Itô calculus used to determine the derivative of a time-dependent function of a stochastic process: it performs the role of the chain rule in a stochastic setting, analogous to the chain rule in ordinary differential calculus.

The chain rule also appears in reverse when integrating by substitution. Consider ∫ x³ √(1 − x²) dx. Some clever rearrangement reveals that it is

∫ (−2x) · (−1/2)(1 − (1 − x²)) · √(1 − x²) dx.

This looks messy, but we do now have something that looks like the result of the chain rule: the function 1 − x² has been substituted into −(1/2)(1 − x)√x, and its derivative −2x appears as a factor, so the substitution u = 1 − x² evaluates the integral. Test questions on this material may also involve additional topics that we have not yet studied, such as higher-order derivatives.
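And a symbolic version of the climber computation (again my own sketch; the 20 °C base temperature is an assumed value that does not affect the derivative):

```python
# Chain rule as a composition of rates: T depends on height h,
# and h depends on time t.
import sympy as sp

t = sp.symbols('t')
h = sp.Rational(1, 2) * t    # ascent at 0.5 km per hour
T = 20 - 6 * h               # temperature drops 6 degrees C per kilometer

print(sp.diff(T, t))         # -3: the climber cools 3 degrees C per hour
```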
# nLab frame of opens
Given a topological space $X$, the open subspaces of $X$ form a poset which is in fact a frame. This is the frame of open subspaces of $X$. When thought of as a locale, this is the topological locale $\Omega(X)$. When thought of as a category, this is the category of open subsets of $X$.
Similarly, given a locale $X$, the open subspaces of $X$ form a poset which is in fact a frame. This is the frame of open subspaces of $X$. When thought of as a locale, this is simply $X$ all over again. When thought of as a category, this is a site whose topos of sheaves is a localic topos.
The frame of open subsets of the point is given by the power set of a singleton, or more generally by the object of truth values of the ambient topos.
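As a concrete illustration (my own, not part of the nLab page), one can check the defining frame law — finite meets distribute over arbitrary joins — for the opens of a small finite space such as the Sierpinski space:

```python
# Opens of the Sierpinski space {0, 1}: {}, {1}, {0, 1}.
# Meets are intersections, joins are unions; we check that
# u ∧ (∨ S) = ∨ { u ∧ s : s ∈ S } for every open u and family S.
from itertools import combinations

X = frozenset({0, 1})
opens = [frozenset(), frozenset({1}), X]

def meet(u, v):
    return u & v

def join(sets):
    return frozenset().union(*sets)

for u in opens:
    for r in range(len(opens) + 1):
        for S in combinations(opens, r):
            assert meet(u, join(S)) == join([meet(u, s) for s in S])
print("frame distributive law holds")
```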
# Eccentricity of a hyperbola in terms of a and b

In this article we study the hyperbola as a conic section: its standard equation, its eccentricity in terms of the semi-axes a and b, its asymptotes, its conjugate hyperbola, and its parametric equations.

## Sections of a cone

Let l be a fixed vertical line and m be another line intersecting it at a fixed point V and inclined to it at an angle α. Suppose we rotate the line m around the line l in such a way that the angle α remains constant. Then the surface generated is a double-napped right circular hollow cone. The conic sections — circle, ellipse, parabola and hyperbola — are the curves obtained by intersecting this cone with a plane.

## Definition

A hyperbola is the set of all points in a plane, the difference of whose distances from two fixed points in the plane is a constant. 'Difference' means the distance to the farther point minus the distance to the closer point. The two fixed points are the foci, and the mid-point of the line segment joining the foci is the center of the hyperbola. With the center at the origin and the foci at (±c, 0) on the x-axis, the equation takes the standard form

x²/a² − y²/b² = 1,

where a is the distance from the center to each vertex (the semi-transverse axis), b is the semi-conjugate axis, and c² = a² + b².
## Eccentricity

The eccentricity (usually shown as the letter e) measures how much the curve deviates from being a circle. Let P be a point on the curve, F the focus, and N the point on the directrix such that PN is perpendicular to the directrix. The eccentricity is the ratio PF/PN, and for the hyperbola above it has the formula

e = c/a = √(a² + b²)/a,

which is always greater than 1. Eccentricity classifies the conic sections: a conic with an eccentricity of 0 is a circle, an eccentricity less than 1 indicates an ellipse, an eccentricity of 1 indicates a parabola, and an eccentricity greater than 1 indicates a hyperbola. The eccentricity ranges from 0 to infinity, and the greater the eccentricity, the less the conic section resembles a circle.

One caution when computing e: the denominator must be the distance from the center to the vertices, i.e. the semi-axis along the transverse axis. For a hyperbola with a vertical transverse axis, written x²/a² − y²/b² = −1, the vertices lie at (0, ±b), so that distance is given by b and the eccentricity is e = √(a² + b²)/b. The other semi-axis isn't off the hook, though, because we still need it to find the focal distance f, via f² = a² + b². A worked computational sketch follows.
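A small computational sketch (my own; the helper name is made up for illustration) collecting the quantities discussed so far for a horizontal hyperbola x²/a² − y²/b² = 1:

```python
# Eccentricity, foci and asymptotes of x^2/a^2 - y^2/b^2 = 1.
import math

def hyperbola_properties(a: float, b: float) -> dict:
    c = math.hypot(a, b)             # c^2 = a^2 + b^2
    return {
        "eccentricity": c / a,       # e = sqrt(a^2 + b^2) / a > 1
        "foci": ((-c, 0.0), (c, 0.0)),
        "asymptote_slopes": (b / a, -b / a),
    }

print(hyperbola_properties(3.0, 4.0))
# {'eccentricity': 1.666..., 'foci': ((-5.0, 0.0), (5.0, 0.0)),
#  'asymptote_slopes': (1.333..., -1.333...)}
```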
## Semi-major and semi-minor axes

In an ellipse, the semi-major axis is the geometric mean of the distance from the center to either focus and the distance from the center to either directrix. The semi-minor axis of an ellipse runs from the center of the ellipse (a point halfway between and on the line running between the foci) to the edge of the ellipse, and is half of the minor axis. The ellipse is the closed conic section (an oblique cross-section of a circular cylinder is also an ellipse). For the hyperbola the same two semi-axes a and b appear, but they are related to the focal distance by c² = a² + b² rather than by the ellipse's c² = a² − b², which is why the hyperbola's eccentricity exceeds 1 while the ellipse's lies below 1.

## The standard form with center (h, k)

The equation of a horizontal hyperbola in standard form is

(x − h)²/a² − (y − k)²/b² = 1,

where the center has coordinates (h, k), the vertices are located at (h ± a, k), and the coordinates of the foci are (h ± c, k), where c² = a² + b². In this form you can determine h, k, a, and b, which allows one to find the center, vertices, and asymptotes of the hyperbola, and hence to graph it. If instead you are given the vertices or foci, the center can be recovered with the midpoint formula; if the transverse axis turns out to be vertical, the roles of the two semi-axes swap accordingly, and the slopes of the asymptotes determine the remaining parameter.
## Asymptotes

A hyperbola has two asymptotes, which form an X through its center: the curve always gets closer to them but never touches them. If the transverse axis of the hyperbola is horizontal, the slopes of the asymptotes are +b/a and −b/a.

The asymptotes determine the eccentricity. Since e = √(a² + b²)/a = √(1 + (b/a)²), knowing the slopes ±b/a gives the eccentricity directly. This gives us a couple of ways to recover the eccentricity from the asymptotes: one is to plug the two slopes into the formula for the tangent of the difference of two angles to get tan 2θ, the tangent of the angle between the two asymptotes, and then use the half-angle formula for the tangent to recover tan θ = b/a. More generally, when the conic is given in general quadratic form, the eccentricity can be found from the parameters of the quadratic form — provided the conic is not a parabola (which has eccentricity exactly 1) and not degenerate — and the center (x_c, y_c) of the hyperbola may be determined from similar formulae.

A classical problem (posed on Mathematics Stack Exchange): the normal to the hyperbola x²/a² − y²/b² = 1 drawn at an extremity of its latus rectum is parallel to an asymptote; show that the eccentricity satisfies e² = (1 + √5)/2. A sketch of the computation follows.
## Conjugate hyperbola and parametric equations

The conjugate hyperbola of the hyperbola x²/a² − y²/b² = 1 is x²/a² − y²/b² = −1. Its transverse and conjugate axes are along the y- and x-axes respectively. Any point on the conjugate hyperbola is of the form (a tanθ, b secθ), while any point on the original hyperbola is of the form (a secθ, b tanθ). The conjugate of the rectangular hyperbola xy = c² is xy = −c².

The parametric form can also be obtained by a classical construction: circles of radii a and b centered at the origin are intersected by an arbitrary line through the origin at points M and N; tangents to the circles at M and N meet the x-axis at R and S, and on the perpendicular through S to the x-axis one marks the segment SP of length MR to obtain a point P of the hyperbola. One can verify directly, from sec²θ − tan²θ = 1, that every point (a secθ, b tanθ) satisfies the equation of the hyperbola — a quick symbolic check closes this article. Starting from the focal definition, one can likewise prove that the equation of the hyperbola with center (0, 0) and transverse axis along the x-axis is x²/a² − y²/b² = 1. In polar coordinates the same classification applies: given the eccentricity and one other characteristic, one can write the polar equation of a conic and determine whether it is an ellipse, a parabola, or a hyperbola.

## Hyperbolic trajectories

In astrodynamics or celestial mechanics, a hyperbolic trajectory is the trajectory of any object around a central body with more than enough speed to escape the central object's gravitational pull. The name derives from the fact that, according to Newtonian theory, such an orbit has the shape of a hyperbola. In more technical terms, the condition is that the orbital eccentricity is greater than 1. The orbital eccentricity of an astronomical object is a dimensionless parameter that determines the amount by which its orbit around another body deviates from a perfect circle: a value of 0 is a circular orbit, values between 0 and 1 form an elliptic orbit, 1 is a parabolic escape orbit, and greater than 1 is a hyperbolic one.
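Finally, the promised symbolic check (my own illustration, assuming SymPy) that the parametric point (a secθ, b tanθ) satisfies the equation of the hyperbola:

```python
# sec^2(theta) - tan^2(theta) = 1, so the parametric point lies on the curve.
import sympy as sp

a, b, theta = sp.symbols('a b theta', positive=True)
x = a * sp.sec(theta)
y = b * sp.tan(theta)

print(sp.trigsimp(x**2 / a**2 - y**2 / b**2))   # 1
```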
# Sine wave

The graphs of the sine and cosine functions are sinusoids of different phases.

The oscillation of an undamped spring-mass system around the equilibrium is a sine wave.

The sine wave or sinusoid is a function that occurs often in mathematics, physics, signal processing, audition, electrical engineering, and many other fields. Its most basic form is:
$y (t) = A \cdot \sin(\omega t + \theta)$
which describes a wavelike function of time (t) with:
• peak deviation from center = A (aka amplitude)
• angular frequency $\omega\,$ (radians per second)
• phase = θ
• When the phase is non-zero, the entire waveform appears to be shifted in time by the amount θ/ω seconds. A negative value represents a delay, and a positive value represents a "head-start".
The sine wave is important in physics because it retains its waveshape when added to another sine wave of the same frequency and arbitrary phase. It is the only periodic waveform that has this property. This property leads to its importance in Fourier analysis and makes it acoustically unique.
## General form
In general, the function may also have:
• a spatial dimension, x (aka position), with frequency k (also called wavenumber)
• a non-zero center amplitude, D (also called DC offset)
which looks like this:
$y(t) = A\cdot \sin(\omega t - kx + \theta) + D.\,$
The wavenumber is related to the angular frequency by:
$k = { \omega \over c } = { 2 \pi f \over c } = { 2 \pi \over \lambda }$
where λ is the wavelength, f is the frequency, and c is the speed of propagation.
This equation gives a sine wave for a single dimension, thus the generalized equation given above gives the amplitude of the wave at a position x at time t along a single line. This could, for example, be considered the value of a wave along a wire.
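As a quick illustration (my own, not part of the original article; NumPy is assumed), here is the general form sampled at a fixed position x:

```python
# y(t) = A*sin(omega*t - k*x + theta) + D, sampled over 10 ms.
import numpy as np

A, theta, D = 1.0, 0.0, 0.0          # amplitude, phase, DC offset (assumed values)
omega = 2 * np.pi * 440.0            # a 440 Hz tone
k, x = 0.0, 0.0                      # fixed position, so the kx term drops out

t = np.linspace(0.0, 0.01, 1000)
y = A * np.sin(omega * t - k * x + theta) + D
print(y.min(), y.max())              # peak deviation of about ±A around D
```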
In two or three spatial dimensions, the same equation describes a travelling plane wave if position x and wavenumber k are interpreted as vectors, and their product as a dot product. For more complex waves such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed.
## Occurrences
This wave pattern occurs often in nature, including ocean waves, sound waves, and light waves. Also, a rough sinusoidal pattern can be seen in plotting average daily temperatures for each day of the year, although the graph may resemble an inverted cosine wave.
Graphing the voltage of an alternating current gives a sine wave pattern. In fact, graphing the voltage of a direct-current full-wave rectification system gives an absolute-value sine wave pattern, where the wave stays on the positive side of the x-axis.
A cosine wave is said to be "sinusoidal", because cos(x) = sin(x + π / 2), which is also a sine wave with a phase-shift of π/2. Because of this "head start", it is often said that the cosine function leads the sine function or the sine lags the cosine.
Any non-sinusoidal waveform, such as a square wave or the irregular sound waves made by human speech, can be represented as a collection of sinusoidal waves of different periods and frequencies blended together. The technique of transforming a complex waveform into its sinusoidal components is called Fourier analysis.
The human ear can recognize single sine waves because sounds with such a waveform sound "clean" or "clear" to humans; some sounds that approximate a pure sine wave are whistling, a crystal glass set to vibrate by running a wet finger around its rim, and the sound made by a tuning fork.
To the human ear, a sound that is made up of more than one sine wave will either sound "noisy" or will have detectable harmonics; this may be described as a different timbre.
## Fourier series
In 1822, Joseph Fourier, a French mathematician, discovered that sinusoidal waves can be used as simple building blocks to 'make up' and describe nearly any periodic waveform. The process is named Fourier analysis, which is a useful analytical tool in the study of waves, heat flow, many other scientific fields, and signal processing theory. See also Fourier series and Fourier transform.
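A minimal sketch of Fourier's idea (my own, assuming NumPy): summing the standard odd-harmonic sine series (4/π)·Σ sin((2n−1)ωt)/(2n−1) approximates a square wave.

```python
import numpy as np

omega = 2 * np.pi                   # 1 Hz fundamental
t = np.linspace(0.0, 2.0, 2000)

square = np.zeros_like(t)
for n in range(1, 50):              # the first 49 odd harmonics
    m = 2 * n - 1
    square += (4 / np.pi) * np.sin(m * omega * t) / m

print(square.min(), square.max())   # close to ±1, with Gibbs overshoot at the jumps
```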
Wavefunctions as gravitational waves
This is the paper I always wanted to write. It is there now, and I think it is good – and that's an understatement. 🙂 It is probably best to download it as a pdf-file from the viXra.org site because this was a rather fast 'copy and paste' job from the Word version of the paper, so there may be issues with boldface notation (vector notation), italics and, most importantly, with formulas – which I, sadly, have to 'snip' into this WordPress blog, as they don't have an easy copy function for mathematical formulas.
It’s great stuff. If you have been following my blog – and many of you have – you will want to digest this. 🙂
Abstract: This paper explores the implications of associating the components of the wavefunction with a physical dimension: force per unit mass – which is, of course, the dimension of acceleration (m/s²) and gravitational fields. The classical electromagnetic field equations for energy densities, the Poynting vector and spin angular momentum are then re-derived by substituting the electromagnetic N/C unit of field strength (force per unit charge) by the new N/kg = m/s² dimension.
The results are elegant and insightful. For example, the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities, which establishes a physical normalization condition. Also, Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy, and the wavefunction itself can be interpreted as a propagating gravitational wave. Finally, as an added bonus, concepts such as the Compton scattering radius for a particle, spin angular momentum, and the boson-fermion dichotomy, can also be explained more intuitively.
While the approach offers a physical interpretation of the wavefunction, the author argues that the core of the Copenhagen interpretation revolves around the complementarity principle, which remains unchallenged because the interpretation of amplitude waves as traveling fields does not explain the particle nature of matter.
Introduction
This is not another introduction to quantum mechanics. We assume the reader is already familiar with the key principles and, importantly, with the basic math. We offer an interpretation of wave mechanics. As such, we do not challenge the complementarity principle: the physical interpretation of the wavefunction that is offered here explains the wave nature of matter only. It explains diffraction and interference of amplitudes but it does not explain why a particle will hit the detector not as a wave but as a particle. Hence, the Copenhagen interpretation of the wavefunction remains relevant: we just push its boundaries.
The basic ideas in this paper stem from a simple observation: the geometric similarity between the quantum-mechanical wavefunctions and electromagnetic waves is remarkable. The components of both waves are orthogonal to the direction of propagation and to each other. Only the relative phase differs: the electric and magnetic field vectors (E and B) have the same phase. In contrast, the phase of the real and imaginary part of the (elementary) wavefunction (ψ = a·e−i∙θ = a∙cosθ − i·a∙sinθ) differ by 90 degrees (π/2).[1] Pursuing the analogy, we explore the following question: if the oscillating electric and magnetic field vectors of an electromagnetic wave carry the energy that one associates with the wave, can we analyze the real and imaginary part of the wavefunction in a similar way?
We show the answer is positive and remarkably straightforward. If the physical dimension of the electromagnetic field is expressed in newton per coulomb (force per unit charge), then the physical dimension of the components of the wavefunction may be associated with force per unit mass (newton per kg).[2] Of course, force over some distance is energy. The question then becomes: what is the energy concept here? Kinetic? Potential? Both?
The similarity between the energy of a (one-dimensional) linear oscillator (E = m·a²·ω²/2) and Einstein's relativistic energy equation E = m∙c² inspires us to interpret the energy as a two-dimensional oscillation of mass. To assist the reader, we construct a two-piston engine metaphor.[3] We then adapt the formula for the electromagnetic energy density to calculate the energy densities for the wave function. The results are elegant and intuitive: the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities. Schrödinger's wave equation may then, effectively, be interpreted as a diffusion equation for energy itself.
As an added bonus, concepts such as the Compton scattering radius for a particle and spin angular momentum, as well as the boson-fermion dichotomy, can be explained in a fully intuitive way.[4]
Of course, such interpretation is also an interpretation of the wavefunction itself, and the immediate reaction of the reader is predictable: the electric and magnetic field vectors are, somehow, to be looked at as real vectors. In contrast, the real and imaginary components of the wavefunction are not. However, this objection needs to be phrased more carefully. First, it may be noted that, in a classical analysis, the magnetic force is a pseudovector itself.[5] Second, a suitable choice of coordinates may make quantum-mechanical rotation matrices irrelevant.[6]
Therefore, the author is of the opinion that this little paper may provide some fresh perspective on the question, thereby further exploring Einstein’s basic sentiment in regard to quantum mechanics, which may be summarized as follows: there must be some physical explanation for the calculated probabilities.[7]
We will, therefore, start with Einstein's relativistic energy equation (E = mc²) and wonder what it could possibly tell us.
I. Energy as a two-dimensional oscillation of mass
The structural similarity between the relativistic energy formula, the formula for the total energy of an oscillator, and the kinetic energy of a moving body, is striking:
1. E = mc²
2. E = m·a²·ω²/2
3. E = m·v²/2
In these formulas, ω, v and c all describe some velocity.[8] Of course, there is the 1/2 factor in the E = m·a²·ω²/2 formula[9], but that is exactly the point we are going to explore here: can we think of an oscillation in two dimensions, so it stores an amount of energy that is equal to E = 2·m·a²·ω²/2 = m·a²·ω²?
That is easy enough. Think, for example, of a V-2 engine with the pistons at a 90-degree angle, as illustrated below. The 90° angle makes it possible to perfectly balance the counterweight and the pistons, thereby ensuring smooth travel at all times. With permanently closed valves, the air inside the cylinder compresses and decompresses as the pistons move up and down and provides, therefore, a restoring force. As such, it will store potential energy, just like a spring, and the motion of the pistons will also reflect that of a mass on a spring. Hence, we can describe it by a sinusoidal function, with the zero point at the center of each cylinder. We can, therefore, think of the moving pistons as harmonic oscillators, just like mechanical springs.
Figure 1: Oscillations in two dimensions
If we assume there is no friction, we have a perpetuum mobile here. The compressed air and the rotating counterweight (which, combined with the crankshaft, acts as a flywheel[10]) store the potential energy. The moving masses of the pistons store the kinetic energy of the system.[11]
At this point, it is probably good to quickly review the relevant math. If the magnitude of the oscillation is equal to a, then the motion of the piston (or the mass on a spring) will be described by x = a·cos(ω·t + Δ).[12] Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and –π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t–π/2) = a·sin(ω·t).
The kinetic and potential energy of one oscillator (think of one piston or one spring only) can then be calculated as:
1. K.E. = T = m·v²/2 = (1/2)·m·ω²·a²·sin²(ω·t + Δ)
2. P.E. = U = k·x²/2 = (1/2)·k·a²·cos²(ω·t + Δ)
The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. From the dynamics involved, it is obvious that k must be equal to m·ω². Hence, the total energy is equal to:
E = T + U = (1/2)·m·ω²·a²·[sin²(ω·t + Δ) + cos²(ω·t + Δ)] = m·a²·ω²/2
To facilitate the calculations, we will briefly assume k = m·ω² and a are equal to 1. The motion of our first oscillator is given by the cos(ω·t) = cosθ function (θ = ω·t), and its kinetic energy will be equal to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be equal to:
d(sin²θ)/dθ = 2∙sinθ∙d(sinθ)/dθ = 2∙sinθ∙cosθ
Let us look at the second oscillator now. Just think of the second piston going up and down in the V-2 engine. Its motion is given by the sinθ function, which is equal to cos(θ−π/2). Hence, its kinetic energy is equal to sin²(θ−π/2), and how it changes – as a function of θ – will be equal to:
2∙sin(θ−π/2)∙cos(θ−π/2) = −2∙cosθ∙sinθ = −2∙sinθ∙cosθ
We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the crankshaft will rotate with a constant angular velocity: linear motion becomes circular motion, and vice versa, and the total energy that is stored in the system is T + U = m·a²·ω².
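In the simplified units used above (k = m·ω² = 1 and a = 1), the claim that kinetic energy merely sloshes back and forth between the two pistons is easy to check numerically (my own sketch, assuming NumPy):

```python
# K.E. of piston 1 is sin^2(theta); K.E. of piston 2 is sin^2(theta - pi/2)
# = cos^2(theta). Their sum is constant: what one piston loses, the other gains.
import numpy as np

theta = np.linspace(0.0, 4 * np.pi, 1000)
ke_1 = np.sin(theta) ** 2
ke_2 = np.cos(theta) ** 2

print(np.allclose(ke_1 + ke_2, 1.0))   # True
```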
We have a great metaphor here. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle. We know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? Should we think of the c in our E = mc² formula as an angular velocity?
These are sensible questions. Let us explore them.
II. The wavefunction as a two-dimensional oscillation
The elementary wavefunction is written as:
ψ = a·e^(−i[E·t − p∙x]/ħ) = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)
When considering a particle at rest (p = 0) this reduces to:
ψ = a·e^(−i∙E·t/ħ) = a·cos(−E∙t/ħ) + i·a·sin(−E∙t/ħ) = a·cos(E∙t/ħ) − i·a·sin(E∙t/ħ)
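A quick numerical sketch may help to see what this means. The rest-energy value below is illustrative only (it is, roughly, the rest energy of an electron):

```python
import numpy as np

hbar = 1.054571817e-34   # J·s
E = 8.187e-14            # J: illustrative rest energy (that of an electron)
a = 1.0                  # amplitude, in arbitrary units

t = np.linspace(0, 10 * np.pi * hbar / E, 500)   # five full cycles
psi = a * np.exp(-1j * E * t / hbar)             # psi = a·e^(-i·E·t/hbar)

# The real and imaginary components are a cosine and (minus) a sine...
assert np.allclose(psi.real, a * np.cos(E * t / hbar))
assert np.allclose(psi.imag, -a * np.sin(E * t / hbar))
# ...while the modulus - and, hence, |psi|^2 - stays constant as the argument rotates:
assert np.allclose(np.abs(psi)**2, a**2)
```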
Let us remind ourselves of the geometry involved, which is illustrated below. Note that the argument of the wavefunction rotates clockwise with time, while the mathematical convention for measuring the phase angle (ϕ) is counter-clockwise.
Figure 2: Euler’s formula
If we assume the momentum p is all in the x-direction, then the p and x vectors will have the same direction, and the p∙x/ħ argument reduces to the product of the magnitudes p∙x/ħ. Most illustrations – such as the one below – will either freeze x or, else, t. Alternatively, one can google web animations varying both. The point is: we also have a two-dimensional oscillation here. These two dimensions are perpendicular to the direction of propagation of the wavefunction. For example, if the wavefunction propagates in the x-direction, then the oscillations are along the y- and z-axis, which we may refer to as the real and imaginary axis. Note how the phase difference between the cosine and the sine – the real and imaginary part of our wavefunction – appears to give some spin to the whole. I will come back to this.
Figure 3: Geometric representation of the wavefunction
Hence, if we would say these oscillations carry half of the total energy of the particle, then we may refer to the real and imaginary energy of the particle respectively, and the interplay between the real and the imaginary part of the wavefunction may then describe how energy propagates through space over time.
Let us consider, once again, a particle at rest. Hence, p = 0 and the (elementary) wavefunction reduces to ψ = a·e^(−i∙E·t/ħ). Hence, the angular velocity of both oscillations, at some point x, is given by ω = −E/ħ. Now, the energy of our particle includes all of the energy – kinetic, potential and rest energy – and is, therefore, equal to E = mc².
Can we, somehow, relate this to the m·a²·ω² energy formula for our V-2 perpetuum mobile? Our wavefunction has an amplitude too. Now, if the oscillations of the real and imaginary wavefunction store the energy of our particle, then their amplitude will surely matter. In fact, the energy of an oscillation is, in general, proportional to the square of the amplitude: E ∝ a². We may, therefore, think that the a² factor in the E = m·a²·ω² energy formula will surely be relevant as well.
However, here is a complication: an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. To calculate the contribution of each wave to the total, both ai as well as Ei will matter.
What is Ei? Ei varies around some average E, which we can associate with some average mass m: m = E/c². The Uncertainty Principle kicks in here. The analysis becomes more complicated, but a formula such as the one below might make sense:

[…]

We can re-write this as:

[…]

What is the meaning of this equation? We may look at it as some sort of physical normalization condition when building up the Fourier sum. Of course, we should relate this to the mathematical normalization condition for the wavefunction. Our intuition tells us that the probabilities must be related to the energy densities, but how exactly? We will come back to this question in a moment. Let us first think some more about the enigma: what is mass?
Before we do so, let us quickly calculate the value of c²ħ²: it is about 1×10⁻⁵¹ N²∙m⁴. Let us also do a dimensional analysis: the physical dimensions of the E = m·a²·ω² equation make sense if we express m in kg, a in m, and ω in rad/s. We then get: [E] = kg∙m²/s² = (N∙s²/m)∙m²/s² = N∙m = J. The dimensions of the left- and right-hand side of the physical normalization condition are N³∙m⁵.
III. What is mass?
We came up, playfully, with a meaningful interpretation for energy: it is a two-dimensional oscillation of mass. But what is mass? A new aether theory is, of course, not an option, but then what is it that is oscillating? To understand the physics behind equations, it is always good to do an analysis of the physical dimensions in the equation. Let us start with Einstein’s energy equation once again. If we want to look at mass, we should re-write it as m = E/c²:
[m] = [E/c²] = J/(m/s)² = N·m∙s²/m² = N·s²/m = kg
This is not very helpful. It only reminds us of Newton’s definition of mass: mass is that which gets accelerated by a force. At this point, we may want to think of the physical significance of the absolute nature of the speed of light. Einstein’s E = mc² equation implies the ratio between the energy and the mass of any particle is always the same, so we can write, for example:

c² = E/m

This reminds us of the ω² = C⁻¹/L or ω² = k/m formulas for harmonic oscillators once again.[13] The key difference is that the ω² = C⁻¹/L and ω² = k/m formulas introduce two or more degrees of freedom.[14] In contrast, c² = E/m is valid for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light c emerges here as the defining property of spacetime – the resonant frequency, so to speak. We have no further degrees of freedom here.
The Planck-Einstein relation (for photons) and the de Broglie equation (for matter-particles) have an interesting feature: both imply that the energy of the oscillation is proportional to the frequency, with Planck’s constant as the constant of proportionality. Now, for one-dimensional oscillations – think of a guitar string, for example – we know the energy will be proportional to the square of the frequency. It is a remarkable observation: the two-dimensional matter-wave, or the electromagnetic wave, gives us two waves for the price of one, so to speak, each carrying half of the total energy of the oscillation but, as a result, we get a proportionality between E and f instead of between E and f².
However, such reflections do not answer the fundamental question we started out with: what is mass? At this point, it is hard to go beyond the circular definition that is implied by Einstein’s formula: energy is a two-dimensional oscillation of mass, and mass packs energy, and c emerges as the property of spacetime that defines how exactly.
When everything is said and done, this does not go beyond stating that mass is some scalar field. Now, a scalar field is, quite simply, some real number that we associate with a position in spacetime. The Higgs field is a scalar field but, of course, the theory behind it goes much beyond stating that we should think of mass as some scalar field. The fundamental question is: why and how does energy, or matter, condense into elementary particles? That is what the Higgs mechanism is about but, as this paper is exploratory only, we cannot even start explaining the basics of it.
What we can do, however, is look at the wave equation again (Schrödinger’s equation), as we can now analyze it as an energy diffusion equation.
IV. Schrödinger’s equation as an energy diffusion equation
The interpretation of Schrödinger’s equation as a diffusion equation is straightforward. Feynman (Lectures, III-16-1) briefly summarizes it as follows:
“We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”[17]
Let us review the basic math. For a particle moving in free space – with no external force fields acting on it – there is no potential (U = 0) and, therefore, the Uψ term disappears. Therefore, Schrödinger’s equation reduces to:
∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)
The ubiquitous diffusion equation in physics is:
∂φ(x, t)/∂t = D·∇²φ(x, t)
The structural similarity is obvious. The key difference between both equations is that the wave equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations[18]:
1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇²ψ)
2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇²ψ)
These equations make us think of the equations for an electromagnetic wave in free space (no stationary charges or currents):
1. ∂B/∂t = –∇×E
2. ∂E/∂t = c²·∇×B
The above equations effectively describe a propagation mechanism in spacetime, as illustrated below.
Figure 4: Propagation mechanisms
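We can easily check, numerically, that the elementary wavefunction effectively satisfies the pair of real-valued equations we derived from Schrödinger’s equation above. The sketch below uses natural units (ħ = meff = 1), which is an assumption for illustration only:

```python
import numpy as np

hbar, m = 1.0, 1.0                 # natural units, for illustration only
k = 2.0
omega = hbar * k**2 / (2 * m)      # the dispersion relation (see section VI)

x = np.linspace(0, 10, 2000)
t = 0.7
psi = np.exp(1j * (k * x - omega * t))    # elementary wavefunction at time t

d2psi = np.gradient(np.gradient(psi, x), x)   # numerical Laplacian (1D)
dpsi_dt = -1j * omega * psi                   # exact time derivative

# Equation 1: Re(dpsi/dt) = -(1/2)(hbar/m)·Im(laplacian(psi))
# Equation 2: Im(dpsi/dt) = +(1/2)(hbar/m)·Re(laplacian(psi))
interior = slice(5, -5)   # ignore the boundary artefacts of np.gradient
assert np.allclose(dpsi_dt.real[interior], -(hbar / (2*m)) * d2psi.imag[interior], atol=1e-3)
assert np.allclose(dpsi_dt.imag[interior],  (hbar / (2*m)) * d2psi.real[interior], atol=1e-3)
```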
The Laplacian operator (∇²), when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m²). In this case, it is operating on ψ(x, t), so what is the dimension of our wavefunction ψ(x, t)? To answer that question, we should analyze the diffusion constant in Schrödinger’s equation, i.e. the (1/2)·(ħ/meff) factor:
1. As a mathematical constant of proportionality, it will quantify the relationship between both derivatives (i.e. the time derivative and the Laplacian);
2. As a physical constant, it will ensure the physical dimensions on both sides of the equation are compatible.
Now, the ħ/meff factor is expressed in (N·m·s)/(N·s²/m) = m²/s. Hence, it does ensure the dimensions on both sides of the equation are, effectively, the same: the ∂ψ/∂t time derivative adds a dimension of s⁻¹ while, as mentioned above, the ∇² operator adds a dimension of m⁻². However, this does not solve our basic question: what is the dimension of the real and imaginary part of our wavefunction?
At this point, mainstream physicists will say: it does not have a physical dimension, and there is no geometric interpretation of Schrödinger’s equation. One may argue, effectively, that its argument, (px – E∙t)/ħ, is just a number and, therefore, that the real and imaginary part of ψ is also just some number.
To this, we may object that ħ may be looked at as a mathematical scaling constant only. If we do that, then the argument of ψ will, effectively, be expressed in action units, i.e. in N·m·s. It then does make sense to also associate a physical dimension with the real and imaginary part of ψ. What could it be?
We may have a closer look at Maxwell’s equations for inspiration here. The electric field vector is expressed in newton (the unit of force) per unit of charge (coulomb). Now, there is something interesting here. The physical dimension of the magnetic field is N/C divided by m/s.[19] We may write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90 degrees. Hence, we may boldly write: B = (1/c)∙ex×E = (1/c)∙i∙E. This allows us to also geometrically interpret Schrödinger’s equation in the way we interpreted it above (see Figure 3).[20]
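The claim that multiplication by i amounts to a rotation by 90 degrees is easily verified. The snippet below uses an arbitrary complex number:

```python
# Multiplying by i rotates a complex number counterclockwise by 90 degrees;
# multiplying by -i rotates it clockwise by 90 degrees.
z = 3 + 4j
assert 1j * z == -4 + 3j      # (x, y) -> (-y, x): a counterclockwise quarter-turn
assert -1j * z == 4 - 3j      # (x, y) -> (y, -x): a clockwise quarter-turn
assert abs(1j * z) == abs(z)  # rotations preserve the magnitude
```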
Still, we have not answered the question as to what the physical dimension of the real and imaginary part of our wavefunction should be. At this point, we may be inspired by the structural similarity between Newton’s and Coulomb’s force laws:

F = G·m1·m2/r² and F = q1·q2/(4πε0·r²)

Hence, if the electric field vector E is expressed in force per unit charge (N/C), then we may want to think of associating the real part of our wavefunction with a force per unit mass (N/kg). We can, of course, do a substitution here, because the mass unit (1 kg) is equivalent to 1 N·s²/m. Hence, our N/kg dimension becomes:
N/kg = N/(N·s²/m) = m/s²
What is this: m/s²? Is that the dimension of the a·cosθ term in the a·e^(−iθ) = a·cosθ − i·a·sinθ wavefunction?
My answer is: why not? Think of it: m/s² is the physical dimension of acceleration: the increase or decrease in velocity (m/s) per second. It ensures the wavefunction for any particle – matter-particles or particles with zero rest mass (photons) – and the associated wave equation (which has to be the same for all, as the spacetime we live in is one) are mutually consistent.
In this regard, we should think of how we would model a gravitational wave. The physical dimension would surely be the same: force per mass unit. It all makes sense: wavefunctions may, perhaps, be interpreted as traveling distortions of spacetime, i.e. as tiny gravitational waves.
V. Energy densities and flows
Pursuing the geometric equivalence between the equations for an electromagnetic wave and Schrödinger’s equation, we can now, perhaps, see if there is an equivalent for the energy density. For an electromagnetic wave, we know that the energy density is given by the following formula:

u = ε0·E²/2 + ε0·c²·B²/2

E and B are the electric and magnetic field vector respectively. The Poynting vector will give us the directional energy flux, i.e. the energy flow per unit area per unit time. We write:

S = ε0·c²·E×B, with ∂u/∂t = −∇∙S

Needless to say, the ∇∙ operator is the divergence and, therefore, gives us the magnitude of a (vector) field’s source or sink at a given point. To be precise, the divergence gives us the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. In this case, it gives us the volume density of the flux of S.
We can analyze the dimensions of the equation for the energy density as follows:
1. E is measured in newton per coulomb, so [E∙E] = [E²] = N²/C².
2. B is measured in (N/C)/(m/s), so we get [B∙B] = [B²] = (N²/C²)·(s²/m²). However, the dimension of our c² factor is m²/s², and so we’re also left with N²/C².
3. The ϵ0 is the electric constant, aka the vacuum permittivity. As a physical constant, it should ensure the dimensions on both sides of the equation work out, and they do: [ε0] = C²/(N·m²) and, therefore, if we multiply that with N²/C², we find that u is expressed in J/m³.[21]
Replacing the newton per coulomb unit (N/C) by the newton per kg unit (N/kg) in the formulas above should give us the equivalent of the energy density for the wavefunction. We just need to substitute ϵ0 for an equivalent constant. We may want to give it a try. If the energy densities – which are also mass densities, obviously – can be calculated, then the probabilities should be proportional to them.
Let us first see what we get for a photon, assuming the electromagnetic wave represents its wavefunction. Substituting B for (1/c)∙i∙E or for −(1/c)∙i∙E gives us the following result:

u = (ε0/2)·(E² + c²·B²) = (ε0/2)·[E² + c²·(1/c²)·i²·E²] = (ε0/2)·(E² − E²) = 0

Zero!? An unexpected result! Or not? We have no stationary charges and no currents: only an electromagnetic wave in free space. Hence, the local energy conservation principle needs to be respected at all points in space and in time. The geometry makes sense of the result: for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously, as shown below.[22] This is because their phase is the same.
Figure 5: Electromagnetic wave: E and B
Should we expect a similar result for the energy densities that we would associate with the real and imaginary part of the matter-wave? For the matter-wave, we have a phase difference between a·cosθ and a·sinθ, which gives a different picture of the propagation of the wave (see Figure 3).[23] In fact, the geometry suggests some inherent spin, which is interesting. I will come back to this. Let us first guess those densities. Making abstraction of any scaling constants, we may write:

u = (a·cosθ)² + (a·sinθ)² = a²·(cos²θ + sin²θ) = a²

We get what we hoped to get: the absolute square of our amplitude is, effectively, an energy density!
|ψ|² = |a·e^(−i∙E·t/ħ)|² = a² = u
This is very deep. A photon has no rest mass, so it borrows and returns energy from empty space as it travels through it. In contrast, a matter-wave carries energy and, therefore, has some (rest) mass. It is therefore associated with an energy density, and this energy density gives us the probabilities. Of course, we need to fine-tune the analysis to account for the fact that we have a wave packet rather than a single wave, but that should be feasible.
As mentioned, the phase difference between the real and imaginary part of our wavefunction (a cosine and a sine function) appears to give some spin to our particle. We do not have this particularity for a photon. Of course, photons are bosons – integer-spin particles – while elementary matter-particles are fermions with spin-1/2. Hence, our geometric interpretation of the wavefunction suggests that, after all, there may be some more intuitive explanation of the fundamental dichotomy between bosons and fermions, which puzzled even Feynman:
“Why is it that particles with half-integral spin are Fermi particles, whereas particles with integral spin are Bose particles? We apologize for the fact that we cannot give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. He has shown that the two must necessarily go together, but we have not been able to find a way of reproducing his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics. This probably means that we do not have a complete understanding of the fundamental principle involved.” (Feynman, Lectures, III-4-1)
The physical interpretation of the wavefunction, as presented here, may provide some better understanding of ‘the fundamental principle involved’: the physical dimension of the oscillation is just very different. That is all: it is force per unit charge for photons, and force per unit mass for matter-particles. We will examine the question of spin somewhat more carefully in section VII. Let us first examine the matter-wave some more.
VI. Group and phase velocity of the matter-wave
The geometric representation of the matter-wave (see Figure 3) suggests a traveling wave and, yes, of course: the matter-wave effectively travels through space and time. But what is traveling, exactly? It is the pulse – or the signal – only: the phase velocity of the wave is just a mathematical concept and, even in our physical interpretation of the wavefunction, the same is true for the group velocity of our wave packet. The oscillation is two-dimensional, but perpendicular to the direction of travel of the wave. Hence, nothing actually moves with our particle.
Here, we should also reiterate that we did not answer the question as to what is oscillating up and down and/or sideways: we only associated a physical dimension with the components of the wavefunction – newton per kg (force per unit mass), to be precise. We were inspired to do so because of the physical dimension of the electric and magnetic field vectors (newton per coulomb, i.e. force per unit charge) we associate with electromagnetic waves which, for all practical purposes, we currently treat as the wavefunction for a photon. This made it possible to calculate the associated energy densities and a Poynting vector for the energy flow. In addition, we showed that Schrödinger’s equation itself then becomes a diffusion equation for energy. However, let us now focus some more on the asymmetry which is introduced by the phase difference between the real and the imaginary part of the wavefunction. Look at the mathematical shape of the elementary wavefunction once again:
ψ = a·e^(−i[E·t − p∙x]/ħ) = a·cos(p∙x/ħ − E∙t/ħ) + i·a·sin(p∙x/ħ − E∙t/ħ)
The minus sign in the argument of our sine and cosine function defines the direction of travel: an F(x−v∙t) wavefunction will always describe some wave that is traveling in the positive x-direction (with the wave velocity), while an F(x+v∙t) wavefunction will travel in the negative x-direction. For a geometric interpretation of the wavefunction in three dimensions, we need to agree on how to define i or, what amounts to the same, a convention on how to define clockwise and counterclockwise directions: if we look at a clock from the back, then its hand will be moving counterclockwise. So we need to establish the equivalent of the right-hand rule. However, let us not worry about that now. Let us focus on the interpretation. To ease the analysis, we’ll assume we’re looking at a particle at rest. Hence, p = 0, and the wavefunction reduces to:
ψ = a·e^(−i∙E0·t/ħ) = a·cos(−E0∙t/ħ) + i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ)
E0 is, of course, the rest energy of our particle and, now that we are here, we should probably wonder whose time we are talking about: is it our time, or is it the proper time of our particle? Well… In this situation, we are both at rest, so it does not matter: t is, effectively, the proper time, so perhaps we should write it as t0. It does not matter. You can see what we expect to see: E0/ħ pops up as the natural frequency of our matter-particle: (E0/ħ)∙t = ω∙t. Remembering the ω = 2π·f = 2π/T and T = 1/f formulas, and noting that ħ = h/2π, we can associate a period and a frequency with this wave:
T = 2π·(ħ/E0) = h/E0 ⇔ f = E0/h = m0·c²/h
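To get a feel for the magnitudes involved, we can quickly calculate this natural period and frequency for an electron. The rest-energy value below is illustrative:

```python
h = 6.62607015e-34   # J·s
E0 = 8.187e-14       # J: the rest energy of an electron (~0.511 MeV)

T = h / E0           # the natural unit of time of the particle
f = E0 / h           # its natural frequency

print(f"T = {T:.3e} s")    # ~8.1e-21 s
print(f"f = {f:.3e} Hz")   # ~1.24e20 oscillations per second
```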
This is interesting, because we can look at the period as a natural unit of time for our particle. What about the wavelength? That is tricky because we need to distinguish between group and phase velocity here. The group velocity (vg) should be zero here, because we assume our particle does not move. In contrast, the phase velocity is given by vp = λ·f = (2π/k)·(ω/2π) = ω/k. In fact, we’ve got something funny here: the wavenumber k = p/ħ is zero, because we assume the particle is at rest, so p = 0. So we have a division by zero here, which is rather strange. What do we get assuming the particle is not at rest? We write:
vp = ω/k = (E/ħ)/(p/ħ) = E/p = E/(m·vg) = (m·c²)/(m·vg) = c²/vg
This is interesting: it establishes a reciprocal relation between the phase and the group velocity, with c² as a simple scaling constant. Indeed, the graph below shows the shape of the function does not change with the value of c, and we may also re-write the relation above as:
vp/c = βp = c/vg = 1/βg = 1/(vg/c)
Figure 6: Reciprocal relation between phase and group velocity
We can also write the mentioned relationship as vp·vg = c², which reminds us of the relationship between the electric and magnetic constant: (1/ε0)·(1/μ0) = c². This is interesting in light of the fact we can re-write this as (c·ε0)·(c·μ0) = 1, which shows electricity and magnetism are just two sides of the same coin, so to speak.[24]
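The reciprocal relation is trivial to check numerically. The sketch below just sweeps the group velocity from almost zero to almost c:

```python
import numpy as np

c = 299792458.0                         # m/s

beta_g = np.linspace(0.01, 0.99, 99)    # group velocity as a fraction of c
v_group = beta_g * c
v_phase = c**2 / v_group                # the reciprocal relation derived above

assert np.allclose(v_phase * v_group, c**2)   # the product is always c^2
assert np.all(v_phase > c)                    # vp is superluminal whenever vg < c
```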
Interesting, but how do we interpret the math? What about the implications of the zero value for the wavenumber k = p/ħ? We would probably like to think it implies the elementary wavefunction should always be associated with some momentum, because the concept of zero momentum clearly leads to weird math: something times zero cannot be equal to c²! Such an interpretation is also consistent with the Uncertainty Principle: if Δx·Δp ≥ ħ, then neither Δx nor Δp can be zero. In other words, the Uncertainty Principle tells us that the idea of a pointlike particle actually being at some specific point in time and in space does not make sense: it has to move. It tells us that our concepts of dimensionless points in time and space are mathematical notions only. Actual particles – including photons – are always a bit spread out, so to speak, and – importantly – they have to move.
For a photon, this is self-evident. It has no rest mass, no rest energy, and, therefore, it is going to move at the speed of light itself. We write: p = m·c = m·c²/c = E/c. Using the relationship above, we get:
vp = ω/k = (E/ħ)/(p/ħ) = E/p = c ⇒ vg = c²/vp = c²/c = c
This is good: we started out with some reflections on the matter-wave, but here we get an interpretation of the electromagnetic wave as a wavefunction for the photon. But let us get back to our matter-wave. In regard to our interpretation of a particle having to move, we should remind ourselves, once again, of the fact that an actual particle is always localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e^(−i[E·t − p∙x]/ħ) or, for a particle at rest, the ψ = a·e^(−i∙E·t/ħ) function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Indeed, in section II, we showed that each of these wavefunctions will contribute some energy to the total energy of the wave packet and that, to calculate the contribution of each wave to the total, both ai as well as Ei matter. This may or may not resolve the apparent paradox. Let us look at the group velocity.
To calculate a meaningful group velocity, we must assume that vg = ∂ωi/∂ki = ∂(Ei/ħ)/∂(pi/ħ) = ∂Ei/∂pi exists. So we must have some dispersion relation. How do we calculate it? We need to calculate ωi as a function of ki here, or Ei as a function of pi. How do we do that? Well… There are a few ways to go about it, but one interesting way of doing it is to re-write Schrödinger’s equation as we did, i.e. by distinguishing the real and imaginary parts of the ∂ψ/∂t = i·[ħ/(2m)]·∇²ψ wave equation and, hence, re-write it as the following pair of two equations:
1. Re(∂ψ/∂t) = −[ħ/(2meff)]·Im(∇²ψ) ⇔ ω·cos(kx − ωt) = k²·[ħ/(2meff)]·cos(kx − ωt)
2. Im(∂ψ/∂t) = [ħ/(2meff)]·Re(∇²ψ) ⇔ ω·sin(kx − ωt) = k²·[ħ/(2meff)]·sin(kx − ωt)
Both equations imply the following dispersion relation:
ω = ħ·k²/(2meff)
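We can check that this dispersion relation gives us a sensible group velocity: differentiating it with respect to k should give us the classical velocity p/m. A minimal numerical sketch (using the electron mass, for illustration only):

```python
import numpy as np

hbar = 1.054571817e-34   # J·s
m = 9.1093837015e-31     # kg: the electron mass, for illustration

k = np.linspace(1e9, 1e11, 1000)   # wavenumbers, in 1/m
omega = hbar * k**2 / (2 * m)      # the dispersion relation above

v_group = np.gradient(omega, k)    # v_g = d(omega)/dk, numerically
v_classical = hbar * k / m         # p/m, with p = hbar*k

# The group velocity of the wave packet is the classical particle velocity:
assert np.allclose(v_group, v_classical, rtol=1e-3)
```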
Of course, we need to think about the subscripts now: we have ωi, ki, but… What about meff or, dropping the subscript, m? Do we write it as mi? If so, what is it? Well… It is the equivalent mass of Ei, obviously, and so we get it from the mass-energy equivalence relation: mi = Ei/c². It is a fine point, but one most people forget about: they usually just write m. However, if there is uncertainty in the energy, then Einstein’s mass-energy relation tells us we must have some uncertainty in the (equivalent) mass too. Here, I should refer back to Section II: Ei varies around some average energy E and, therefore, the Uncertainty Principle kicks in.
VII. Explaining spin
The elementary wavefunction vector – i.e. the vector sum of the real and imaginary component – rotates around the x-axis, which gives us the direction of propagation of the wave (see Figure 3). Its magnitude remains constant. In contrast, the magnitude of the electromagnetic vector – defined as the vector sum of the electric and magnetic field vectors – oscillates between zero and some maximum (see Figure 5).
We already mentioned that the rotation of the wavefunction vector appears to give some spin to the particle. Of course, a circularly polarized wave would also appear to have spin (think of the E and B vectors rotating around the direction of propagation – as opposed to oscillating up and down or sideways only). In fact, circularly polarized light does carry angular momentum, as the equivalent mass of its energy may be thought of as rotating as well. But so here we are looking at a matter-wave.
The basic idea is the following: if we look at ψ = a·e^(−i∙E·t/ħ) as some real vector – as a two-dimensional oscillation of mass, to be precise – then we may associate its rotation around the direction of propagation with some torque. The illustration below reminds us of the math here.
Figure 7: Torque and angular momentum vectors
A torque on some mass about a fixed axis gives it angular momentum, which we can write as the vector cross-product L = r×p or, perhaps easier for our purposes here, as the product of an angular velocity (ω) and rotational inertia (I), aka the moment of inertia or the angular mass. We write:
L = I·ω
Note we can write L and ω in boldface here because they are (axial) vectors. If we consider their magnitudes only, we write L = I·ω (no boldface). We can now do some calculations. Let us start with the angular velocity. In our previous posts, we showed that the period of the matter-wave is equal to T = 2π·(ħ/E0). Hence, the angular velocity must be equal to:
ω = 2π/T = 2π/[2π·(ħ/E0)] = E0/ħ
We also know the distance r, so that is the magnitude of r in the L = r×p vector cross-product: it is just a, the amplitude of ψ = a·e^(−i∙E·t/ħ). Now, the momentum (p) is the product of a linear velocity (v) – in this case, the tangential velocity – and some mass (m): p = m·v. If we switch to scalar instead of vector quantities, then the (tangential) velocity is given by v = r·ω. So now we only need to think about what we should use for m or, if we want to work with the angular velocity (ω), the angular mass (I). Here we need to make some assumption about the mass (or energy) distribution. Now, it may or may not make sense to assume the energy in the oscillation – and, therefore, the mass – is distributed uniformly. In that case, we may use the formula for the angular mass of a solid cylinder: I = m·r²/2. If we keep the analysis non-relativistic, then m = m0. Of course, the energy-mass equivalence tells us that m0 = E0/c². Hence, this is what we get:
L = I·ω = (m0·r²/2)·(E0/ħ) = (1/2)·a²·(E0/c²)·(E0/ħ) = a²·E0²/(2·ħ·c²)
Does it make sense? Maybe. Maybe not. Let us do a dimensional analysis: that won’t check our logic, but it makes sure we made no mistakes when mapping mathematical and physical spaces. We have m²·J² = m²·N²·m² = N²·m⁴ in the numerator and N·m·s·m²/s² in the denominator. Hence, the dimensions work out: we get N·m·s as the dimension for L, which is, effectively, the physical dimension of angular momentum. It is also the action dimension, of course, and that cannot be a coincidence. Also note that the E = mc² equation allows us to re-write it as:
L = a²·E0²/(2·ħ·c²) = a²·m0²·c²/(2·ħ)
Of course, in quantum mechanics, we associate spin with the magnetic moment of a charged particle, not with its mass as such. Is there a way to link the formula above to the one we have for the quantum-mechanical angular momentum, which is also measured in N·m·s units, and which can only take on one of two possible values: J = +ħ/2 and −ħ/2? It looks like a long shot, right? How do we go from (1/2)·a²·m0²·c²/ħ to ±(1/2)∙ħ? Let us do a numerical example. The energy of an electron is typically 0.510 MeV ≈ 8.1871×10⁻¹⁴ N∙m, and a… What value should we take for a?
We have an obvious trio of candidates here: the Bohr radius, the classical electron radius (aka the Thomson scattering length), and the Compton scattering radius.
Let us start with the Bohr radius, so that is about 0.529×10⁻¹⁰ m. We get L = a²·E0²/(2·ħ·c²) = 9.9×10⁻³¹ N∙m∙s. Now that is about 1.88×10⁴ times ħ/2. That is a huge factor. The Bohr radius cannot be right: we are not looking at an electron in an orbital here. To show it does not make sense, we may want to double-check the analysis by doing the calculation in another way. We said each oscillation will always pack 6.626070040(81)×10⁻³⁴ joule in energy. So our electron should pack about 1.24×10²⁰ oscillations. The angular momentum (L) we get when using the Bohr radius for a and the value of 6.626×10⁻³⁴ joule for E0 is equal to 6.49×10⁻⁷¹ N∙m∙s. So that is the angular momentum per oscillation. When we multiply this with the number of oscillations (1.24×10²⁰), we get about 8.01×10⁻⁵¹ N∙m∙s, so that is a totally different number.
The classical electron radius is about 2.818×10⁻¹⁵ m. We get an L that is equal to about 2.81×10⁻³⁹ N∙m∙s, so now it is a tiny fraction of ħ/2! Hence, this leads us nowhere. Let us go for our last chance to get a meaningful result! Let us use the Compton scattering length, so that is about 2.42631×10⁻¹² m.
This gives us an L of 2.08×10⁻³³ N∙m∙s, which is only 20 times ħ. This is not so bad, but is it good enough? Let us calculate it the other way around: what value should we take for a so as to ensure L = a²·E0²/(2·ħ·c²) = ħ/2? Let us write it out:

a² = ħ²·c²/E0² ⇔ a = ħ·c/E0 = ħ/(m0·c)
In fact, this is the formula for the so-called reduced Compton wavelength. This is perfect. We found what we wanted to find. Substituting this value for a (you can calculate it: it is about 3.8616×10⁻¹³ m), we get what we should find:

L = a²·E0²/(2·ħ·c²) = [ħ²·c²/E0²]·E0²/(2·ħ·c²) = ħ/2
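The reader can verify the arithmetic with a few lines of code. The constants below are standard values; the calculation itself just implements the formulas above:

```python
import numpy as np

hbar = 1.054571817e-34   # J·s
c = 299792458.0          # m/s
E0 = 8.18710565e-14      # J: the rest energy of an electron

def L(a):
    # The angular momentum formula derived above: L = a^2·E0^2/(2·hbar·c^2)
    return a**2 * E0**2 / (2 * hbar * c**2)

a = hbar * c / E0        # the amplitude that should give us L = hbar/2
print(f"a = {a:.4e} m")  # ~3.8616e-13 m: the reduced Compton wavelength
assert np.isclose(L(a), hbar / 2)
```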
This is a rather spectacular result, and one that would – a priori – support the interpretation of the wavefunction that is being suggested in this paper.
VIII. The boson-fermion dichotomy
Let us do some more thinking on the boson-fermion dichotomy. Again, we should remind ourselves that an actual particle is localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·e^(−i[E·t − p∙x]/ħ) or, for a particle at rest, the ψ = a·e^(−i∙E·t/ħ) function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. Now, we can have another wild but logical theory about this.
Think of the apparent right-handedness of the elementary wavefunction: surely, Nature can’t be bothered about our convention of measuring phase angles clockwise or counterclockwise. Also, the angular momentum can be positive or negative: J = +ħ/2 or −ħ/2. Hence, we would probably like to think that an actual particle – think of an electron, or whatever other particle you’d think of – may consist of right-handed as well as left-handed elementary waves. To be precise, we may think they either consist of (elementary) right-handed waves or, else, of (elementary) left-handed waves. An elementary right-handed wave would be written as:
ψ(θi) = ai·(cosθi + i·sinθi)
In contrast, an elementary left-handed wave would be written as:
ψ(θi) = ai·(cosθi − i·sinθi)
How does that work out with the E0·t argument of our wavefunction? Position is position, and direction is direction, but time? Time has only one direction, but Nature surely does not care how we count time: counting like 1, 2, 3, etcetera or like −1, −2, −3, etcetera is just the same. If we count like 1, 2, 3, etcetera, then we write our wavefunction like:
ψ = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ)
If we count time like −1, −2, −3, etcetera then we write it as:
ψ = a·cos(−E0∙t/ħ) − i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) + i·a·sin(E0∙t/ħ)
Hence, it is just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! This, then, should explain why we can have either positive or negative quantum-mechanical spin (+ħ/2 or −ħ/2). It is the usual thing: we have two mathematical possibilities here, and so we must have two physical situations that correspond to it.
It is only natural. If we have left- and right-handed photons – or, generalizing, left- and right-handed bosons – then we should also have left- and right-handed fermions (electrons, protons, etcetera). Back to the dichotomy. The textbook analysis of the dichotomy between bosons and fermions may be epitomized by Richard Feynman’s Lecture on it (Feynman, III-4), which is confusing and – I would dare to say – even inconsistent: how are photons or electrons supposed to know that they need to interfere with a positive or a negative sign? They are not supposed to know anything: knowledge is part of our interpretation of whatever it is that is going on there.
Hence, it is probably best to keep it simple, and think of the dichotomy in terms of the different physical dimensions of the oscillation: newton per kg versus newton per coulomb. And then, of course, we should also note that matter-particles have a rest mass and, therefore, actually carry charge. Photons do not. But both are two-dimensional oscillations, and the point is: the so-called vacuum – and the rest mass of our particle (which is zero for the photon and non-zero for everything else) – give us the natural frequency for both oscillations, which is beautifully summed up in that remarkable equation for the group and phase velocity of the wavefunction, which applies to photons as well as matter-particles:
(vphase/c)·(vgroup/c) = 1 ⇔ vp·vg = c²
The final question then is: why would photons carry no spin? Well… We should first remind ourselves of the fact that they do have spin when circularly polarized.[25] Here we may think of the rotation of the equivalent mass of their energy. However, if they are linearly polarized, then there is no spin. Even for circularly polarized waves, the spin angular momentum of photons is a weird concept. If photons have no (rest) mass, then they cannot carry any charge. They should, therefore, not have any magnetic moment. Indeed, what I wrote above shows that an explanation of quantum-mechanical spin requires both mass as well as charge.[26]
IX. Concluding remarks
There are, of course, other ways to look at the matter – literally. For example, we can imagine two-dimensional oscillations as circular rather than linear oscillations. Think of a tiny ball, whose center of mass stays where it is, as depicted below. Any rotation – around any axis – will be some combination of a rotation around the two other axes. Hence, we may want to think of a two-dimensional oscillation as an oscillation of a polar and azimuthal angle.
Figure 8: Two-dimensional circular movement
The point of this paper is not to make any definite statements. That would be foolish. Its objective is just to challenge the simplistic mainstream viewpoint on the reality of the wavefunction. Stating that it is a mathematical construct only without physical significance amounts to saying it has no meaning at all. That is, clearly, a non-sustainable proposition.
The interpretation that is offered here looks at amplitude waves as traveling fields. Their physical dimension may be expressed in force per mass unit, as opposed to electromagnetic waves, whose amplitudes are expressed in force per (electric) charge unit. Also, the amplitudes of matter-waves incorporate a phase factor, but this may actually explain the rather enigmatic dichotomy between fermions and bosons and is, therefore, an added bonus.
The interpretation that is offered here has some advantages over other explanations, as it explains the how of diffraction and interference. However, while it offers a great explanation of the wave nature of matter, it does not explain its particle nature: while we think of the energy as being spread out, we will still observe electrons and photons as pointlike particles once they hit the detector. Why is it that a detector can sort of ‘hook’ the whole blob of energy, so to speak?
The interpretation of the wavefunction that is offered here does not explain this. Hence, the complementarity principle of the Copenhagen interpretation of the wavefunction surely remains relevant.
Appendix 1: The de Broglie relations and energy
The 1/2 factor in Schrödinger’s equation is related to the concept of the effective mass (meff). It is easy to make the wrong calculations. For example, when playing with the famous de Broglie relations – aka the matter-wave equations – one may be tempted to derive the following energy concept:
1. E = h·f and p = h/λ. Therefore, f = E/h and λ = h/p.
2. v = f·λ = (E/h)∙(h/p) = E/p
3. p = m·v. Therefore, E = v·p = m·v²
E = m·v²? This resembles the E = mc² equation and, therefore, one may be enthused by the discovery, especially because the m·v² also pops up when working with the Least Action Principle in classical mechanics, which states that the path that is followed by a particle will minimize the following integral:

∫ (K.E. − P.E.)·dt

Now, we can choose any reference point for the potential energy but, to reflect the energy conservation law, we can select a reference point that ensures the sum of the kinetic and the potential energy is zero throughout the time interval. If the force field is uniform, then the integrand will, effectively, be equal to K.E. − P.E. = m·v².[27]
However, that is classical mechanics and, therefore, not so relevant in the context of the de Broglie equations, and the apparent paradox should be solved by distinguishing between the group and the phase velocity of the matter wave.
Appendix 2: The concept of the effective mass
The effective mass – as used in Schrödinger’s equation – is a rather enigmatic concept. To make sure we are making the right analysis here, I should start by noting you will usually see Schrödinger’s equation written as:

i·ħ·∂ψ/∂t = −(1/2)·(ħ²/meff)·∇²ψ + U·ψ

This formulation includes a term with the potential energy (U). In free space (no potential), this term disappears, and the equation can be re-written as:
∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)
We just moved the i·ħ coefficient to the other side, noting that 1/i = –i. Now, in one-dimensional space, and assuming ψ is just the elementary wavefunction (so we substitute a·e^(−i∙[E·t − p∙x]/ħ) for ψ), this implies the following:
−i·a·(E/ħ)·e^(−i∙[E·t − p∙x]/ħ) = −i·(ħ/2meff)·a·(p²/ħ²)·e^(−i∙[E·t − p∙x]/ħ)
⇔ E = p²/(2meff) ⇔ meff = m∙(v/c)²/2 = m∙β²/2
It is an ugly formula: it resembles the kinetic energy formula (K.E. = m∙v²/2) but it is, in fact, something completely different. The β²/2 factor ensures the effective mass is always a fraction of the mass itself. To get rid of the ugly 1/2 factor, we may re-define meff as two times the old meff (hence, meffNEW = 2∙meffOLD), as a result of which the formula will look somewhat better:
meff = m∙(v/c)² = m∙β²
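As a quick illustration of how this re-defined effective mass behaves, consider the sketch below (the electron mass is used as an example):

```python
def m_eff(m, v, c=299792458.0):
    # The re-scaled effective mass: m_eff = m·(v/c)^2 = m·beta^2
    return m * (v / c)**2

m_electron = 9.1093837015e-31   # kg
for beta in (0.0, 0.1, 0.5, 1.0):
    print(beta, m_eff(m_electron, beta * 299792458.0))
# beta = 0 gives m_eff = 0, and beta = 1 gives m_eff = m, as noted below.
```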
We know β varies between 0 and 1 and, therefore, meff will vary between 0 and m. Feynman drops the subscript, and just writes meff as m in his textbook (see Feynman, III-19). On the other hand, the electron mass he uses there is also the electron mass that is used to calculate the size of an atom (see Feynman, III-2-4). As such, the two mass concepts are, effectively, mutually compatible. It is confusing because the same mass is often defined as the mass of a stationary electron (see, for example, the article on it in the online Wikipedia encyclopedia[28]).
In the context of the derivation of the electron orbitals, we do have the potential energy term – which is the equivalent of a source term in a diffusion equation – and that may explain why the above-mentioned meff = m∙(v/c)² = m∙β² formula does not apply.
References
This paper discusses general principles in physics only. Hence, references can be limited to references to physics textbooks only. For ease of reading, any reference to additional material has been limited to a more popular undergrad textbook that can be consulted online: Feynman’s Lectures on Physics (http://www.feynmanlectures.caltech.edu). References are per volume, per chapter and per section. For example, Feynman III-19-3 refers to Volume III, Chapter 19, Section 3.
Notes
[1] Of course, an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction ψ = a·e^(−i∙θ) = a·e^(−i[E·t − p∙x]/ħ) = a·(cosθ − i·sinθ). We must build a wave packet for that: a sum of wavefunctions, each with its own amplitude ak and its own argument θk = (Ek∙t – pk∙x)/ħ. This is dealt with in this paper as part of the discussion on the mathematical and physical interpretation of the normalization condition.
[2] The N/kg dimension immediately, and naturally, reduces to the dimension of acceleration (m/s²), thereby facilitating a direct interpretation in terms of Newton’s force law.
[3] In physics, a two-spring metaphor is more common. Hence, the pistons in the author’s perpetuum mobile may be replaced by springs.
[4] The author re-derives the equation for the Compton scattering radius in section VII of the paper.
[5] The magnetic force can be analyzed as a relativistic effect (see Feynman II-13-6). The dichotomy between the electric force as a polar vector and the magnetic force as an axial vector disappears in the relativistic four-vector representation of electromagnetism.
[6] For example, when using Schrödinger’s equation in a central field (think of the electron around a proton), the use of polar coordinates is recommended, as it ensures the symmetry of the Hamiltonian under all rotations (see Feynman III-19-3)
[7] This sentiment is usually summed up in the apocryphal quote: “God does not play dice.” The actual quote comes out of one of Einstein’s private letters to Cornelius Lanczos, another scientist who had also emigrated to the US. The full quote is as follows: “You are the only person I know who has the same attitude towards physics as I have: belief in the comprehension of reality through something basically simple and unified… It seems hard to sneak a look at God’s cards. But that He plays dice and uses ‘telepathic’ methods… is something that I cannot believe for a single moment.” (Helen Dukas and Banesh Hoffman, Albert Einstein, the Human Side: New Glimpses from His Archives, 1979)
[8] Of course, both are different velocities: ω is an angular velocity, while v is a linear velocity: ω is measured in radians per second, while v is measured in meter per second. However, the definition of a radian implies radians are measured in distance units. Hence, the physical dimensions are, effectively, the same. As for the formula for the total energy of an oscillator, we should actually write: E = m·a²∙ω²/2. The additional factor, a, is the (maximum) amplitude of the oscillator.
[9] We also have a 1/2 factor in the E = m·v²/2 formula. Two remarks may be made here. First, it may be noted this is a non-relativistic formula and, more importantly, incorporates kinetic energy only. Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as K.E. = E − E0 = mv·c² − m0·c² = γ·m0·c² − m0·c² = m0·c²·(γ − 1). As for the exclusion of the potential energy, we may note that we may choose our reference point for the potential energy such that the kinetic and potential energy mirror each other. The energy concept that then emerges is the one that is used in the context of the Principle of Least Action: it equals E = m·v². Appendix 1 provides some notes on that.
[10] Instead of two cylinders with pistons, one may also think of connecting two springs with a crankshaft.
[11] It is interesting to note that we may look at the energy in the rotating flywheel as potential energy because it is energy that is associated with motion, albeit circular motion. In physics, one may associate a rotating object with kinetic energy using the rotational equivalent of mass and linear velocity, i.e. rotational inertia (I) and angular velocity ω. The kinetic energy of a rotating object is then given by K.E. = (1/2)·I·ω².
[12] Because of the sideways motion of the connecting rods, the sinusoidal function will describe the linear motion only approximately, but you can easily imagine the idealized limit situation.
[13] The ω² = 1/(LC) formula gives us the natural or resonant frequency for an electric circuit consisting of a resistor (R), an inductor (L), and a capacitor (C). Writing the formula as ω² = C⁻¹/L introduces the concept of elastance, which is the equivalent of the mechanical stiffness (k) of a spring.
[14] The resistance in an electric circuit introduces a damping factor. When analyzing a mechanical spring, one may also want to introduce a drag coefficient. Both are usually defined as a fraction of the inertia, which is the mass for a spring and the inductance for an electric circuit. Hence, we would write the drag coefficient for a spring as γ·m, and the resistance as R = γ·L, respectively.
[15] Photons are emitted by atomic oscillators: atoms going from one state (energy level) to another. Feynman (Lectures, I-33-3) shows us how to calculate the Q of these atomic oscillators: it is of the order of 10⁸, which means the wave train will last about 10⁻⁸ seconds (to be precise, that is the time it takes for the radiation to die out by a factor 1/e). For example, for sodium light, the radiation will last about 3.2×10⁻⁸ seconds (this is the so-called decay time τ). Now, because the frequency of sodium light is some 500 THz (500×10¹² oscillations per second), this makes for some 16 million oscillations. There is an interesting paradox here: the speed of light tells us that such a wave train will have a length of about 9.6 m! How is that to be reconciled with the pointlike nature of a photon? The paradox can only be explained by relativistic length contraction: in an analysis like this, one needs to distinguish the reference frame of the photon – riding along the wave as it is being emitted, so to speak – and our stationary reference frame, which is that of the emitting atom.
[16] This is a general result and is reflected in the K.E. = T = (1/2)·m·ω²·a²·sin²(ω·t + Δ) and the P.E. = U = k·x²/2 = (1/2)·m·ω²·a²·cos²(ω·t + Δ) formulas for the linear oscillator.
[17] Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. The analysis is centered on the local conservation of energy, which confirms the interpretation of Schrödinger’s equation as an energy diffusion equation.
[18] The meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m. Appendix 2 provides some additional notes on the concept. As for the equations, they are easily derived from noting that two complex numbers a + i∙b and c + i∙d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i∙(1/2)∙(ħ/meff)∙∇²ψ equation amounts to writing something like this: a + i∙b = i∙(c + i∙d). Now, remembering that i² = −1, you can easily figure out that i∙(c + i∙d) = i∙c + i²∙d = −d + i∙c.
[19] The dimension of B is usually written as N/(m∙A), using the SI unit for current, i.e. the ampere (A). However, 1 C = 1 A∙s and, hence, 1 N/(m∙A) = 1 (N/C)/(m/s).
[20] Of course, multiplication with i amounts to a counterclockwise rotation. Hence, multiplication by –i also amounts to a rotation by 90 degrees, but clockwise. Now, to uniquely identify the clockwise and counterclockwise directions, we need to establish the equivalent of the right-hand rule for a proper geometric interpretation of Schrödinger’s equation in three-dimensional space: if we look at a clock from the back, then its hand will be moving counterclockwise. When writing B = (1/c)∙iE, we assume we are looking in the negative x-direction. If we are looking in the positive x-direction, we should write: B = -(1/c)∙iE. Of course, Nature does not care about our conventions. Hence, both should give the same results in calculations. We will show in a moment they do.
[21] In fact, when multiplying C²/(N·m²) with N²/C², we get N/m², but we can multiply this with 1 = m/m to get the desired result. It is significant that an energy density (joule per unit volume) can also be measured in newton per square meter (force per unit area).
[22] The illustration shows a linearly polarized wave, but the obtained result is general.
[23] The sine and cosine are essentially the same functions, except for the difference in the phase: sinθ = cos(θ − π/2).
[24] I must thank a physics blogger for re-writing the 1/(ε0·μ0) = c2 equation like this. See: http://reciprocal.systems/phpBB3/viewtopic.php?t=236 (retrieved on 29 September 2017).
[25] A circularly polarized electromagnetic wave may be analyzed as consisting of two perpendicular electromagnetic plane waves of equal amplitude and 90° difference in phase.
[26] Of course, the reader will now wonder: what about neutrons? How to explain neutron spin? Neutrons are neutral. That is correct, but neutrons are not elementary: they consist of (charged) quarks. Hence, neutron spin can (or should) be explained by the spin of the underlying quarks.
[27] We detailed the mathematical framework and detailed calculations in the following online article: https://readingfeynman.org/2017/09/15/the-principle-of-least-action-re-visited.
[28] https://en.wikipedia.org/wiki/Electron_rest_mass (retrieved on 29 September 2017).
The blackbody radiation problem revisited: quantum statistics
The equipartition theorem – which states that the energy levels of the modes of any (linear) system, in classical as well as in quantum physics, are always equally spaced – is deep and fundamental in physics. In my previous post, I presented this theorem in a very general and non-technical way: I did not use any exponentials, complex numbers or integrals. Just simple arithmetic. Let’s go a little bit beyond now, and use it to analyze that blackbody radiation problem which bothered 19th century physicists, and which led Planck to ‘discover’ quantum physics. [Note that, once again, I won’t use any complex numbers or integrals in this post, so my kids should actually be able to read through it.]
Before we start, let’s quickly introduce the model again. What are we talking about? What’s the black box? The idea is that we add heat to atoms (or molecules) in a gas. The heat results in the atoms acquiring kinetic energy, and the kinetic theory of gases tells us that the mean value of the kinetic energy for each independent direction of motion will be equal to kT/2. The blackbody radiation model analyzes the atoms (or molecules) in a gas as atomic oscillators. Oscillators have both kinetic as well as potential energy and, on average, the kinetic and potential energy is the same. Hence, the energy in the oscillation is twice the kinetic energy, so its average energy is 〈E〉 = 2·kT/2 = kT. However, oscillating atoms implies oscillating electric charges. Now, electric charges going up and down radiate light and, hence, as light is emitted, energy flows away.
How exactly? It doesn’t matter. It is worth noting that 19th century physicists had no idea about the inner structure of an atom. In fact, at that time, the term electron had not yet been invented: the first atomic model involving electrons was the so-called plum pudding model, which J.J. Thomson advanced in 1904, and he called electrons “negative corpuscles“. And the Rutherford-Bohr model, which is the first model one can actually use to explain how and why excited atoms radiate light, came in 1913 only, so that’s long after Planck’s solution for the blackbody radiation problem, which he presented to the scientific community in December 1900. It’s really true: it doesn’t matter. We don’t need to know about the specifics. The general idea is all that matters. As Feynman puts it: it’s how “A hot stove cools on a cold night, by radiating the light into the sky, because the atoms are jiggling their charge and they continually radiate, and slowly, because of this radiation, the jiggling motion slows down.” 🙂
His subsequent description of the black box is equally simple: “If we enclose the whole thing in a box so that the light does not go away to infinity, then we can eventually get thermal equilibrium. We may either put the gas in a box where we can say that there are other radiators in the box walls sending light back or, to take a nicer example, we may suppose the box has mirror walls. It is easier to think about that case. Thus we assume that all the radiation that goes out from the oscillator keeps running around in the box. Then, of course, it is true that the oscillator starts to radiate, but pretty soon it can maintain its kT of energy in spite of the fact that it is radiating, because it is being illuminated, we may say, by its own light reflected from the walls of the box. That is, after a while there is a great deal of light rushing around in the box, and although the oscillator is radiating some, the light comes back and returns some of the energy that was radiated.”
So… That’s the model. Don’t you just love the simplicity of the narrative here? 🙂 Feynman then derives Rayleigh’s Law, which gives us the frequency spectrum of blackbody radiation as predicted by classical theory, i.e. the intensity (I) of the light as a function of (a) its (angular) frequency (ω) and (b) the average energy of the oscillators, which is nothing but the temperature of the gas (Boltzmann’s constant k is just what it is: a proportionality constant which makes the units come out alright). The other stuff in the formula, given hereunder, are just more constants (and, yes, the c is the speed of light!). The grand result is:

I(ω) = ω²·kT/(π²·c²)
The formula looks formidable but the function is actually very simple: it’s quadratic in ω and linear in 〈E〉 = kT. The rest is just a bunch of constants which ensure all of the units we use to measure stuff come out alright. As you may suspect, the derivation of the formula is not so simple as the narrative of the black box model, and so I won’t copy it here (you can check it for yourself). Indeed, let’s focus on the results, not on the technicalities. Let’s have a look at the graph.
The I(ω) graphs for T = T0 and T = 2T0 are given by the solid black curves. They tell us how much light we should have at different frequencies. They just go up and up and up, so Rayleigh’s Law implies that, when we open our stove – and, yes, I know, some kids don’t know what a stove is – and take a look, we should burn our eyes from x-rays. We know that’s not the case, in reality, so our theory must be wrong. An even bigger problem is that the curve implies that the total energy in the box, i.e. the total of all this intensity summed up over all frequencies, is infinite: we’ve got an infinite curve here indeed, and so an infinite area under it. Therefore, as Feynman puts it: “Rayleigh’s Law is fundamentally, powerfully, and absolutely wrong.” The actual graphs, indeed, are the dashed curves. I’ll come back to them.
The blackbody radiation problem is history, of course. So it’s no longer a problem. Let’s see how Planck solved it. We assume our oscillators can only take on equally spaced energy levels, with the spacing between them equal to h·f = ħ·ω. The frequency f (or ω = 2π·f) is the fundamental frequency of our oscillator, and you know h and ħ = h/2π, of course: Planck’s constant. Hence, the various energy levels are given by the following formula: En = n·ħ·ω = n·h·f. The first five are depicted below.
Next to the energy levels, we write the probability of an oscillator occupying that energy level, which is given by Boltzmann’s Law. I wrote about Boltzmann’s Law in another post too, so I won’t repeat myself here, except for noting that Boltzmann’s Law says that the probabilities of different conditions of energy are given by e^(−energy/kT) = 1/e^(energy/kT). Different ‘conditions of energy’ can be anything: density, molecular speeds, momenta, whatever. Here we have a probability Pn as a function of the energy En = n·ħ·ω, so we write: Pn = A·e^(−En/kT) = A·e^(−n·ħ·ω/kT). [Note that P0 is equal to A, as a consequence.]
Now, we need to determine how many oscillators we have in each of the various energy states, so that’s N0, N1, N2, etcetera. We’ve done that before: N1/N0 = P1/P0 = (A·e^(−ħω/kT))/A = e^(−ħω/kT). Hence, N1 = N0·e^(−ħω/kT). Likewise, it’s not difficult to see that N2 = N0·e^(−2ħω/kT) or, more generally, that Nn = N0·e^(−nħω/kT) = N0·[e^(−ħω/kT)]^n. To make the calculations somewhat easier, Feynman temporarily writes x for e^(−ħω/kT). Hence, we write: N1 = N0·x, N2 = N0·x², …, Nn = N0·x^n, and the total number of oscillators is obviously Ntot = N0+N1+…+Nn+… = N0·(1+x+x²+…+x^n+…).
What about their energy? The energy of all oscillators in state 0 is, obviously, zero. The energy of all oscillators in state 1 is N1·ħω = ħω·N0·x. For state 2 it is N2·2·ħω = 2·ħω·N0·x². More generally, the energy of all oscillators in state n is equal to Nn·n·ħω = n·ħω·N0·x^n. So now we can write the total energy of the whole system as Etot = E0+E1+…+En+… = 0+ħω·N0·x+2·ħω·N0·x²+…+n·ħω·N0·x^n+… = ħω·N0·(x+2x²+…+nx^n+…). The average energy of one oscillator, for the whole system, is therefore:
Now, Feynman leaves the exercise of simplifying that expression to the reader and just says it’s equal to 〈E〉 = ħω/(e^(ħω/kT) − 1).
I should try to figure out how he does that. It’s something like Horner’s rule, but that’s not easy with infinite polynomials. Or perhaps it’s just some clever way of factoring both polynomials. I didn’t break my head over it but just checked whether the result is correct. [I don’t think Feynman would dare to joke here, but one can never be sure with him, it seems. :-)] Note that he substituted x for e^(−ħω/kT), not e^(+ħω/kT), so there is a minus sign in the exponent, which we don’t have in the formula above. Hence, the denominator e^(ħω/kT)−1 = (1/x)−1 = (1−x)/x, and 1/(e^(ħω/kT)−1) = x/(1−x). Now, if (x+2x²+…+nx^n+…)/(1+x+x²+…+x^n+…) = x/(1−x), then (x+2x²+…+nx^n+…)·(1−x) must be equal to x·(1+x+x²+…+x^n+…). Just write it out: (x+2x²+…+nx^n+…)·(1−x) = x+2x²+…+nx^n+…−x²−2x³−…−nx^(n+1)−… = x+x²+…+x^n+… Likewise, we get x·(1+x+x²+…+x^n+…) = x+x²+…+x^n+… So, yes, done.
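For the record, the standard trick for that closed form (my addition here, not Feynman’s text) is to differentiate the geometric series: 1 + x + x² + … = 1/(1−x), so x + 2x² + 3x³ + … = x/(1−x)². Dividing the second sum by the first gives x/(1−x), and multiplying by ħω and substituting x = e^(−ħω/kT) yields 〈E〉 = ħω·x/(1−x) = ħω/(e^(ħω/kT)−1), exactly as announced.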
Now comes the Big Trick, the rabbit out of the hat, so to speak. 🙂 We’re going to replace the classical expression for 〈E〉 (i.e. kT) in Rayleigh’s Law with its quantum-mechanical equivalent (i.e. 〈E〉 = ħω/[e^(ħω/kT)−1]).
What’s the logic behind this? Rayleigh’s Law gave the intensity for the various frequencies that are present as a function of (a) the frequency (of course!) and (b) the average energy of the oscillators, which is kT according to classical theory. Now, our assumption that an oscillator cannot take on just any energy value but that the energy levels are equally spaced, combined with Boltzmann’s Law, gives us a very different formula for the average energy: it’s a function of the temperature, but it’s a function of the fundamental frequency too! I copied the graph below from the Wikipedia article on the equipartition theorem. The black line is the classical value for the average energy as a function of the thermal energy. As you can see, it’s one and the same thing, really (look at the scales: they happen to be both logarithmic, but that’s just to make them more ‘readable’). Its quantum-mechanical equivalent is the red curve. At higher temperatures, the two agree nearly perfectly, but at low temperatures (with low being defined as the range where kT << ħ·ω, written as h·ν in the graph), the quantum-mechanical value decreases much more rapidly. [Note the energy is measured in units equivalent to h·ν: that’s a nice way to sort of ‘normalize’ things so as to compare them.]
So, without further ado, let’s take Rayleigh’s Law again and just replace kT (i.e. the classical formula for the average energy) with the ‘quantum-mechanical’ formula for 〈E〉, i.e. ħω/[e^(ħω/kT)−1]. Adding the dω factor to emphasize we’re talking about a continuous distribution here, we get the even grander result (Feynman calls it the first quantum-mechanical formula ever known or discussed):
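Again, the image with the formula is missing, so here it is written out (same caveat on the constants as above):

I(ω)·dω = [ħω³/(π²·c²)]·dω/(e^(ħω/kT)−1)

Note that for small ħω/kT the denominator approaches ħω/kT, and we recover Rayleigh’s Law, as we should.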
So this function is the dashed I(ω) curve (I copied the graph below again): this curve does not ‘blow up’. The math behind the curve is the following: while the ω³ factor in the numerator ‘blows up’ for large ω, Euler’s number is raised to a tremendous power in the denominator, and the exponential wins. Therefore, the curves come down again, and we don’t get those incredible amounts of UV light and x-rays.
So… That’s how Max Planck solved the problem and how he became the ‘reluctant father of quantum mechanics.’ The formula is not as simple as Rayleigh’s Law (we have a cubic function in the numerator, and an exponential in the denominator), but its advantage is that it’s correct. When everything is said and done, we do want our formulas to describe something real, don’t we? 🙂
Let me conclude by looking at that ‘quantum-mechanical’ formula for the average energy once more:
〈E〉 = ħω/[e^(ħω/kT) − 1]
It’s not a distribution function (the formula for I(ω) is the distribution function), but the −1 term in the denominator already tells us we’re talking Bose-Einstein statistics. In my post on quantum statistics, I compared the three distribution functions. Let’s quickly look at them again:
• Maxwell-Boltzmann (for classical particles): f(E) = 1/[A·e^(E/kT)]
• Fermi-Dirac (for fermions): f(E) = 1/[A·e^(E/kT) + 1]
• Bose-Einstein (for bosons): f(E) = 1/[A·e^(E/kT) − 1]
So here we simply substitute ħω for E, which makes sense, as the Planck-Einstein relation tells us that the energy of the particles involved is, indeed, equal to E = ħω. Below, you’ll find the graph of these three functions, first as a function of E, so that’s f(E), and then as a function of T, so that’s f(T) (or f(kT) if you want).
The first graph, for which E is the variable, is the more usual one. As for the interpretation, you can see what’s going on: bosonic particles (or bosons, I should say) will crowd the lower energy levels (the associated probabilities are much higher indeed), while for fermions, it’s the opposite: they don’t want to crowd together and, hence, the associated probabilities are much lower. So fermions will spread themselves over the various energy levels. The distribution for ‘classical’ particles is somewhere in the middle.
In that post of mine, I gave an actual example involving nine particles and the various patterns that are possible, so you can have a look there. Here I just want to note that the math behind them is easy to understand once we drop the A (that’s just another normalization constant anyway) and re-write the formulas as follows:
• Maxwell-Boltzmann (for classical particles): f(E) = e^(−E/kT)
• Fermi-Dirac (for fermions): f(E) = e^(−E/kT)/[1 + e^(−E/kT)]
• Bose-Einstein (for bosons): f(E) = e^(−E/kT)/[1 − e^(−E/kT)]
Just use Feynman’s substitution x = e^(−E/kT): the Bose-Einstein distribution then becomes 1/[(1/x)−1] = 1/[(1−x)/x] = x/(1−x). Now it’s easy to see that the denominators of both the Fermi-Dirac and the Bose-Einstein formulas approach 1 (i.e. the ‘denominator’ of the Maxwell-Boltzmann formula) as e^(−E/kT) approaches zero, i.e. as E becomes larger and larger. Hence, for higher energy levels, the probability densities of the three functions approach each other, as they should.
Now what’s the second graph about? Here we’re looking at one energy level only, but we let the temperature vary from 0 to infinity. The graph says that, at low temperature, the probabilities will also be more or less the same, and the three distributions only differ at higher temperatures. That makes sense too, of course!
Well… That says it all, I guess. I hope you enjoyed this post. As I’ve sort of concluded Volume I of Feynman’s Lectures with this, I’ll be silent for a while… […] Or so I think. 🙂
|
# wikiHow to Square Fractions
Squaring fractions is one of the simplest operations you can perform on fractions. It is very similar to squaring whole numbers in that you simply multiply the numerator by itself and the denominator by itself.[1] There are also some instances in which simplifying the fraction before squaring makes the process easier. If you haven't yet learned this skill, this article provides an easy overview that will quickly improve your understanding.
### Part 1 Squaring Fractions
1. Understand how to square whole numbers. When you see an exponent of two, you know that you need to square the number. To square a whole number, you multiply it by itself.[2] For example:
• 5² = 5 × 5 = 25
2. Realize that squaring fractions works the same way. To square a fraction, you multiply the fraction by itself. Another way to think about it is to multiply the numerator by itself and then the denominator by itself.[3] For example:
• (5/2)² = 5/2 × 5/2 or (5²/2²).
• Squaring each number yields 25/4.
3. Multiply the numerator by itself and the denominator by itself. The actual order in which you multiply these numbers by themselves doesn’t matter, as long as you square both of them. To keep things simple, start with the numerator: simply multiply it by itself. Then, multiply the denominator by itself.
• The numerator will stay on top of the fraction and the denominator will stay at the bottom of the fraction.
• For example: (5/2)² = (5 × 5)/(2 × 2) = 25/4.
4. Simplify the fraction to finish. When working with fractions, the last step is always to reduce the fraction to its simplest form or to turn an improper fraction into a mixed number.[4] For our example, 25/4 is an improper fraction because the numerator is larger than the denominator.
• To convert to a mixed number, divide 4 into 25. It goes in 6 times (6 × 4 = 24) with 1 left over. Therefore, the mixed number is 6 1/4.
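As an optional aside (my addition, not part of the wikiHow method), you can double-check this kind of arithmetic with Python's built-in fractions module:

from fractions import Fraction

f = Fraction(5, 2)
sq = f ** 2                                  # squares numerator and denominator: 25/4
print(sq)                                    # 25/4
print(divmod(sq.numerator, sq.denominator))  # (6, 1), i.e. the mixed number 6 1/4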
### Part 2 Squaring Fractions with Negative Numbers
1. Recognize the negative sign in front of the fraction. If you are working with a negative fraction, it will have a minus sign in front of it. It is good practice to always put parentheses around a negative number so you know that the “–” sign is referring to the number and not telling you to subtract two numbers.[5]
• For example: (–2/4)
2. Multiply the fraction by itself. Square the fraction as you would normally, by multiplying the numerator by itself and then multiplying the denominator by itself. Alternatively, you can simply multiply the fraction by itself.
• For example: (–2/4)² = (–2/4) × (–2/4)
3. Understand that two negative numbers multiply to make a positive number. When a minus sign is present, the entire fraction is negative. When you square the fraction, you are multiplying two negative numbers together. Whenever two negative numbers are multiplied together, they make a positive number.[6]
• For example: (−2) × (−8) = (+16)
4. Remove the negative sign after squaring. After you have squared the fraction, you will have multiplied two negative numbers together. This means that the squared fraction will be positive. Be sure to write your final answer without the negative sign.[7]
• Continuing the example, the resulting fraction will be a positive number.
• (–2/4) × (–2/4) = (+4/16)
• Generally, the convention is to drop the “+” sign for positive numbers.[8]
5. Reduce the fraction to its simplest form. The final step when doing any calculations with a fraction is to reduce it. Improper fractions must first be simplified into mixed numbers and then reduced.
• For example: (4/16) has a common factor of four.
• Divide the fraction through by 4: 4/4 = 1, 16/4 = 4
• Rewrite the simplified fraction: (1/4)
### Part 3 Using Simplifications and Shortcuts
1. Check to see if you can simplify the fraction before you square it. It is usually easier to reduce fractions before squaring them. Remember, to reduce a fraction means to divide it by a common factor until one is the only number that can be evenly divided into both the numerator and denominator.[9] Reducing the fraction first means you don’t have to reduce it at the end, when the numbers will be larger.
• For example: (12/16)²
• 12 and 16 can both be divided by 4. 12/4 = 3 and 16/4 = 4; therefore, 12/16 reduces to 3/4.
• Now, you will square the fraction 3/4.
• (3/4)² = 9/16, which cannot be reduced.
• To prove this, let’s square the original fraction without reducing:
• (12/16)² = (12 × 12)/(16 × 16) = 144/256
• (144/256) has a common factor of 16. Dividing both the numerator and denominator by 16 reduces the fraction to (9/16), the same fraction we got from reducing first.
2. Learn to recognize when you should wait to reduce a fraction. When working with more complex equations, you may be able to simply cancel one of the factors. In this case, it is actually easier to wait before you reduce the fraction. Adding an extra factor to the above example illustrates this.
• For example: 16 × (12/16)²
• Expand out the square: 16 × 12/16 × 12/16
• Because there is one factor of 16 in front and two 16’s in the denominators, you can cancel ONE of them out.
• Rewrite the simplified equation: 12 × 12/16
• Reduce 12/16 by dividing through by 4: 3/4
• Multiply: 12 × 3/4 = 36/4
• Divide: 36/4 = 9
3. Understand how to use an exponent shortcut. Another way to solve the same example is to simplify the exponent first. The end result is the same; it’s just a different way to solve it.
• For example: 16 × (12/16)²
• Rewrite with the numerator and denominator squared: 16 × (12²/16²)
• Drop the parentheses: 16 × 12²/16²
• Imagine the first 16 has an exponent of 1: 16¹. Using the exponent rule for dividing numbers, you subtract the exponents: 16¹/16² yields 16^(1−2) = 16^(−1), or 1/16.
• Now, you are working with: 12²/16
• Rewrite and reduce the fraction: 12 × 12/16 = 12 × 3/4.
• Multiply: 12 × 3/4 = 36/4
• Divide: 36/4 = 9
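The same optional Python check (again, my illustration rather than part of the article) confirms that reducing first gives the same answer as squaring first in the 12/16 example:

from fractions import Fraction

print(Fraction(12, 16) ** 2)  # 9/16 -- Fraction reduces 12/16 to 3/4 automatically
print(Fraction(144, 256))     # 9/16 -- squaring first and then reducing agrees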
## Community Q&A
• How do I use exponential notation for a fraction raised to a power?
wikiHow Contributor
Use parentheses around the fraction, and ^ to raise it to any power. For example: one third, squared: (1/3)^2.
• How do I calculate (1 − 4.4)² (to the power of 2)?
wikiHow Contributor
Do 1 − 4.4 (what's in the parentheses) first, which gives you −3.4. You then need (−3.4)², and to work with it as a fraction you write 3.4 as 34 over 10 [(34/10)²]. Divide both the numerator and the denominator by 2 to simplify; that leaves you with 17 over 5 (17/5). Bring the square down to each part: 17² over 5². Do the math; you end up with 289 over 25 (289/25). Simplified, that gives you 11 and 14/25, or 11.56 in decimal form.
• How can I find square of a mixed fraction?
Convert the mixed number to an improper fraction. Then, square both the numerator and the denominator. The new fraction will be improper. You can convert it to a mixed number if you like.
• Is the cube of a fraction greater than its square?
Yes, if it's an improper fraction.
• How do I solve an equation with six kinds of fractions?
wikiHow Contributor
If the denominators are different, find the LCM and solve.
• How do I square and cube mixed numbers?
Convert the mixed number to an improper fraction. Then multiply the improper fraction by itself once (for squaring) or twice (for cubing).
## Things You'll Need
• Paper or screen for working on
• Pencil/Pen (for use w/ paper)
|
bash can do TCP and UDP sockets. They are represented as a special file in /dev, like
/dev/tcp/192.168.0.1/1234
/dev/udp/192.168.0.2/2345
even though those /dev files do not exist. However, bash can only make active (outgoing) connections; it cannot listen. That is, we can use this as a replacement for telnet. Here is an example:
On machine A, run netcat (say) and listen on a port:

machine_a$ nc -l -p 1234

On machine B, for convenience, bind the socket to a file descriptor, then cat the descriptor to stdout and, at the same time, cat stdin to the descriptor:

machine_b$ exec 9<>/dev/tcp/machine_a/1234
machine_b$ cat <&9 &
[1] 6543
machine_b$ cat >&9
Then machine_a and machine_b are connected.
|
# AP Board 7th Class Maths Solutions Chapter 2 Fractions, Decimals and Rational Numbers Ex 5
AP State Syllabus AP Board 7th Class Maths Solutions Chapter 2 Fractions, Decimals and Rational Numbers Ex 5 Textbook Questions and Answers.
## AP State Syllabus 7th Class Maths Solutions 2nd Lesson Fractions, Decimals and Rational Numbers Exercise 5
Question 1.
Which one is greater?
(i) 0.7 or 0.07
(ii) 7 or 8.5
(iii) 1.47 or 1.51
(iv) 6 or 0.66
1 cm = 10 mm
1 m = 100 cm
1 km = 1000 m
1 kg = 1000 gm
Solution:
i) 0.7 or 0.07 = 0.7 is greater
ii) 7 or 8.5= 8.5 is greater
iii) 1.47 or 1.51 = 1.51 is greater
iv) 6 or 0.66 = 6 is greater
Question 2.
Express the following as rupees using decimals.
(i) 9 paise
(ii) 77 rupees 7 paise
(iii) 235 paise
Solution:
(i) 9 paise = $$\frac { 9 }{ 100 }$$ = ₹ 0.09
(ii) 77 rupees 7 paise = 77 rupees $$\frac { 7 }{ 100 }$$ rupees = ₹ 77.07
(iii) 235 paise = ₹ $$\frac{235}{100}$$ = ₹ 2.35
Question 3.
(i) Express 10 cm in metre and kilometre.
(ii) Express 45 mm in centimeter, meter and kilometer.
Solution:
i) 10cm = $$\frac{10}{100}$$ m = 0.1 m
10 cm = $$\frac{10}{100 \times 1000}$$ km = 0.0001 km
ii) 45 mm = $$\frac{45}{10}$$ cm = 4.5 cm
= $$\frac{4.5}{100}$$ m = 0.045 m
= $$\frac{0.045}{1000}$$ km = 0.000045 km
Question 4.
Express the following in kilograms.
(i) 190g
(ii) 247g
(iii) 44kg 80gm
Solution:
(i) 190g = $$\frac{190}{1000}$$ = 0.190 kg
(ii) 247g = $$\frac{247}{1000}$$ kg = 0.247 kg
(iii) 44kg 80gm = 44 kg $$\frac{80}{1000}$$ kg = 44.080kg
Question 5.
Write the following decimal numbers in expanded form.
(i) 55.5
(ii) 5.55
(iii) 303.03
(iv) 30.303
(v) 1234.56
Solution:
(i) 55.5 = 10 × 5 + 1 × 5 + $$\frac{1}{10}$$ × 5 = 50 + 5 + $$\frac{5}{10}$$
(ii) 5.55 = 1 × 5 + $$\frac{1}{10}$$ × 5 + $$\frac{1}{100}$$ × 5 = 5 + $$\frac{5}{10}+\frac{5}{100}$$
(iii) 303.03 = 100 × 3 + 1 × 3 + $$\frac{1}{100}$$ × 3 = 300 + 3 + $$\frac{3}{100}$$
(iv) 30.303 = 10 × 3 + $$\frac{1}{10}$$ x 3 + $$\frac{1}{1000}$$ x 3 = 30 + $$\frac{3}{10}+\frac{3}{1000}$$
(v) 1234.56 = 1000 × 1 + 100 × 2 + 10 × 3 + 1 × 4 + $$\frac{1}{10}$$ × 5 + $$\frac{1}{100}$$ × 6 = 1000 + 200 + 30 + 4 + $$\frac{5}{10}+\frac{6}{100}$$
Question 6.
Write the place value of 3 in the following decimal numbers.
(i) 3.46
(ii) 32.46
(iii) 7.43
(iv) 90.30
(v) 794.037
Solution:
i) 3.46 – place value of 3 in 3.46 is 3 × 1 = 3
ii) 32.46 – place value of 3 in 32.46 is 3 × 10 = 30
iii) 7.43- place value of 3 in 7.43 is 3 × $$\frac{1}{100}$$ = 0.03
iv) 90.30- place value of 3 in 90.30 is 3 × $$\frac{1}{10}$$ = 0.3
v) 794.037 – place value of 3 in 794.037 is 3 × $$\frac{1}{100}$$ = 0.03
Question 7.
Aruna and Radha start their journey from two different places, A and E. Aruna chose the path from A to B and then to C, while Radha chose the path from E to D and then to C. Find who travelled more, and by how much.
Solution:
Distance covered by Aruna = AB + BC
= 9.50 + 2.40 = 11.90 km
Distance covered by Radha = ED + DC
= 8.25 + 3.75 = 12 km
Radha travelled more by (12.00 – 11.90) = 0.10 km.
Question 8.
Upendra went to the market to buy vegetables. He bought 2 kg 250 gm tomatoes, 2 kg 500 gm potatoes, 750 gm lady fingers and 125 gm green chillies. How much weight did Upendra carry back to his house?
Solution:
Total weight = 2.250 + 2.500 + 0.750 + 0.125 = 5.625 kg
Upendra carried back a total weight of 5.625 kg.
|
# Complex manifold with boundary
My question is of local nature.
Let $$f:\mathbb C^n\to\mathbb R$$ be a $$C^\infty$$ function that vanishes at $$0\in \mathbb C^n$$, with non-zero derivative.
Then, around $$0\in \mathbb C^n$$, $$M:=f^{-1}(0)$$ is a CR manifold. Let me assume that $$M$$ is the simplest possible kind of CR manifold, namely that it is foliated by real-codimension-one complex submanifolds.
[Equivalently, for those who don't know what CR manifolds are, consider the hyperplane distribution $$L:=TM\cap i\cdot TM\subset TM$$. I require the distribution $$L$$ to be integrable, i.e., to come from a (real codimension $$1$$) foliation of $$M$$.]
Under the above assumptions, is $$f^{-1}\big([0,\infty)\big)$$ locally isomorphic to $$\big\{(z_1,...z_n)\in\mathbb C^n\,:\,\mathrm{im}(z_1)\ge 0\big\}?$$
I.e., does there exist a neighbourhood $$U\subset f^{-1}([0,\infty))$$ of zero and an isomorphism $$\varphi:U\to \big\{z\in\mathbb C^n\,:\,\sum|z_i|^2<1,\,\mathrm{im}(z_1)\ge 0\big\}$$ which is holomorphic in the interior and smooth all the way to the boundary?
Perhaps Giuseppe Della Sala's paper might be useful here: https://www.ams.org/journals/proc/2011-139-07/S0002-9939-2010-10746-3/home.html
It deals precisely with the equivalence of smooth Levi-flats. There are examples in the paper.
If $$M$$ is real analytic then Élie Cartan proved that, in suitable holomorphic coordinates, $$M$$ is cut out by the imaginary part of $$z_1$$. I learned this from the paper https://hal.archives-ouvertes.fr/hal-00459323.
Look for Levi flat hypersurfaces and you will find a lot of literature on the topic.
I believe you are asking whether the foliation by codimension-1 complex leaves tangent to $$L$$ can be straightened. It appears that the answer in general is No, as discussed (with examples) in
Freeman, Michael, Local biholomorphic straightening of real submanifolds, Ann. Math. (2) 106, 319-352 (1977). ZBL0372.32005, MR463480.
• The question of straightening seems indeed related to my question, at least when the manifold $M$ is real analytic (when $M$ is not real analytic, the case of $n=1$ already shows that straightening is not always possible, whereas my question always has a positive answer by the Riemann mapping theorem). Unfortunately, the paper you link doesn't seem to focus on the case when $M$ is a hypersurface, which makes it a bit difficult for me to find the most relevant parts... You claim that there's a counterexample to my question in that paper. Where is that counterexample? – André Henriques Apr 30 '19 at 22:08
• Actually, isn't Thm 3.3(A) of the paper you link a positive answer to my question when $M$ is real analytic? What makes you say that the answer is negative? – André Henriques Apr 30 '19 at 22:18
• I must admit that I did get lost in the weeds somewhat when trying the wrap my head around Freeman's results. The negative result seems to be for the general situation where $M$ is locally foliated by $k$-complex-dimensional leaves, where $k$ need not be maximal. The supporting examples are in Sec.5 of the paper. But it is possible that I didn't properly read the caveats about some special cases, like $k$ being maximal as in your question, where no examples are possible. – Igor Khavkine Apr 30 '19 at 23:32
|
Q)
# X-rays cannot be diffracted by means of an ordinary grating because of their
A. high speed
B. large wavelength
C. short wavelength
D. none of these

(The answer is C, short wavelength: X-ray wavelengths, on the order of 0.1 nm, are far smaller than the line spacing of any ordinary optical grating, so such a grating cannot diffract them appreciably.)
|
# Math Help - simple differential equation with trig functions
1. ## simple differential equation with trig functions
This is not a homework problem or anything; I'm just doing it for fun and would like to know why I'm not getting what the back of the book gets. It seems really simple. Well, here it is; just solving the diff. eq.:
3(e^x)(tan(y)) dx = (1-(e^x))(sec^2(y)) dy
So after rearranging and integrating I get
3ln|1-(e^x)|+ ln|tan(y)| = c
and the book has for a solution;
tan(y)*((1-(e^x))^3) = c
So I'm confused. How can I find out if these are equivalent? Besides, I don't even see how they get that for an answer.
2. ## Re: simple differential equation with trig functions
Oh... could it be they turned it from a natural log problem to an exponential one?
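For the record, that is exactly it. Exponentiating both sides of the first result:

3·ln|1−e^x| + ln|tan(y)| = c
ln|(1−e^x)³·tan(y)| = c
(1−e^x)³·tan(y) = e^c = C,

where C = e^c is just a new arbitrary constant. That is the book's answer.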
|
# Problem: Carbon dioxide, CO2, in the form of dry ice would be classified as: A. an ionic solid, B. a polymeric solid, C. a molecular solid, D. a network solid
###### Problem Details
Carbon dioxide, CO2, in the form of dry ice would be classified as
A. an ionic solid
B. a polymeric solid
C. a molecular solid
D. a network solid

Answer: dry ice consists of discrete CO2 molecules held together only by weak intermolecular (dispersion) forces, so it is classified as C, a molecular solid.
|
# Help! - Combinatorics causes heavy computing
1. May 28, 2008
### einarbmag
I have n comparable sets of data, with several thousand numbered values in each set. A certain number can be calculated by choosing m of these sets; let's call it S_m. Now, the value of S_m depends on which sets I choose, so it can be thought of as a function of m variables which can only take discrete values, no two the same (corresponding to the sets chosen).
My problem is to for a certain m find the combination/s which give/s me the max/min of S_m.
It can be done by calculating S_m for every combination, but this quickly amounts to a huge number of calculations, most of all for m = n/2. I therefore probably need some algorithm for reaching the right combination.
Any ideas?
2. May 28, 2008
### Hurkyl
Staff Emeritus
With the information you've given us, any (correct) algorithm requires you to try every combination. To do better than that, you will need to use knowledge of the specific function you're trying to maximize.
3. May 28, 2008
### einarbmag
Ok, the sets are columns of values, and when we choose m sets we put the columns together in a matrix.
Now, S_m = a/b, where a is the largest row-sum and b is the sum of the maxima of each column.
4. May 28, 2008
### CRGreathouse
How is the row sum defined? $$S_2(\{1,2\},\{3,4\})$$ can involve
$$\left[\begin{array}{cc} 1&3\\ 2&4\end{array}\right]$$ or $$\left[\begin{array}{cc} 2&3\\ 1&4\end{array}\right]$$
which have maximum row sums 6 and 5, respectively. Since sets (by definition!) don't have order, how do you choose?
5. May 28, 2008
### einarbmag
I guess I was a little mathematically unprecise. (Not a mathematician, almost a physicist)
Lists is probably more precise than sets, the values are in a time sequence and therefore ordered.
By row sum I just mean the sum of the elements in a row, and the maximum row sum is just the largest row sum.
Physical interpretation of the problem: values are recorded in n sites in a time sequence.
Choosing m of the n sites, I need to find out which combination gives me the highest/lowest ratio between (the highest peak of the sum of values) and (the sum of (the highest peak in each site))
6. May 28, 2008
### CRGreathouse
In that case m = n/2 is no longer your worst case; your worst case is m = n (or n - 1). This nearly squares the number of cases!
Can you give a small worked out example so I'm sure I properly understand? For example, with sets {1, 2} and {2, 3}, the matrices for $$S_2$$ could be
$$\left[\begin{array}{cc} 1&2\\ 2&3 \end{array}\right]$$ $$\left[\begin{array}{cc} 1&3\\ 2&2 \end{array}\right]$$ $$\left[\begin{array}{cc} 2&2\\ 1&3 \end{array}\right]$$ $$\left[\begin{array}{cc} 2&3\\ 1&2 \end{array}\right]$$
and the matrices for $$S_1$$
[1 2] [1 3] [2 2] [2 3]
right?
7. May 28, 2008
### einarbmag
I think you are making this more complicated than it is:
I have a table of values, each column (n columns) represents a value recording site, each row all values recorded at a specific time. I pick m of these columns and calculate mentioned S_m for those. Which combination of m columns give me the highest/lowest S_m?
Example:
$$\begin{array}{ccc} 1 & 2 & 2 \\ 2 & 4 & 3 \\ 3 & 1 & 1 \end{array}$$
Let's calculate S_2 for all 3 combinations:
Column 1 and 2:
Maximum row sum is 6, sum of maximum of columns is 7, so S_2=6/7.
columns 1 and 3:
Maximum row sum is 5, sum of maximum of columns is 6, so S_2=5/6.
Columns 2 and 3:
Maximum row sum is 7, sum of maxima of columns is 7, so S_2 = 1.
This should clear it up. Btw, how do I show LaTeX?
8. May 28, 2008
### CRGreathouse
(LaTeX: write display math as [ tex]...[/tex] and inline math as [ itex]...[/itex], without the spaces.)
Okay, that's yet another case (glad I asked for an example!). The bad case there is m = n/3, give or take. I'll have to think about this. What restrictions are there on the values? All positive? Integers or reals? Will they tend to be close in value?
9. May 28, 2008
### einarbmag
All right,
the values are positive reals, and high values tend to happen closely in time over all columns (i.e. in rows close to each other). Values can differ by 1-2 orders of magnitude.
10. May 30, 2008
### einarbmag
It's a tough one, but I really have to find some other method than brute-force. Taking only 10 thousand combinations for each m, having n=50, calculating $S_m$ for all combinations and all m up to 49 takes a few hours. And that doesn't even come close to solving my problem, finding the max/min of $S_m$ for each m. This is partly caused by the vast number of rows that need to be summed up, about 35 thousand.
11. May 30, 2008
### CRGreathouse
At the moment you're talking about a problem that (with n = 50) would take in the neighborhood of 1e51 years, or worst-case (n = 10,000) 1e16122 years. The age of the universe, for comparison, is estimated around 1e10 to 3e10 years.
Would a good estimate be worthwhile? Simulated annealing might be a viable approach if that's the case.
12. May 30, 2008
### einarbmag
I started looking around yesterday for search algorithms and stumbled upon simulated annealing, and it does look slightly promising.
I might try that, but I suspect that to truly solve this in an elegant way I need to come up with something based on how $$S_m$$ is calculated.
13. May 30, 2008
### CRGreathouse
Depending on how layered the samples are (correlation across rows), here's a search approach that may work. For each row, find a suitable average (median or truncated mean). Next, compute for each column the deviation from the mean (sum of absolute or squared error). Starting from those with the least deviation, assign columns alternately to each of two groups. When both are populated with m samples, sum the values and compare. With a certain probability either swap values between lists or swap from a list to one of the unused values. When choosing an unused value, assign high probability to those columns with low deviations and low probabilities to those with high deviations. Slowly lower the probability of choosing an unused column. When testing, keep the best 100 results (or whatever value works) and iterate until the value of S_m is as close to 1 as reasonable.
To find the low value of S_m, instead favor the columns with high variance.
A better system would include the direction of the deviation from the average for each row; this may be difficult if there are many rows.
In any case this method should be tested and refined with 10,000 columns and the results compared with the known best results. When it produces results close to those expected, run it on the full system.
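Here is a minimal sketch of that idea in Python (my illustration, not from the thread: the move set, cooling schedule, and parameters are all placeholders), assuming the readings sit in a NumPy array data with one column per site:

import numpy as np

def S(data, cols):
    # S_m = (largest row sum) / (sum of column maxima) over the chosen columns
    sub = data[:, cols]
    return sub.sum(axis=1).max() / sub.max(axis=0).sum()

def anneal(data, m, steps=20000, t0=0.05, seed=0):
    # search m-column subsets for a large S_m by simulated annealing
    rng = np.random.default_rng(seed)
    n = data.shape[1]
    current = list(rng.choice(n, size=m, replace=False))
    s_cur = S(data, current)
    best, s_best = current[:], s_cur
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9          # linear cooling
        cand = current[:]
        unused = [c for c in range(n) if c not in current]
        if not unused:
            break                                      # m == n: nothing to swap
        cand[rng.integers(m)] = unused[rng.integers(len(unused))]
        s_cand = S(data, cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if s_cand >= s_cur or rng.random() < np.exp((s_cand - s_cur) / t):
            current, s_cur = cand, s_cand
            if s_cur > s_best:
                best, s_best = current[:], s_cur
    return best, s_best

The move here is the simplest possible (swap one chosen column for an unused one); the deviation-weighted proposals described above would slot in where the candidate is drawn, and finding the minimum of S_m just means searching on -S instead.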
|
Chapter 87
Antimony (Sb) and its compounds are among the oldest known remedies in the practice of medicine.82,126 Because of a strong chemical similarity to arsenic, the features of antimony poisoning closely resemble those of arsenic poisoning (see Chap. 88), and antimony poisoning has many features in common with other metal poisonings. Although relatively uncommon, antimony poisoning still occurs, usually as a complication of the treatment of visceral leishmaniasis.75 Acute overdose represents an even rarer but potentially lethal event.112
Objects discovered during exploration of ancient Mesopotamian life (third and fourth millennium BC) suggested that both the Sumerians and the Chaldeans were able to produce pure antimony.82,126 The reference to eye paint in the Old Testament suggested the use of antimony.82 For several thousand years, Asian and Middle Eastern countries used antimony sulfide in the production of cosmetics, including rouge and black paint for eyebrows, also known as kohl or surma.78,83 Because of the scarcity of antimony sulfide, lead replaced antimony as a main component in more modern cosmetic preparations.
One of the first monographs on metals, written in the 16th century, included a description of antimony.118 The medicinal use of antimony for the treatment of syphilis, whooping cough, and gout dates to the medieval period. Paracelsus was credited with establishing antimony compounds as therapeutic agents and increasing their popularity. In spite of being aware of its toxic potential, many of the disciples of Paracelsus enthusiastically continued the use of antimony.82 Various antimony compounds were also used as topical preparations for the treatment of herpes, leprosy, mania, and epilepsy.126 Orally administered tartar emetic (antimony potassium tartrate) was used for treatment of fever, pneumonia, inflammatory conditions, and as a decongestant, emetic, and sedative, but it was abandoned because of its significant toxicity.18,38,54,66 The use of antimony as a homicidal agent113 continued well into the 20th century (Chap. 1).
The current medical use of antimony is limited to the treatments of leishmaniasis and schistosomiasis, and to sporadic use as aversive therapy for substance abuse.112 Pentavalent compounds are used because they are better tolerated. In the endemic regions of the world, generic pentavalent antimonials remain the mainstay of therapy because of their efficacy and low cost; however, the growing incidence of resistance may reduce future use.87
Some contemporary homeopathic49 and anthroposophical107 practices still recommend use of antimonial compounds as home remedies; however, these practices are rare.82,126 In spite of its anticancer effects in vitro,38 there is no current oncologic use of antimony.
The elemental form of antimony has very few industrial uses because of its physical limitations, particularly the fact that it is not malleable. In contrast, its alloys with copper, lead, and tin have important applications. Various antimony compounds can be used in the production of ...
|
# Square root of Diagonal matrix
I cannot find an answer to if it is generally possible to take the square root of a diagonal matrix $$A$$ by taking the square root of each individual component along the main diagonal, e.g. for a 2-by-2 matrix $$\sqrt{A} = \begin{pmatrix} \sqrt{a_1} & 0 \\ 0 & \sqrt{a_2} \\ \end{pmatrix}.$$ Is this OK to do provided that it is a (square) diagonal matrix?
• It is sufficient to observe that $\begin{pmatrix}\sqrt{a_1} & 0 \\ 0 & \sqrt{a_2}\end{pmatrix}\begin{pmatrix}\sqrt{a_1} & 0 \\ 0 & \sqrt{a_2}\end{pmatrix}= \begin{pmatrix}a_1 & 0 \\ 0 & a_2\end{pmatrix}$ – user247327 Oct 21 '18 at 19:50
I assume that you consider matrices with entries in a field $$\mathbb{F}$$. If the square roots $$\sqrt{a_i}$$ exist in $$\mathbb{F}$$, then it is ok. However, a diagonal matrix $$A$$ may have a square root even if the $$a_i$$ do not have square roots in $$\mathbb{F}$$. An example for $$\mathbb{F} = \mathbb{R}$$ is $$A = \begin{pmatrix} -1 & 0 \\ 0 & -1 \\ \end{pmatrix}.$$ In fact, a square root of $$A$$ is given by $$B = \begin{pmatrix} 0 & 1 \\ -1 & 0 \\ \end{pmatrix}.$$
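A quick numeric check of that example (my addition, just NumPy):

import numpy as np

B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(B @ B)  # [[-1. 0.], [0. -1.]] -- so B*B = A, although -1 has no real square root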
• Thanks for your reply! Does field $\mathbb{F}$ imply any special properties, or is it an arbitrary field? – litmus Oct 22 '18 at 7:00
• The field is arbitrary. But the properties of $\mathbb{F}$ determine whether every (diagonal) matrix $A$ has a square root, and whether there exist square roots in diagonal form. For example, the matrix $A$ in my answer has a square root, but no diagonal square root. For $\mathbb{F} = \mathbb{C}$ it has a diagonal square root. – Paul Frost Oct 22 '18 at 12:21
|
# Appearance and fascination
1. Sep 12, 2004
### humanino
Maybe 30% of the Internet consists of porn, to the great despair of Saint :tongue2: (where did he go?) (not that I care)
Even people with respectable occupations such as us in PF must admit it: our most successful thread is the Member Photo one, and we humans are fascinated by physical shape. I was wondering why? :uhh:
We won't get much by seeing each other, except maybe mental association : it is easier to remember pictorial information. But that can't explain the whole fascination here.
Do you have an opinion ?
2. Sep 12, 2004
### Andy
A picture's worth a thousand words.
3. Sep 12, 2004
### Staff: Mentor
My first guess is that it's because PF members are all so good looking.
One of the first things that a baby becomes fascinated with are faces. Manufacturers of baby toys know this and use this in designing toys. Humans tend to seek out the companionship of others and for many people a great part of that is visual contact.
For me, it's nice to put a face to the person I am speaking with. It makes them more "human" and less of an unembodied "voice".
4. Sep 12, 2004
We're predators, among other things. Forward-facing eyes, hands and arms built to apply force and manipulate things in front of those eyes.
5. Sep 12, 2004
### enigma
Staff Emeritus
Humans are extremely social monkeys.
We've got a special place in our brain which does nothing except for recognize faces.
I saw a special on TV (discovery channel, I think) which had two guys with brain damage from accidents. One of them couldn't make the associations between shapes and objects... they showed him a picture of a hairbrush and he described it as a brown blob with a black blob on top of it. When they showed him pictures of people he knew exactly who they were. The other guy could recognize objects with no problem at all, but he couldn't discern people at all... he couldn't even recognize his own face.
It's a mildly scary thought what'll happen if and when we meet an alien race. They most likely won't be able to tell us apart.
6. Sep 12, 2004
Unless they're smarter than the average ox.
7. Sep 12, 2004
### Ivan Seeking
Staff Emeritus
I'm just guessing here but at the most primitive level, the more attracted one is to a potential mate, the more likely one is to reproduce. Anything that enhances this desire should yield an evolutionary advantage. So it seems to me that a "fascination with shape" is a highly evolved trait that helps to make babies. Also, beauty is often associated with symmetry and specific proportions that indicate health, and fertility. In fact, universally, men indicate that women having a waist to hip ratio of 0.7 are most attractive in form. The preferred measurements [size] can vary, but the ratios stay the same. Allegedly, this waist to hip ratio - 0.7 - also yields the greatest success rate for child birth.
8. Sep 12, 2004
### Staff: Mentor
I've noticed that different people have very different ideas of what is attractive.
If a man is highly intelligent and posseses a great sense of humor, that is SO attractive to me. It actually affects how I perceive them. I have always regretted the few times that someone talked me into dating a guy because he was "good looking". Physical looks are at the bottom of my list in order of importance. Other women are always drooling over men that I consider unattractive. I just don't get it. It does seem that beauty is in the eye of the beholder.
edit - I just read Ivan's post. I agree.
Last edited: Sep 12, 2004
9. Sep 12, 2004
### Ivan Seeking
Staff Emeritus
I should add that physicists prefer a waist to hip ratio of $$\frac{1}{\sqrt{2}}$$
10. Sep 12, 2004
### motai
I agree. There are lots of girls that I know who are pretty but leave much to be desired personality-wise. They tend to be shallow in selecting boyfriends and the relationships that I have seen develop seem to be lacking in depth, with the only emphasis being on physical attraction and nothing else.
Being with someone I can relate with is far more important to me than physical attractiveness. But it seems that the majority of people choose the latter as their primary way of selecting boyfriends/girlfriends.
11. Sep 12, 2004
### Ivan Seeking
Staff Emeritus
Hormones
During the time of their cycles when most fertile, young single women tend to show more skin when going out on the town. Unless they have read about this, they don't realize why, they just do.
motai, I'm afraid your complaint should be sent to Darwin. The same for all of those "men are pigs" arguments, ladies.
You animals.
12. Sep 12, 2004
### mee
We had forward-facing eyes long before we were predators, if, as I believe, Australopithecus is one of our ancestors.
13. Sep 12, 2004
### jimmy p
The main problem is that you go for the pretty girls in the hope that they have a personality. At least, I would if I had the guts...
14. Sep 12, 2004
### humanino
Still hope to find another available one... Stupid kid I was :grumpy:
15. Sep 12, 2004
Australopithecus wasn't an omnivorous monster like us?
16. Sep 12, 2004
### The_Professional
What if he's smart, intelligent, funny and looks like Danny De Vito?
17. Sep 12, 2004
### Staff: Mentor
I think Danny DeVito is cute.
18. Sep 12, 2004
### amwbonfire
I too prefer personality to physical appearance. I find that if someone is beautiful inside, and I enjoy being with them, they'll be just as beautiful on the outside too.
I don't mean to take anything away from that saying, but it's just a nice way of saying some people have bad taste.
Only joking! :tongue2:
19. Sep 13, 2004
### Gokul43201
Staff Emeritus
I thought it was about 0.618, or $$\frac {\sqrt{5} - 1} {2}$$.
20. Sep 13, 2004
Physical appearance means nothing if they have a thoroughly objectionable personality. However, I'd be quite happy to find a girlfriend who is nice, intelligent, has great integrity, and is also incredibly beautiful. ;)
|
# In linear regression, what does $\beta_1 = 0$ really mean?
If granted omniscience and we know that $\beta_1$ in a multiple linear regression model is truly 0, what does that mean in words (and math notation)?
The model is: $Y = \beta_0 + \beta_1X_1 + \beta_2X_2 + \epsilon$
Does $\beta_1 = 0$ mean:
1. Empirically: "If we make infinite, perfect measurements of our data, $\hat\beta_1$ approaches 0 at the limit?" [Math: ??]
2. Formally: "In the true, finite population of our data, $X_1$ has 0 association with $Y$ and their vectors of values are perfectly linearly independent, uncorrelated, and orthogonal?" [Math: ??]
What does $\beta_1 = 0$ mean, precisely?
• Many of your separate questions come across as minor variations on each other. Also, if you ask short questions, much of the answer may turn out to be of the form: if you mean (a), then (A); if (b), then (B); and so forth. This strikes me as an interesting question, but answers could range back and forth across large swathes of statistical thinking, so despite the specificity here I am almost tempted to vote to close as too broad. I won't do so, but that is one signal to you, for what it's worth. – Nick Cox Mar 13 '15 at 13:49
• @NickCox: I do thank you for your comments and signals. My apologies if my (evolutionarily pruning) questions are bad form as I try to precisely pin down a conceptual difficulty in canonical form. Still learning CV community ethics, assuming you signal against minor variations. I have tried to make question better but as I do not know the menu of (a), (b),.. I may have missed mark. In any event, thanks for guidance! – jtd Mar 13 '15 at 14:16
• Can you describe the setting in a more precise manner? Given that you insist there is no error term in the model, it's not a "standard" regression model. What is $x_1,x_2,\beta_1,\beta_2$? Normally the $\beta_i$ are considered constants, so of course it cannot be that $\beta_1=0$ and $\beta_1\neq 0$. You must have something else in mind. – ekvall Mar 13 '15 at 18:27
• @Student001: Sorry, my thinking was: "true value y = true intercept + 0*$x_1$ + true $\beta_2x_2$ + true error of zero." E.g., 4 = -1 + (0*3) + (0.5*10) + 0. It is not parsimonious but I think still valid. The and was because I incorrectly thought NickCox suggested I needed to limit question to 1 paradigm, 1 method and I worried that truth value of $\beta_1$ might change--even though that seems nonsense, I was out of my depth trying to be precise. That is now removed. – jtd Mar 13 '15 at 19:15
Part of the problem here is that your model is conceptually confused, as @CagdasOzgenc has correctly pointed out.(Edit: This has now been fixed.) I think I can address two of your specific questions.
1. If $\beta_1 = 0$ (note the absence of the 'hat'), and standard assumptions hold, then empirically, as we make ever more measurements of our data, $\hat\beta_1$ will approach $0$ at the limit.
In mathematical notation:
$$\lim_{N\to\infty} \hat\beta_1 = 0$$
2. If $\beta_1 = 0$, and $X_1$ and $X_2$ are uncorrelated, then formally, in the population of our data, $x_1$ is uncorrelated with $y$, linearly independent of $y$, but not necessarily independent of $y$*. (Edit: as @CagdasOzgenc points out, if we take the model provided as literally the DGP, independence holds as well.)
In mathematical notation:
$${\rm Cor}(x_1, y) = 0$$
* From the Wikipedia page on correlation and dependence, consider this figure:
The patterns in the bottom row all have correlation $0$, but show various patterns of dependence.
I don't understand the part after the bolded text. (Edit: The referenced portion of the question has now been removed.)
• This is fantastic! Should we say, "$x_1$ is uncorrelated with $y$, linearly independent of $y$, but not necessarily independent of $y$.*" (psych.umn.edu/faculty/waller/classes/FA2010/Readings/…) – jtd Mar 13 '15 at 18:58
• @jtd, yes, I think so. – gung - Reinstate Monica Mar 13 '15 at 19:04
• Actually if $\beta_1$ = 0 and $y$ is generated by the model in question, it would also imply independence, as $x_1$ didn't take part. But this of course implies a directionality in data generation. If no directionality is assumed the uncorrelatedness can also not be concluded as all the dynamics may be captured by $\beta_2$. I need to ponder. – Cagdas Ozgenc Mar 13 '15 at 19:33
• @CagdasOzgenc, good point. If we take the given model literally as the DGP, there is independence as well. I guess I was thinking of it as illustrative. I'll edit my answer. Let me know if you think it needs more. – gung - Reinstate Monica Mar 13 '15 at 19:36
• @gung: If $\beta_1 = 0$ means $Cor(x_1,y) = 0$, does this mean simple $Cor(x_1,y)$ can help us determine if true $\beta_1 = 0$, but once we think $\beta_1 \ne 0$, then we must use linear regression to get more information? So linear regression does two steps $Cor()$ plus something. Also, should we actually say that $\beta_1 = 0$ means $Cor(x_1,y)|x_2???$ or is it a straight bivariate comparison? – jtd Mar 13 '15 at 19:44
You are confusing the real data generation process with a model trying to make an approximation of this process (this is quite normal, I also banged my head to the wall several times regarding this matter).
First of all
$y = \beta_0 + \beta_1x_1 + \beta_2x_2$
is a deterministic model. Usually in regression the model is probabilistic; hence, the correct representation of the model is
$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + \epsilon$, where $\epsilon$ ~ some distribution (usually normal)
Now suppose that the above model exactly captures the underlying real data generation. In that case $\beta_1$ = 0 means it is always 0 in reality.
However when you propose the above model you need to estimate the parameters from sample data.
In that case
$\hat{\beta_1}$ (the estimate of $\beta_1$) can be anything. The probability that it will be exactly 0 is 0. If in reality it is 0, and if you did proper sampling and used a consistent estimation procedure, then as the sample size increases it will approach 0.
• Thanks, perhaps I'm confusing things but I corrected $\beta_1 \to \hat\beta_1$ in #1. I'm talking about the real data generation process, not a model: we are omniscient, there are no errors, no samples, and the Truth is $\beta_1 = 0$, but what does $\beta_1 = 0$ mean in precise, unique language (and math) in this linear regression context? Your last sentence seems to agree with my first bullet point, "If in reality it is 0,..", is this your answer? – jtd Mar 13 '15 at 15:29
• "You are confusing the real data generation process with a model trying to make an approximation of this process." This problem is endemic in undergrad econometrics. Then the problem is magnified by sloppy notation ("is $x$ a random variable or the realization of a random variable?") and over-emphasizing the "$Y$ as a function of $X$, plus some error" interpretation of regression over the "$E(Y|X)$" interpretation. – shadowtalker Mar 13 '15 at 22:33
Gung's answer is excellent, but I want to add an interpretation that I think goes underappreciated.
You wrote out the model as $$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon$$
You didn't specify this, but presumably $\varepsilon$ is an error term with $\operatorname{\mathbb{E}}\left(\varepsilon\,|\,X\right) = 0$ and $\varepsilon \perp X$. Here you are postulating a particular data-generating process: given $X_1$ and $X_2$, $Y$ is a deterministic function of $X_1$ and $X_2$, plus a random error term. This is what is usually taught in undergrad econometrics class.
Now just for the heck of it, define a function $\mu(x_1, x_2) = \beta_0 + \beta_1 x_1 + \beta_2 x_2$ so that the model can be written as $$Y = \mu(X_1, X_2) + \varepsilon$$ or, perhaps more precisely, $$Y\,|\,(X_1 = x_1, X_2 = x_2) = \mu(x_1, x_2) + \varepsilon$$
Remember that we assumed $\operatorname{\mathbb{E}}\left(\varepsilon\,|\,X_1 = x_1, X_2 = x_2\right) = 0$. Taking the hint, let's compute \begin{align} \operatorname{\mathbb{E}}(Y\,|\,X_1 = x_1, X_2 = x_2) &= \operatorname{\mathbb{E}}(\mu(x_1, x_2)) &+& \operatorname{\mathbb{E}}\left(\varepsilon\,|\,X_1 = x_1, X_2 = x_2\right) \\ \operatorname{\mathbb{E}}(Y\,|\,X_1 = x_1, X_2 = x_2) &= \mu(x_1, x_2) &+& 0 \\ \operatorname{\mathbb{E}}(Y\,|\,X_1 = x_1, X_2 = x_2) &= \beta_0 + \beta_1 x_1 + \beta_2 x_2 \end{align}
This is powerful stuff: "regression line" is really the expected $Y$ as a function of $X$.
You asked what it means if $\beta_1 = 0$. In this interpretation, it means that the expectation of $Y$ does not depend on $X_1$. That is, \begin{align} \operatorname{\mathbb{E}}(Y\,|\,X_1 = x_1, X_2 = x_2) &= \beta_0 &+ 0 \cdot x_1 &+ \beta_2 x_2 \\ \operatorname{\mathbb{E}}(Y\,|\,X_1 = x_1, X_2 = x_2) &= \beta_0 &&+ \beta_2 x_2 \end{align}
In other words, $\beta_1 = 0$ means $X_1$ does not belong in the model. The slope of the regression line (i.e. the "conditional expectation line") is 0 with respect to $X_1$. Compare: $z = 2x + 2y$ and $z = 0x + 2y$
Now remember our second assumption that $\varepsilon \perp X$. From this we can conclude that in fact $Y \perp X_1$. We have already established that changing the value of $X_1$ has no effect on $\mu$, the average $Y$ given $X_1$ and $X_2$, but if $\varepsilon \perp X_1$ as well, then there's just nowhere else for $X_1$ to enter the data generating process. It doesn't affect the average $Y$ and it doesn't effect the variation of $Y$ around its average, so it just doesn't affect $Y$ at all.
Empirically, this means that any value we estimate for $\beta_1$, which we usually denote $\hat \beta_1$, should be close to zero. If we use OLS to fit the model, we know that $\operatorname{\mathbb{E}}(\hat \beta_1) = \beta_1 = 0$ and $\hat \beta_1 \xrightarrow[]{n \to \infty} \beta_1$. So the expectation of $\hat \beta_1$ will be zero, and $\hat \beta_1$ will approach zero as the sample grows.
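A small simulation (my own illustration, not from the thread) of that last claim: $\hat \beta_1$ hovering near the true $\beta_1 = 0$ and tightening as $n$ grows.

import numpy as np

rng = np.random.default_rng(42)
for n in (100, 10_000, 1_000_000):
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y = 1.0 + 0.0 * x1 + 2.0 * x2 + rng.normal(size=n)  # true beta_1 is exactly 0
    X = np.column_stack([np.ones(n), x1, x2])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(n, beta_hat[1])  # the OLS estimate of beta_1 shrinks toward 0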
• +1. Note that you have one unfinished sentence ("In a 3D space..."). – amoeba says Reinstate Monica Mar 13 '15 at 22:39
• @amoeba you're too quick for me. – shadowtalker Mar 13 '15 at 22:39
• Unfortunately equality notation doesn't involve any direction. It is not possible to conclude that first we acquire the $Xs$ and then add some noise on top and generate $Y$. You may simply move $X_1$ to the left and $Y$ to the right. A simple example is $Weight = \beta Height + \epsilon$. If we sample "people" from a population of people, DGP view dissolves. For this reason at this point we cannot conclude more than the fact that every equation will be just a projection of reality. $Y = \beta_1 X_1 + \epsilon_1$ and $Y = \beta_2 X_2 + \epsilon_2$ both can be valid. – Cagdas Ozgenc Mar 14 '15 at 8:37
• @CagdasOzgenc very good point. that's why I added all that ridiculous conditioning notation and the swapping of lower and upper case, but I could have been more explicit. – shadowtalker Mar 14 '15 at 10:42
• And the DGP still can hold in that case. Height is determined somehow, then weight is drawn randomly afterwards. Even if it doesn't make biological sense (although it happens to be plausible here), it's an intuitive way to condition one variable on another. All models are false, etc – shadowtalker Mar 14 '15 at 10:53
|
# Odd one Out
Problem:-
Alice and Bob are getting bored so they decided to play a game.
Alice has n cards having the first n odd numbers written on them. She removes one of the cards at random and hands the remaining n-1 cards to Bob. Help Bob find the value of the card Alice has removed.
Input
The first line contains n the numbers of cards Alice has.
The second line contains n-1 space-separated integers representing the values of cards that Bob got.
Output
Print the value of card Alice removed.
Constraints
$1\le n\le 4000000$
Sample Input
5
3 1 9 5
Sample Output
7
Time Limit: 1
Memory Limit: 512
Source Limit:
Explanation
The first 5 odd numbers are 1, 3, 5, 7 and 9 out of which 7 is missing.
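The solutions below rest on a single identity: the first $n$ odd numbers sum to $n^2$,

$$1 + 3 + 5 + \dots + (2n-1) = \sum_{k=1}^{n}(2k-1) = n^2,$$

so the missing card is simply $n^2$ minus the sum of the $n-1$ cards Bob received.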
Code (c++):-
#include <iostream>
using namespace std;

int main()
{
    // these three lines are only there to speed up I/O
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    cout.tie(0);

    long long n, sum = 0, a;
    cin >> n;
    for (long long i = 0; i < n - 1; i++)
    {
        cin >> a;
        sum = sum + a;
    }
    // the first n odd numbers sum to n*n, so the missing card is n*n - sum
    cout << n * n - sum;
    return 0;
}
Code (c):-
#include <stdio.h>

int main()
{
    /* use long long: n can be 4,000,000, so n*n (~1.6e13) overflows a 32-bit long */
    long long n, d, sum = 0;
    scanf("%lld", &n);
    long long tot = n * n;  /* the first n odd numbers sum to n*n */
    for (long long i = 0; i < n - 1; i++)
    {
        scanf("%lld", &d);
        sum += d;
    }
    printf("%lld\n", tot - sum);
    return 0;
}
|
# Events
## When reading states is not enough
At the beginning of the section about interaction we mentioned that there are two basic ways for a program to get information about user actions. The first way is to read the state of the mouse and the keyboard, and we are familiar with that way by now.
Reading the states of mouse and keyboard is easy and sufficient for many applications. However, in some situations it is not the most convenient way of doing things. For example, if we wanted to know when the user clicks the mouse:
• reading the state of the mouse too frequently may produce multiple consecutive readings indicating that a mouse button is down, and we cannot tell whether it is all the same click or several clicks;
• reading the state of the mouse too infrequently may mean that the user presses and releases a button after one reading and before the next one; in this case, the program receives no information about that click.
Let us look at the following example.
Example - broken switch:
The following program draws an image of a wiring diagram for each frame, and then over it images of a switch and a bulb. The idea is to “turn the light on and off” by clicking on the switch.
When solving the task by reading the state of the mouse, the shortcomings described above can produce various unwanted behaviors, such as not reacting to a click (reading the state too infrequently) or flickering of the light (reading the state too frequently). Even if your clicking speed is just right, so that you avoid these effects and can turn the light on and off normally, someone who clicks faster or slower could still run into the problem.
Try the program out by clicking at different speeds.
As we mentioned in the introduction to this chapter, we can also track user actions in another way, which is to use system events. The events we deal with here can be understood as changes in the state of the mouse or keyboard (though there are other events, such as those generated by the system clock). For example, when a key on the keyboard or a mouse button goes down, the computer’s operating system receives a signal from the input device and registers it as an event. The same happens at the moment of releasing the keys (buttons), changing the position of the mouse, etc.
All events are logged and remembered, so we cannot miss a user's action, as can happen when we only read the state.
The PyGame library allows us to get one object for each event with information about that event, to examine what sort of event it is, and to programmatically respond to the event as needed.
## Using events in programs
In programs that use events, we will write a special function handle_event(event) (you can give it a different name). This function gets a PyGame object event as an argument, which contains all the necessary event information. We add the name of our event processing function as the third argument in the petljapg.frame_loop function call. This enables our handle_event function to be called for each event that occurs while the program is running.
Now let’s look at how exactly we handle the event.
In the handle_event function, we check if this event is of the type “a mouse button going down”. We do this by comparing the event type, stored in the event.type field, with the PyGame constant pg.MOUSEBUTTONDOWN, which has the described meaning.
If the event is of the type we are interested in (a mouse button going down, that is, the start of a click), the command mouse_point = event.pos places the coordinates of the point where the mouse was at the time the event occurred into the variable mouse_point, because we want to know what the user clicked on.
The following commands check if the user clicked on the switch, and if so, change the value of the logical variable switch_on, which shows the state of the switch.
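A minimal sketch of such a handler (my illustration based on the description above; switch_rect, switch_on, and the rectangle's position are assumptions, and in the lesson the function is registered as the third argument of petljapg.frame_loop):

import pygame as pg

switch_on = False                          # state of the switch
switch_rect = pg.Rect(100, 100, 60, 60)    # assumed screen area of the switch image

def handle_event(event):
    global switch_on
    if event.type == pg.MOUSEBUTTONDOWN:          # a mouse button went down: start of a click
        mouse_point = event.pos                   # where the mouse was when the event occurred
        if switch_rect.collidepoint(mouse_point): # did the click land on the switch?
            switch_on = not switch_on             # toggle the light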
Example - switch:
This program does the same thing as the previous one, but uses the mouse down event so there are no unwanted effects.
|
## Math Notation Help

This glossary will help you build complex mathematical equations using the TeX markup language. Surround an expression with @@ or $$ before and after it to display the desired result. Browse the glossary using this index: Special | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | ALL

### S

#### s.u.m
$$\sum_{n+2}^x$$ is $\sum_{n+2}^x$ Keyword(s): sum

#### sigma (lower case greek letter)
$$\sigma$$ gives $\sigma$

#### Sigma (upper case greek letter)
$$\Sigma$$ gives $\Sigma$

#### smiley
$$~\unitlength{.6}~\picture(100){~(50,50){\circle(99)}~(20,55;50,0;2){\hat\bullet}~(50,40){\bullet}~(50,35){\circle(50,25;34)}~(50,35){\circle(50,45;34)}}$$ draws the smiley picture. Keyword(s): smiley

#### square bracket
• Syntax: \left[...\right]
• Ex.: $$\left[a,b\right]$$ gives $\left[a,b\right]$

#### square root
@@\sqrt{x}@@ is $\sqrt{x}$ Keyword(s): sqr rt

#### subscript underscore
$$x_2$$ is $x_2$ Keyword(s): subscript_

#### sum (summation)
• General syntax for symbols with lower and upper limits: \symbolname_{lowerexpression}^{upperexpression}
• These lower and upper expressions can be placed in two ways: centered below and above the symbol, or in a subscript/superscript manner. In the first case the symbol name is preceded by the word "big"; in the second there is no prefix.
• Syntax for the summation symbol: $$\bigsum_{i=k}^{n}$$ gives the centered form and $$\sum_{i=k}^{n}$$ gives $\sum_{i=k}^{n}$.
• Use font size commands for a nicer picture: $$\LARGE\bigsum_{\small{i=1}}^{\small{n}}$$ and $$\large\sum_{\small{i=1}}^{\small{n}}$$. Keyword(s): big sum

#### superscript
$$x^2$$ or $$x^3$$ is $x^2$ or $x^3$
Keyword(s): superscript^
|
This page explains how to name some common complex metal ions, alongside several unrelated snippets on other senses of the word "complex"; the recoverable threads are grouped below.

**Naming complex metal ions.** Coordination complexes consist of a ligand and a metal center cation, and the overall charge can be positive, negative, or neutral; coordination compounds include such substances as vitamin B-12, hemoglobin, and chlorophyll. Ligands are electron donors (Lewis bases), and the metals are Lewis acids since they accept electrons. To name a complex ion, first name the ligands (anionic ligands have names ending in 'o', as in [Fe(CN)6]3-), then the central metal. In a complex cation the metal keeps its element name (cobalt, platinum); in a complex anion the name takes the "-ate" ending (cobaltate, platinate). Example: what is the name of [Cr(OH)4]-? Immediately we know this complex is an anion, so the metal chromium takes the "-ate" ending, yielding "chromate". Chloride ions can act as counterions or, being electron donors, as ligands, as in the tetrachlorocobaltate anion [CoCl4]2-, where cobalt is in oxidation state +2 and binds four chlorides.

**Simple and complex sentence parts.** If there are no descriptive words that explain more about a noun, there is only a simple subject or object; whatever is used to describe the noun (adjectives, prepositional phrases, relative clauses) becomes part of a complex subject or object. A complex sentence contains an independent clause and at least one dependent clause; dependent clauses cannot stand alone as complete sentences. A compound-complex sentence contains two or more independent clauses and at least one dependent clause, and helps express longer, more complicated thoughts. Examples: "Although he was wealthy, he was still unhappy." "She returned the computer after she noticed it was damaged." "The store sells snacks that are imported from all over the world." "My closet is full of dresses that are only suitable for the winter." "The sponsor of the event gave out toys shaped like their logos." Relatedly, a complex question is a fallacy in which the answer to a given question presupposes a prior answer to a prior question; Ralph Keyes traced the classic example back to a 1914 book of legal humor, and it "has become the standard allusion to any question that can't be answered without self-incrimination" (I Love It When You Talk Retro, 2009).

**XSD complex types.** A complex type is a container for other element definitions, and a complex element is an XML element that can contain other elements and/or attributes. There are four kinds of complex type elements: empty (attributes only), elements-only, text-only (attributes and text), and mixed (elements, attributes, and text). We can create a complex element in two ways: define a named complex type and then create an element that references it via the type attribute, or define the complex type inline; if you use the first method, only elements declared with that type can use it. On the complex type, name specifies a name; abstract specifies that an element cannot use this complex type directly but must use a type derived from it (default false); mixed specifies whether character data may appear between the child elements. Indicators such as <sequence> control how child elements are to be organized in an XML document, for example requiring that the child elements "firstname" and "lastname" appear in the declared order; the <any> and <anyAttribute> elements are used for elements and attributes not defined by the schema. You will learn more about indicators in the XSD Indicators chapter.

**Complex numbers.** A complex number is a number of the form a + bi, where a and b are real numbers and i satisfies i² = −1; for example, 2 + 3i. The conjugate of z = x + iy is denoted z̄ = x − iy. The Mandelbrot set is based on complex numbers: it is a plot of what happens when the simple expression z² + c (both complex numbers) is fed back into z time and time again. In C++, where operators can be overloaded for user-defined types like objects and structures, the complex class template describes an object that stores two objects of type Type, one representing the real part of a complex number and one representing the imaginary part. In MATLAB, complex returns a complex array, as a scalar, vector, matrix, or multidimensional array the same size as the input arguments.

**Other senses.** Complex systems have input from many sources, are highly changeable, and are difficult to model and predict: the universe, the weather, and culture and society are examples, and the information required to fully document one (such as the syntax, semantics, and pronunciation of a natural language like French) is prohibitively large. Complex machines combine simple machines, as in cars, bicycles, can openers, wheelbarrows, scissors, and staplers (a stapler is a lever plus a wedge). Complex carbohydrates are sugar molecules strung together in long, complex molecule chains; peas, beans, and whole grains are examples. An apartment complex is a group of buildings that contain apartments and are managed together. In psychology, a complex (the Oedipus, Electra, inferiority, Napoleon, Jonah, Madonna–whore, messiah, and Peter Pan complexes, among many others) shapes a person's perception and decision-making in how they relate to others, their emotional experiences, and their sense of self. In business naming, a descriptive name like "Dave's Carpentry" states directly what a business offers, while large firms often invent a short new word that is easy to pronounce and remember for branding purposes (a slightly clever example is PinchofYum, where you can still guess what the blog is about before seeing the homepage); note that using a trademark such as "WordPress" in your own domain name is a violation. When handling people's names in software, lists are not always sorted by family name: Thai and Icelandic people expect lists to be sorted by given name, and sort orders can also differ across the Spanish-speaking world. The ComplexHeatmap R package provides a highly flexible way to arrange multiple heatmaps and supports various annotation graphics. In JSON, objects store data in the form of key/value pairs where the keys are strings, and a document can combine simple fields and complex fields.
|
Differential Vector Elements
Prerequisites
Students should be familiar with basic integration and with the $d\vec{r}$ vector.
In-class Content
• QUIZ (10 min)
• Small group activity - Boysenberry Patch (20 min)
Homework for Symmetries
1. (Helix)
A helix with 17 turns has height $H$ and radius $R$. Charge is distributed on the helix so that the charge density increases like the square of the distance up the helix. At the bottom of the helix the linear charge density is $0 {\hbox{C}\over\hbox{m}}$. At the top of the helix, the linear charge density is $13 {\hbox{C}\over\hbox{m}}$. What is the total charge on the helix?
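One way to set the integral up (my sketch, not part of the course page): read "distance up the helix" as arc length $s$ along the wire; for a uniform helix $s$ is proportional to height, so reading it as height gives the same density at each point. With $\lambda(s) = \lambda_{\rm top}(s/L)^2$, $\lambda_{\rm top} = 13 {\hbox{C}\over\hbox{m}}$, and total wire length $L = \sqrt{H^2 + (17\cdot 2\pi R)^2}$,
$$Q = \int_0^L \lambda(s)\, ds = \frac{\lambda_{\rm top}}{L^2}\int_0^L s^2\, ds = \frac{\lambda_{\rm top} L}{3}.$$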
|
# Binary/decimal/hex converter using Tkinter
I debated long and hard before posting this question, and I did a lot of experimenting. I just can't seem to work out an 'elegant', concise way to get done what I want done in the manner I want it done. It was very hard to research because I didn't know exactly what to search for.
Again, I am writing a converter for binary, decimal and hex using Tkinter and not using any of Python's built-in math functions.
In the code I have one class with many methods. The methods that convert from binary to decimal and hex (bin_to_dec and bin_to_hex, respectively) work correctly. I am now working on the 'from decimal' conversions, and the dec_to_bin method also works correctly. My issue is with the dec_to_hex method. I want it to use the dec_to_bin method to convert the string to binary first and then use the bin_to_hex method for the final conversion. In doing this it has to skip the lines of code that tell those two methods to display their results, and instead store the results and pass them along to the dec_to_hex method.
I'm going to post the entire code except for the method that creates the Tkinter widgets:
def base_check(self):
    """ Disable Checkbox that's connected with chosen Radiobutton. """
    if ...:   # placeholder condition: is cb the checkbox tied to the chosen radiobutton?
        cb.configure(state = DISABLED)
    else:
        cb.configure(state = NORMAL)
def conv_segue(self):
    """ Decides and directs towards proper conversion method. """
    base = self.base.get()
    if base == 'bin':
        bits = self.input_str.get()
        # test string validity
        bit_list = list(bits)
        ill_bits = ['2', '3', '4', '5', '6', '7', '8', '9']
        for bit in bit_list:
            if bit in ill_bits:
                self.output_disp.delete(0.0, END)
                self.output_disp.insert(0.0, "That bit string is invalid.")
                break
        else:
            self.from_binary(self.dec_bttn, self.hex_bttn)
        ##
        # learned here that I had to break once the match was found (if found) and that a 'for'
        # loop can use an else block too
        ##
    elif base == 'dec':
        self.from_dec(self.bin_bttn, self.hex_bttn)
    elif base == 'hex':
        self.from_hex(self.bin_bttn, self.dec_bttn)
def from_binary(self, dec_bttn, hex_bttn):
    """ Finds what base to convert to (Decimal or Hex) from binary. """
    if self.dec_bttn.get():
        self.bin_to_dec()
    if self.hex_bttn.get():
        self.bin_to_hex()
    #if self.dec_bttn.get() and self.hex_bttn.get():
        #(self.bin_to_dec, self.bin_to_hex)

def from_dec(self, bin_bttn, hex_bttn):
    """ Finds what base to convert to (Binary or Hex) from decimal. """
    if self.bin_bttn.get():
        self.dec_to_bin()
    if self.hex_bttn.get():
        self.dec_to_hex()
def dec_to_bin(self):
    """ Convert from decimal to binary. """
    # get input string and convert to an integer
    digits = self.input_str.get()
    digits = int(digits)
    bit_string = ""
    # do the conversion
    while digits:
        bit, ans = digits%2, digits//2
        bit = str(bit)
        bit_string += bit
        digits = ans
    total = bit_string[::-1]
    self.total = total
    # print output
    self.print_result(total)

def dec_to_hex(self):
    bit_str = self.dec_to_bin().self.total
    self.bit_str = bit_str
    print(self.bit_str)
def bin_to_dec(self):
    """ Convert from binary to decimal. """
    # get input string
    bits = self.input_str.get()
    # set exponent
    exp = len(self.input_str.get()) - 1
    tot = 0
    # do conversion
    while exp >= 1:
        for i in bits[:-1]:
            if i == '1':
                tot += 2**exp
            elif i == '0':
                tot = tot
            exp -= 1
    if bits[-1] == '1':
        tot += 1
    total = tot
    # print output
    self.print_result(total)
def bin_to_hex(self):
    """ Convert from binary to hex. """
    # get input string
    bits = self.input_str.get()
    # define hex digits
    hex_digits = {
        10: 'a', 11: 'b',
        12: 'c', 13: 'd',
        14: 'e', 15: 'f'
    }
    # add number of necessary 0's so bit string is multiple of 4
    string_length = len(bits)
    number_stray_bits = string_length % 4
    # test if there are any 'stray bits'
    if number_stray_bits > 0:
        number_zeros = 4 - number_stray_bits
        bits = '0'*number_zeros + bits
        string_length = len(bits)
    # index slicing positions
    low_end = 0
    high_end = 4
    total = ""
    # slice bit string into half byte segments
    while high_end <= string_length:
        exp = 3
        half_byte = bits[low_end:high_end]
        # do conversion
        tot = 0
        while exp >= 1:
            for i in half_byte[:-1]:
                if i == '1':
                    tot += 2**exp
                elif i == '0':
                    tot = tot
                exp -= 1
        if half_byte[-1] == '1':
            tot += 1
        # check if tot needs conversion to hex digits
        for i in hex_digits.keys():
            if i == tot:
                tot = hex_digits[i]
            else:
                tot = tot
        # store and concatenate tot for each while iteration
        tot = str(tot)
        total += tot
        # move right to next half byte string
        low_end += 4
        high_end += 4
    # print the output
    self.print_result(total)

def print_result(self, total):
    """ display the result of conversion. """
    self.output_disp.delete(0.0, END)
    self.output_disp.insert(0.0, total)
I've tried to make this as easy as possible to read for anyone who attempts to help me. The dec_to_hex method is a bit of a mess right now; I have been messing with it. With the code posted I think it's a bit clearer what exactly I'm trying to do. It's very simple code, as I haven't a lot of Python experience.
It's all in one class, and I'm trying to do it without copying the code from the two methods I want to use for the dec_to_hex method (dec_to_bin & bin_to_hex).
I thought about breaking it into different classes and using inheritance, but I can't see where that would help me at all.
My final decision to post the question now, while I continue to mess with it, came because I'm sure that once it's figured out I will have learned something very important, and probably a concept or two that I don't completely grasp will become clearer.
I hope someone will be willing to give me a little direction in this matter. I get an adrenalin rush when I see progress being made. It's also very well commented and docstringed, so it shouldn't be a problem for anyone.
I also thought that this would be a great situation to use some equivalent of the XHTML anchors, but then I realized they are about the same as the old goto command from my 6th grade BASIC days and figured Python was too 'clean' to use that.
• Kill dead code and variables; from_binary arguments dec_bttn and hex_bttn are useless and should be removed.
– Chris Morgan
Feb 7, 2012 at 9:57
Use more python, e.g. instead of:
for i in hex_digits.keys():
    if i == tot:
        tot = hex_digits[i]
    else:
        tot = tot
Remove extraneous keys() and it becomes:
for i in hex_digits:
    if i == tot:
        tot = hex_digits[i]
    else:
        tot = tot
Then remove unnecessary else and it is:
for i in hex_digits:
    if i == tot:
        tot = hex_digits[i]
And finally remove the loop:
tot = hex_digits.get(tot, tot)
There I saved you a loop, a branch and 4 out of 5 lines of code.
A few iterations like this over the entire module and you might like your code after all!
• Thanks, I'll go through it and clean it up a bit. My problem is I don't prep with any pseudo-code, I just kinda wing it as I go. Terrible habit, I know.
– Icsilk
Feb 7, 2012 at 10:00
• @Icsilk, no, pseudo code is a terrible habit. I don't know of any serious coders who actually use it. Feb 7, 2012 at 15:39
• I write pseudocode from time to time, and I'd consider myself a serious (albeit not professional) coder... but I certainly wouldn't say not using pseudocode is a terrible habit. Feb 7, 2012 at 21:51
def base_check(self):
    """ Disable Checkbox that's connected with chosen Radiobutton. """
    if ...:   # placeholder condition: is cb the checkbox tied to the chosen radiobutton?
        cb.configure(state = DISABLED)
    else:
        cb.configure(state = NORMAL)

def conv_segue(self):
    """ Decides and directs towards proper conversion method. """
    base = self.base.get()
    if base == 'bin':
        bits = self.input_str.get()
        # test string validity
        bit_list = list(bits)
It's not neccessary to listify the bits. You can iterate over a string just like a list.
        ill_bits = ['2', '3', '4', '5', '6', '7', '8', '9']
I'd make this a string rather than a list and iterate over that.
        for bit in bit_list:
            if bit in ill_bits:
Instead, I'd use if any(bit in ill_bits for bit in bit_list). Also, why not check for bits that are not '1' or '0', rather than explicitly listing the other options? What if the user inputs a letter?
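A sketch of that whitelist check (my addition, reusing the poster's own output calls):

if any(bit not in '01' for bit in bits):
    self.output_disp.delete(0.0, END)
    self.output_disp.insert(0.0, "That bit string is invalid.")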
                self.output_disp.delete(0.0, END)
                self.output_disp.insert(0.0, "That bit string is invalid.")
                break
        else:
            self.from_binary(self.dec_bttn, self.hex_bttn)
        ##
        # learned here that I had to break once the match was found (if found) and that a 'for'
        # loop can use an else block too
        ##
    elif base == 'dec':
        self.from_dec(self.bin_bttn, self.hex_bttn)
    elif base == 'hex':
        self.from_hex(self.bin_bttn, self.dec_bttn)
I don't see this function (from_hex) defined anywhere.
Why do you perform sanity checks for binary, and not the other bases?
def from_binary(self, dec_bttn, hex_bttn):
    """ Finds what base to convert to (Decimal or Hex) from binary. """
Why are you passing dec_bttn and hex_bttn around if you just use self.dec_bttn anyway?
    if self.dec_bttn.get():
        self.bin_to_dec()
    if self.hex_bttn.get():
        self.bin_to_hex()
    #if self.dec_bttn.get() and self.hex_bttn.get():
        #(self.bin_to_dec, self.bin_to_hex)

def from_dec(self, bin_bttn, hex_bttn):
    """ Finds what base to convert to (Binary or Hex) from decimal. """
    if self.bin_bttn.get():
        self.dec_to_bin()
    if self.hex_bttn.get():
        self.dec_to_hex()

def dec_to_bin(self):
    """ Convert from decimal to binary. """
    # get input string and convert to an integer
    digits = self.input_str.get()
    digits = int(digits)
    bit_string = ""
    # do the conversion
    while digits:
        bit, ans = digits%2, digits//2
        bit = str(bit)
        bit_string += bit
        digits = ans
    total = bit_string[::-1]
Python has a function, bin that does this conversion to binary for you.
    self.total = total
    # print output
    self.print_result(total)

def dec_to_hex(self):
    bit_str = self.dec_to_bin().self.total
What?
    self.bit_str = bit_str
    print(self.bit_str)
I'm not seeing the hex.
def bin_to_dec(self):
    """ Convert from binary to decimal. """
    # get input string
    bits = self.input_str.get()
    # set exponent
    exp = len(self.input_str.get()) - 1
    tot = 0
    # do conversion
    while exp >= 1:
        for i in bits[:-1]:
            if i == '1':
                tot += 2**exp
            elif i == '0':
                tot = tot
            exp -= 1
    if bits[-1] == '1':
        tot += 1
    total = tot
    # print output
    self.print_result(total)
Use int(string_number, 2) to read in a binary number.
def bin_to_hex(self):
    """ Convert from binary to hex. """
    # get input string
    bits = self.input_str.get()
    # define hex digits
    hex_digits = {
        10: 'a', 11: 'b',
        12: 'c', 13: 'd',
        14: 'e', 15: 'f'
    }
    # add number of necessary 0's so bit string is multiple of 4
    string_length = len(bits)
    number_stray_bits = string_length % 4
    # test if there are any 'stray bits'
    if number_stray_bits > 0:
        number_zeros = 4 - number_stray_bits
        bits = '0'*number_zeros + bits
        string_length = len(bits)
Python has an rjust method on strings that'll pad strings to a desired length. I think you can simplify this code using that.
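For example (my sketch, not from the original post), the padding above could become:

padded_length = ((len(bits) + 3) // 4) * 4   # round the length up to a multiple of 4
bits = bits.rjust(padded_length, '0')
string_length = len(bits)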
    # index slicing positions
    low_end = 0
    high_end = 4
    total = ""
    # slice bit string into half byte segments
    while high_end <= string_length:
You should really use a for loop like for high_end in xrange(0, string_length, 4):

        exp = 3
        half_byte = bits[low_end:high_end]
        # do conversion
        tot = 0
Don't abbreviate variables. It saves you almost nothing and makes your code harder to read.
        while exp >= 1:
This doesn't really serve a purpose because exp is decremented by the inner loop.
            for i in half_byte[:-1]:
I'd use for exponent, letter in enumerate(half_byte[::-1]).
                if i == '1':
                    tot += 2**exp
                elif i == '0':
                    tot = tot
Completely pointless. Don't assign variables to themselves.
                exp -= 1
        if half_byte[-1] == '1':
            tot += 1
Why didn't you do this in the loop? 2**0 == 1.
        # check if tot needs conversion to hex digits
        for i in hex_digits.keys():
            if i == tot:
                tot = hex_digits[i]
            else:
                tot = tot
It's a dictionary; don't use a loop on it. Put all of the numbers in it, not just the non-digit ones. Then use tot = hex_digits[tot].
        # store and concatenate tot for each while iteration
        tot = str(tot)
        total += tot
I'd combine those two lines
        # move right to next half byte string
        low_end += 4
        high_end += 4
If you use a for loop like I suggested this should be unneccesary.
Actually python has a hex function which will convert a number to hex. It'll replace pretty much this entire function.
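For instance (my illustration, not code from the answer):

total = hex(int(bits, 2))[2:]   # binary string -> int -> hex digits without the '0x' prefix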
    # print the output
    self.print_result(total)

def print_result(self, total):
    """ display the result of conversion. """
    self.output_disp.delete(0.0, END)
    self.output_disp.insert(0.0, total)
This function doesn't really print. So I'd find a better name.
Here's how I'd approach it.
def convert_base(number_text, from_base, to_base):
    BASES = {
        'decimal': 10,
        'hex': 16,
        'binary': 2,
    }
    number = int(number_text, BASES[from_base])
    if to_base == 'decimal':
        return str(number)
    elif to_base == 'hex':
        return hex(number)[2:]
    elif to_base == 'binary':
        return bin(number)[2:]
    else:
        raise ValueError('Unknown base: ' + to_base)
In general conversions work best by converting to some neutral format, (in this case, a python integer), and then into your final format. That way you don't have to write conversion between every possible format. Instead, you just a conversion for each format into the neutral format and then out of the neutral format.
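A few example calls (my addition) showing the round-trips through the neutral integer format:

print(convert_base('1010', 'binary', 'hex'))     # a
print(convert_base('255', 'decimal', 'binary'))  # 11111111
print(convert_base('ff', 'hex', 'decimal'))      # 255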
You could pass a second parameter to dec_to_bin and dec_to_hex to let the function know whether you want to return a value or print it.

def dec_to_bin(self, ret=0):
    ...
    if ret == 0:
        self.print_result(total)
    else:
        return total

Then call the function like:

binstr = dec_to_bin(1)  # to assign the return value to binstr
• When python has True and False built in, using numbers to approximate them seems odd...
– Shish
Feb 7, 2012 at 12:11
I'd split out the x_to_y methods into a separate package, and rather than having them do all of input, processing, output, have them just do processing.
def hex_to_bin(hex_input):
    ... processing goes here ...
    return bin_output
And then in your GUI app, you have the input and output:
hex_input = self.input_box.get_value()
bin_output = hex_to_bin(hex_input)
self.output_box.set_value(bin_output)
|
## ABSTRACT
We consider the problem of maintaining an approximately maximum (fractional) matching and an approximately minimum vertex cover in a dynamic graph. Starting with the seminal paper by Onak and Rubinfeld [STOC 2010], this problem has received significant attention in recent years. There remains, however, a polynomial gap between the best known worst case update time and the best known amortised update time for this problem, even after allowing for randomisation. Specifically, Bernstein and Stein [ICALP 2015, SODA 2016] have the best known worst case update time. They present a deterministic data structure with approximation ratio $(3/2+\epsilon)$ and worst case update time $O(m^{1/4}/\epsilon^2)$, where $m$ is the number of edges in the graph. In the recent past, Gupta and Peng [FOCS 2013] gave a deterministic data structure with approximation ratio $(1+\epsilon)$ and worst case update time $O(\sqrt{m}/\epsilon^2)$. No known randomised data structure beats the worst case update times of these two results. In contrast, the paper by Onak and Rubinfeld [STOC 2010] gave a randomised data structure with approximation ratio $O(1)$ and amortised update time $O(\log^2 n)$, where $n$ is the number of nodes in the graph. This was later improved by Baswana, Gupta and Sen [FOCS 2011] and Solomon [FOCS 2016], leading to a randomised data structure with approximation ratio $2$ and amortised update time $O(1)$. We bridge the polynomial gap between the worst case and amortised update times for this problem, without using any randomisation. We present a deterministic data structure with approximation ratio $(2+\epsilon)$ and worst case update time $O(\log^3 n)$, for all sufficiently small constants $\epsilon$.
|
## Further Development of Corrected Coverage Method
• If all the user wants to do is to estimate the corrected coverage, then we can just simulate and average the binary covered column of these simulated credible sets.
• If we can fit a model, then we choose a GAM. We tried logistic regression, but it was not flexible enough, and we don't have enough data to fit a random forest (random forests aren't smooth and need lots of data at specific points on the x axis to average over and fit a line through, giving accurate estimates only in these dense x regions).
• So far, our package provides users with an accurate coverage probability. It would be good if we could extend the package so that the user is also provided with a new 'required threshold' value to use in the credset function to obtain a credible set with the required true coverage.
### How often is a GAM actually being fitted?
### corr.maxz0 method
colMeans(res1)
## good warning error
## 0.00000 4.10198 195.84208
### corr.muhat.ave method
colMeans(res2)
## good warning error
## 0.000000 3.264356 196.679703
• I tried to fit a GAM to the simulated credible sets for each SNP considered causal (5000 data points for each GAM). If the GAM could be fitted with no problem, my code returns 'good', and similarly for 'warning'/'error'. The figures above are the colMeans over the simulations: on average, an error is returned for about 196 of the 200 GAMs (one per SNP considered causal).
• I told my original simulations to just average the covered column if ANY of the 200 GAMs (one per SNP considered causal) returned an error (as these returned NaNs in the final vector and I couldn't average).
• This means that a GAM wasn’t actually being fit at all! I was just averaging the covered column.
• We didn’t identify this earlier as we were working on lower power scenarios.
Try fitting a GAM to the cumulative pps (rather than the size of the cred set), i.e. get 200 data points for every one in the original method.
BUT for just 100 simulations for each SNP causal, that is $$100\times 200\times 200=4e+06$$ data points to fit a GAM to, which mgcv::gam struggles with. I later explore variations for quicker GAM fitting.
### Fitting GAM to Whole Data
• For each simulated posterior probability system for each variant considered causal, a GAM is fitted for covered vs cpp (cumulative probabilities), where the variants have been sorted into descending order. This GAM is then used to predict the coverage probability at the claimed coverage.
• We could also consider a backwards approach, whereby the corrected target (claimed coverage on x axis) is read off for some target coverage value. For example, if a user wanted a credible set with 90% corrected coverage, then we could tell them that they need to use a ‘required’ threshold value of X. They can then use this value as the threshold parameter in the credsets function to obtain their new credible set with the desired corrected coverage.
• A problem with this is that the data from the fitted GAM often looks like this,
test <- readRDS("/Users/anna/Google Drive/PhD/feb/credsets/check_errors/test.RDS")
test
## cpp y
## 1 0.00 0.000000e+00
## 2 0.01 0.000000e+00
## ... (rows 3-71 omitted: y is exactly 0 for cpp up to 0.70) ...
## 72 0.71 3.695526e-310
## 73 0.72 3.338739e-294
## ... (rows 74-89 omitted: y rises smoothly through tiny values) ...
## 90 0.89 5.942758e-23
## 91 0.90 5.369007e-07
## 92 0.91 1.000000e+00
## ... (rows 93-101 omitted: y is exactly 1 for cpp from 0.91 to 1.00) ...
approx(x = test$cpp, y = test$y, xout = 0.9)$y
## [1] 5.369007e-07
approx(x = test$cpp, y = test$y, xout = 0.95)$y
## [1] 1
• So for example, if we want a 90% credible set, it will tell us to use a threshold of 5.369e-07.
• Another problem is that in a lot of cases, a GAM cannot be fitted because the covered column is all 1s.
### Too Many Data-points
• Fit a GAM to all the data points, so for 5000 simulations for each SNP causal there would be 5000*200*200 data points to fit a GAM to.
• The mgcv::gam function cannot handle this; in fact, it struggles even with nrep=100.
• There is another method, mgcv::bam which is for large datasets. “The fitting methods used by gam opt for certainty of convergence over speed of fit. bam opts for speed.”
• Other ideas:
1. Try optimizer = bfgs option in gam.
2. Change smoothing basis to bs = "cr" in gam. (The default thin plate regression spline is costly for large datasets).
• I investigated these other methods, applying them all to the same data. My workflow is described below:
1. Simulate some data, that is, follow the standard steps and obtain nrep*nsnp data.frames. Here we consider 200 snps and simulate 15 credible sets for each SNP considered causal.
2. Each of these data.frames has a row for the cusum of the ordered pps and a covered column (all 0s until hit the CV, then 1).
3. rbind all of these to form one big data set with nrep*nsnp*nsnp=15*200*200=6e+05 data points.
4. Add a column for logit(cusum).
5. Fit a GAM using each of the following methods:
m.gam <- mgcv::gam(cov ~ s(logitcpp), data = d, family = "binomial")
m.bam <- mgcv::bam(cov ~ s(logitcpp), data = d, family = "binomial")
m.bfgs <- mgcv::gam(cov ~ s(logitcpp), data = d, optimizer = c("outer","bfgs"), family = "binomial")
m.cr <- mgcv::gam(cov ~ s(logitcpp, bs = "cr"), data = d, family = "binomial")
1. I then use each of the models to predict over a grid of claimed coverage values.
x <- seq(0.0001, 0.999999, by = 0.01)
p.gam <- invlogit(predict(m.gam, newdata = data.frame(logitcpp = log(x/(1-x)))))
p.bam <- invlogit(predict(m.bam, newdata = data.frame(logitcpp = log(x/(1-x)))))
p.bfgs <- invlogit(predict(m.bfgs, newdata = data.frame(logitcpp = log(x/(1-x)))))
p.cr <- invlogit(predict(m.cr, newdata = data.frame(logitcpp = log(x/(1-x)))))
1. The output is shown below; note that there are no real discrepancies between the fits of the four models.
data <- readRDS("/Users/anna/Google Drive/PhD/feb/credsets/check_GAM/testdata.RDS")
d <- data$d
plot(d$cpp, d$cov, xlab = "size", ylab = "cov", cex = 0.2)
points(data$x, data$p.gam, col = "red", pch = 18)    # normal gam
points(data$x, data$p.bam, col = "blue", pch = 20)   # bam
points(data$x, data$p.bfgs, col = "green", pch = 17) # optimizer=bfgs
points(data$x, data$p.cr, col = "yellow", pch = 15)  # bs=cr
• We could then potentially use the fastest of these methods to back-estimate what threshold value would be required to obtain a credible set with the required coverage, using something like this:
# to have 95% coverage you need to use this value to obtain your credible set
approx(x = data$cpp, y = data$y, xout = 0.95)$y
### Which GAM Method?
• I am running a simulation to report the time taken to fit each of the GAMs for varying parameters.
NN=sample(c(5000,40000),1)
OR=sample(c(1.05,1.2),1)
thresh=sample(c(0.6,0.95,0.99),1)
• I will do this for nrep=15, nrep=20, nrep=50.
library(data.table)
## Warning: package 'data.table' was built under R version 3.5.2
library(ggplot2)
data50 <- sapply(sims50, as.numeric)
boxplot(data50[, c(1:4)], main = "nrep=50", ylab = "time")
data20 <- sapply(sims20, as.numeric)
boxplot(data20[, c(1:4)], main = "nrep=20", ylab = "time")
data15 <- sapply(sims15, as.numeric)
boxplot(data15[, c(1:4)], main = "nrep=15", ylab = "time")
|
# Find the (b) Probability of getting an ace from a well shuffled deck of 52 playing cards?
3. Find the
(b) Probability of getting an ace from a well shuffled deck of 52 playing cards?
$=\frac{4}{52}=\frac{2}{26}=\frac{1}{13}\approx 0.0769$
|
# This note has been used to help create the KVPY Exam Preparation wiki
The shortlist for the first level of KVPY and the results of the second level of the AMTI NMTC were announced. In KVPY, I have qualified for the interview, and I got rank 15 in the NMTC. Have any other people on Brilliant qualified? If so, post in the comments!
Note by Nanayaranaraknas Vahdam
2 years, 7 months ago
Sort by:
Congratulations to @Aditya Raut, as I know he qualified in KVPY · 2 years, 7 months ago
Congrats to both of you · 2 years, 7 months ago
Many many congratulations and thank you very much! BTW our region's RMO results are out, and I'm selected for INMO and the Training Camp... What about yours (I guess Tamil Nadu region)
Also, had you appeared for NTSE? Results of NTSE are also out (yesterday only) ... One of the things I'm happy about... · 2 years, 6 months ago
Congrats, bro! Hope u represent our country... · 2 years, 6 months ago
When is the INMO training camp for you? Also, what website do you go to for RMO results? · 2 years, 6 months ago
I didn't clear NTSE, and RMO results for Tamil Nadu region are not out yet. Waiting in anticipation. · 2 years, 6 months ago
Are the state board's students eligible for KVPY or NTSE? · 2 years, 5 months ago
I qualified for KVPY interview · 2 years, 5 months ago
How did your interview go? · 2 years, 5 months ago
i have the interview on 16 feb,(i'm in Delhi) · 2 years, 5 months ago
what is meant by original marks statement in kvpy sx call letter? Is it class 10 mark sheet? · 2 years, 6 months ago
The marks you got the previous year. So, only for SA it is 10th marks · 2 years, 5 months ago
Qualified for interview....Have any ideas for interview preparation? Am totally clueless :( · 2 years, 7 months ago
One of my seniors said study 11th portion thoroughly, and be confident · 2 years, 6 months ago
Seniors say that I gotta lie about my future plans of JEE and tell them that I will go into research field...Not planning on doing that...You? · 2 years, 6 months ago
I am going for JEE prep, but I am also interested in the research sciences · 2 years, 6 months ago
|
Viewpoint
# Zooming in on Ultracold Matter
Physics 12, 36
Two superresolution microscopy methods can image the atomic density of ultracold quantum gases with nanometer resolution.
Ultracold atoms are an exceptionally versatile platform to test novel physical concepts. They have greatly advanced our understanding of the physics of many-body systems and allowed precision measurements of fundamental constants. They are also a promising architecture for quantum computation and quantum simulation. A key to the practicality of ultracold atoms is the ability to image them with high spatial resolution. Available microscopy schemes have reached sufficient resolution to detect individual atoms trapped in optical lattices with submicron spacings, but their spatial resolution is typically limited to about half the wavelength of the imaging light. Now, two independent teams—one led by Cheng Chin from the University of Chicago, Illinois [1], and the other led by Steve Rolston and Trey Porto from the University of Maryland, College Park [2]—have reported subwavelength-resolution imaging techniques for ultracold atoms. The methods, capable of resolving objects up to 50 times smaller than the optical wavelength, have allowed the teams to map the shape of atomic density distributions on nanometer scales. Nanoscale maps of atomic density will be important observables for probing many-body effects in cold atomic and molecular systems.
Researchers have devised several methods to image cold atomic gases. The most popular one involves switching off the trapping lattices and imaging the atomic absorption after the atoms have expanded. Another recent technique, dubbed quantum-gas microscopy, can resolve single atoms within the optical lattice [3–5]. Relying on standard optical imaging, these methods are fundamentally limited by diffraction. As Ernst Abbe found in 1873, the minimum resolvable distance between two spots is proportional to the wavelength of the imaging light. For imaging at optical wavelengths, the resolution typically lies between 0.3 and 0.7 $\mu\text{m}$. However, many interesting features in the atomic distribution (represented by the atomic density wave function) occur on the scale of nanometers. Several approaches have attempted to overcome the diffraction limit. One group achieved a resolution of 150 nm by probing ultracold atoms with the focused electron beam of a scanning electron microscope [6], but such a microscope cannot be easily integrated in cold-atom setups. Other researchers have implemented a combination of laser light and microwave fields to manipulate individual atoms with subwavelength resolution [7], but the technique wouldn't be suitable for mapping atomic density wave functions.
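For reference, the standard Abbe criterion behind that statement (my addition; the article itself gives only the proportionality) is $d_{\min} = \lambda/(2\,\mathrm{NA})$ for an objective of numerical aperture $\mathrm{NA}$. For visible light with $\lambda \approx 400$–$700$ nm and typical numerical apertures between roughly 0.5 and 1, this gives $d_{\min}\approx 0.2$–$0.7\ \mu\text{m}$, consistent with the range quoted above.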
The teams of Chin and of Rolston and Porto have developed new imaging techniques for ultracold atoms that significantly surpass Abbe’s limit. Their work builds on developments in superresolution microscopy, which were recognized by the 2014 Nobel Prize in Chemistry. In particular, the teams’ superresolution strategies are reminiscent of that used in “stimulated emission depletion microscopy,” in which the nonlinear response of fluorescent markers is exploited to localize individual markers with nanometer resolution [8]. A similar nonlinear response of atoms is put to use by the two teams. This approach relies on preparing only a fraction of the atoms in a specific atomic energy state. This “select” population is confined to an extremely small spatial window (Fig. 1), which, thanks to saturation effects, can be much smaller than the optical wavelength. By scanning the position of the window over the cloud and selectively imaging only the atoms inside the window, the spatial structure of the cloud can be mapped with nanometer resolution.
The group from the University of Chicago used cold cesium atoms trapped in a one-dimensional optical lattice with a periodicity of 426 nm, created by a standing-wave pattern from an infrared laser. With a laser at a different wavelength, the researchers generated another standing wave that had a similar periodicity but could be shifted along the lattice direction with nanometer precision. The light from this second laser drove a specific transition between two hyperfine states of the atoms. Since just a few photons are sufficient to cause this transition, only atoms within a narrow window around the nodes of the second lattice remained unexcited. By scanning the position of this unexcited window across the lattice and imaging the fraction of atoms in the excited state, the team could build a map of the atomic density distribution with a resolution of just over 30 nm, determined by the width of the window. To demonstrate the capabilities of their microscopy technique, the researchers quickly displaced the trapping lattice by only a fraction of the wavelength and then imaged the evolution of the atomic density distribution as a function of time. They could see that the displacement induced an oscillatory motion of the atomic wave packets with an amplitude of about 100 nm.
The team from the University of Maryland followed a similar approach, using ytterbium atoms in a one-dimensional lattice. Instead of leaving the atoms unexcited in a narrow window, as in the Chicago team's setup, their scheme only excited atoms within the window. The authors applied a combination of a standing-wave laser field and a homogeneous field. The two fields transferred the atoms into a so-called dark state made up of the quantum superposition of two internal energy states. As a result of the spatial distribution of the laser fields, the exact makeup of this superposition varied across the atomic cloud. At certain positions, corresponding to the nodes of the control field, the fields produced a subwavelength-sized window in which the atoms were predominantly in a specific dark state superposition. By moving the window's position and selectively detecting the atoms in that state, the researchers acquired maps of the atomic density with an impressive resolution of 11 nm, about one fiftieth of the wavelength of the experimental lasers. The team demonstrated the power of the technique by imaging the atomic wave functions for two different optical lattices, one with a sinusoidal shape and the other with an approximately rectangular shape (a Kronig-Penney lattice). The setup could distinguish shape differences between the wave functions for the two cases on scales of tens of nanometers.
One can anticipate that new superresolution imaging of ultracold atoms will benefit a wide array of experiments relying on the direct measurement of atomic density wave functions, and of their dynamics, in many-body quantum systems. The technique could, for instance, image vortices with submicron sizes that occur in stirred Bose-Einstein condensates and other trapped configurations that are not periodic [9]. It could also provide a detailed picture of the complex wave functions of atoms in optical lattices in which the atoms occupy high-energy bands. Such lattices show promise as quantum emulators of exotic solid-state crystals [10]. Finally, it could allow subwavelength-resolution nondemolition measurements of the density of atoms inside an optical cavity—a setup that could see the quantum motion of atoms without disturbing them [11].
This research is published in Physical Review X.
## References
1. M. McDonald, J. Trisnadi, K.-X. Yao, and C. Chin, “Superresolution microscopy of cold atoms in an optical lattice,” Phys. Rev. X 9, 021001 (2019).
2. S. Subhankar, Y. Wang, T.-C. Tsui, S. L. Rolston, and J. V. Porto, “Nanoscale atomic density microscopy,” Phys. Rev. X 9, 021002 (2019).
3. W. S. Bakr, J. I. Gillen, A. Peng, S. Fölling, and M. Greiner, “A quantum gas microscope for detecting single atoms in a Hubbard-regime optical lattice,” Nature 462, 74 (2009).
4. S. Kuhr, “Quantum-gas microscopes: A new tool for cold-atom quantum simulators,” Natl. Sci. Rev. 3, 170 (2016).
5. C. Gross and I. Bloch, “Quantum simulations with ultracold atoms in optical lattices,” Science 357, 995 (2017).
6. T. Gericke, P. Würtz, D. Reitz, T. Langen, and H. Ott, “High-resolution scanning electron microscopy of an ultracold quantum gas,” Nat. Phys. 4, 949 (2008).
7. C. Weitenberg, M. Endres, J. F. Sherson, M. Cheneau, P. Schauß, T. Fukuhara, I. Bloch, and S. Kuhr, “Single-spin addressing in an atomic Mott insulator,” Nature 471, 319 (2011).
8. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: Stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19, 780 (1994).
9. K. W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, “Vortex formation in a stirred Bose-Einstein condensate,” Phys. Rev. Lett. 84, 806 (2000).
10. X. Li and W. V. Liu, “Physics of higher orbital bands in optical lattices: A review,” Rep. Prog. Phys. 79, 116401 (2016).
11. D. Yang, C. Laflamme, D. V. Vasilyev, M. A. Baranov, and P. Zoller, “Theory of a quantum scanning microscope for cold atoms,” Phys. Rev. Lett. 120, 133601 (2018).
Professor Stefan Kuhr is Head of the Optics Division at the University of Strathclyde and an expert in experimental cold-atom physics. Important areas of his research are single-atom detection in optical lattices using quantum-gas microscopes and quantum-nondemolition photon measurement. He was a recipient of an ERC Starting Grant and he is a Fellow of the Institute of Physics.
|
# You use 4 gallons of water on 14 plants in your garden. At this rate, how much water will it take to water 35 plants?
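A quick unit-rate calculation (worked out here; the original post leaves the question unanswered):
$$\frac{4\ \text{gallons}}{14\ \text{plants}}\times 35\ \text{plants}=\frac{4\times 35}{14}\ \text{gallons}=10\ \text{gallons}$$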
|
Can an asteroid go through the sun?
I'm looking into a story where the Earth is hit by an asteroid going very fast. This must be a complete surprise and I want to go a mile further than just giving the entirely valid solution of it coming from deep space.
I want the asteroid to come through the Sun and hit the Earth afterwards.
The asteroid in question can go any speed between 0.1c to 0.9c to accomplish this. Reasons for the speed are irrelevant, but ejection from a violent explosion of neutron stars or a sort of solar sail type that ablated the asteroid while pushing it can be imagined at will.
The asteroid should be large enough to impact the Earth after going through the sun. It doesn't matter if the result on Earth will just be a crater the size of a tennis ball or rip the Earth apart. It just needs to reach the surface.
I would like the Sun to be relatively unaltered afterwards.
Extra considerations can be the solar ejections that could travel along with the asteroid.
The main question is: Can an asteroid going between 0.1c and 0.9c go through the Sun and hit the Earth?
The following additions and considerations to the answer are appreciated, but not mandatory.
• I would like the Sun to be relatively unaltered afterwards.
• The impact the asteroid has on the surface of the Earth.
• Star mass that can come with the asteroid.
• If the object were hard and dense enough, and big enough, and going fast enough, a small core might, MIGHT, come out the other side. Maybe with a little handwavium?
– Len
Jul 1 at 18:31
• Do you want it to go through the center of the Sun, or passing obliquely near the surface is Ok? Jul 1 at 21:00
• To make it simple, no. The Sun is so hot that any kind of material (that we've heard of) will melt and then vaporize before it even reaches the surface.
– user86525
Jul 1 at 22:48
• The earth could be hit tomorrow as a complete surprise, without anything like this; we don't monitor enough of the sky to spot some rogues.
– John
Jul 2 at 3:55
• @John moreover, we have particular difficulty observing if anything is coming from the direction of our sun. Jul 2 at 16:43
No.
I'm not knowledgeable enough to derive what would happen in this scenario, but I bet you Randall Munroe is. More specifically, he knows what happens to a baseball that travels at .9 c from the pitcher's mound to home plate. Spoiler alert: the ball never makes it, at least not in any form recognizable as solid matter, let alone as a ball.
If we consider the tiny mass of air the baseball is trying to "punch through" on its way across half the baseball diamond, and contrast that with the mass of the plasma an asteroid must displace or accelerate as it tries to punch through the sun, it should be apparent to even noobs like us that there's no object that could pull that off while also being small enough to be considered an asteroid.
1. First stopper: inertia. No matter how fast your asteroid is going, the inertia of the sun will stop it. You might think: "well, if I just crank up the speed, shouldn't it eventually have enough to punch through?" But don't forget: in the frame of reference of the asteroid, it's getting hit by a slice of the sun travelling at .9 c or whatever speed you choose. Unless your asteroid's mass starts to actually compete with a slice of the sun (which I highly suspect it couldn't, and still be called an asteroid), it will be absorbed without affecting the sun's inertia noticeably. (Rough numbers are sketched just after this list.)
2. Second stopper: vaporization. (Or like...plasmification...?) Whatever we call it, what happens to Randall's Relativistic Baseball (Wondrous Item, Major, Legendary) will happen to your asteroid. Maybe the explosion would actually increase the sun's luminosity very briefly? I'm not sure, but there's definitely nothing solid left to come out the other side.
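To put rough numbers on the inertia point (a back-of-envelope estimate of my own, not from the original answer), compare column masses along the path. For a central chord through the Sun,
$$\bar\rho_\odot D_\odot \approx (1.4\times10^{3}\ \text{kg/m}^3)\times(1.4\times10^{9}\ \text{m})\approx 2\times10^{12}\ \text{kg/m}^2,$$
while a 10 km rocky asteroid brings only
$$\rho_{\text{ast}}\,L \approx (3\times10^{3}\ \text{kg/m}^3)\times(10^{4}\ \text{m})=3\times10^{7}\ \text{kg/m}^2.$$
Column for column, the Sun outweighs the asteroid by a factor of order $10^{5}$, so momentum conservation stops it no matter how fast it is going.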
• FWIW, I had the exact same thought; faster ≠ better. Going faster rather increases the interaction between masses. The only way I can see this being plausible is for the impactor to be made of something that can remain coherent on its trip through the Sun, and you're going to have a difficult balancing act between going too fast and heating up too much as a result, and going too slow that you take too long and... heat up too much. Some sort of ablative shielding might be an answer, except we don't want to muck up the Sun in the process... Jul 1 at 18:32
• Good consideration! I had forgotten about the baseball question. The baseball had the problem that the air couldn't get out of the way quick enough. That, as well as the small size, made it evaporate quickly. But can we just go slower/use an asteroid big enough to have the asteroid survive the ablation? Jul 1 at 18:33
• @Trioxidane, unless it is self propelling, see the inertia problem. It's not quite that simple because the Sun doesn't have a lot of coherency on its own, but there's still a lot of mass you have to shove aside. Calculating this mass would probably be helpful. Jul 1 at 18:34
• the baseball scenario is very different from what we have here. size (square-cube law), composition, and speed matter very much. also, i don't get where the sun's inertia matters. "affecting the sun's inertia". inertia isn't affected by anything, it's a fundamental property of mass.
– ths
Jul 1 at 19:49
• @jamesqf - there's no requirement that it go straight through the sun's core. Just skimming the atmosphere might qualify as "through". Jul 2 at 19:26
For certain values of "through" I'll give a definite yes.
To be clear, the Sun is much too dense to permit a near-central passage of anything less dense than white dwarf matter at merely orbital or even solar escape velocity; such a "center punch" impact would result in such a lesser object being absorbed and, assuming a mass less than that of a planet like Earth or Venus, virtually no longer term effect on the Sun itself.
A mass with the density of a white dwarf traveling at 0.1 C or faster might potentially punch right through the star, but doing so would be disruptive enough to the star to have an effect on planets around it similar to a supernova explosion, which pretty well fails the "sun must be largely unaffected" desideratum.
However, stars aren't solid objects with a clearly defined surface like a rocky planet. Instead, they're balls of gas, with steadily (or unsteadily) decreasing density going from the core, where the fusion takes place, out to the limits of the atmosphere (by some definitions, for our Sun, near or even beyond the orbit of Mercury). By those definitions, the Parker Solar Probe has an orbit that takes it well inside our parent star.
And that's your solution. No, nothing in the size range you'd reasonably call an "asteroid" can go centrally through the Sun (or any other main sequence star), but it can pass through the Sun's atmosphere, and if its velocity is high enough, possibly even break through the chromosphere or photosphere (visually, this would be "inside" the star even to astronomers of the 19th century) and spend so little time there that it merely ablates somewhat from gas interaction. So long as it's coherent enough not to break up at that point like the Chelyabinsk bolide did in Earth's atmosphere, it could then continue (and given 0.1 C or higher speed, with little change of course) toward its rendezvous with Earth.
The amount of bending of the path by the Sun's gravity might well be just about enough for the object to have come from almost directly behind the Sun (from the POV of Earth's position at collision time). The first clue we'd get would be a disruption of the Sun's atmosphere that might be mistaken for a solar flare or coronal mass ejection of unusual configuration; then, no more than 80 minutes later, "Kaboom!" Marvin the Martian will be vindicated.
• Just grazing the top of the photosphere the density would be about 0.1 gm/ cubic meter, air at STP is about 1250. Maybe you could even graze the sun and survive. Jul 1 at 20:09
• "Merely ablates somewhat" is a serious understatement. As other answers mention, the impact upon even the least dense parts of the Sun atmosphere would shatter any kind of known material. Unless the asteroid is actually a black hole, it will come out of the Sun as a beam of ultraheated plasma which will spread out in a cone due to electromagnetic repulsion. At 0.1C just the adiabatic compression of the atmospheric hydrogen is going to provoke huge termonuclear explosions on the front side of the asteroid. Jul 2 at 8:42
• @mcalex If it's big enough not to ablate away even in the Sun's lower atmosphere, no, no "tennis ball" or even "tennis court" sized craters; maybe one as small as Wembley Stadium, but probably not, since it still has to impact Earth's atmosphere at 0.1C or faster. Yes, there might be up to 80 minutes warning, probably from a day-side solar telescope (the ones that monitor sunspots, prominences, etc.); some of those are monitored in real time for solar weather warnings. There might also be no warning at all, if it was cloudy over the only solar telescope on the day side. Jul 2 at 11:08
• @mcalex Solar weather satellites aren't monitored in real time, quite, so there'd be less than 80 minutes warning from them. Further, seeing an eruption (or photospheric fusion event per other comment) wouldn't necessarily serve as warning something was headed for Earth -- someone would have to a) be monitoring in real time, b) realize what causes that event, and c) determine where the object is headed, and the time to do that is subtracted from the 80 minutes... Jul 2 at 11:10
• @GaryWalker At a speed of 0.1c, even a density of 0.1 gm per cubic metre is like being sandblasted with nukes. At that density, a small asteroid of radius 56 cm (cross-section 1m^2) would collide with about 3 tons of matter per second of skim-time. 3 tons * (0.1c)^2 * 0.5 is a lot of kinetic energy. Jul 3 at 3:05
TL;DR: No, because an object moving at high speeds will break apart even quicker.
I suspect it wouldn't. I don't know if there's been any work done on relativistic impacts onto the Sun, but a group modeled comet impacts more like those we'd actually observe. I think, though, that we can extrapolate a few things from their results:
• The body would rapidly lose mass thanks to the intense shock front it would produce in the Sun's atmosphere and, if it made it that far, the interior. The mass loss it would experience scales like $$\dot{M}\propto\rho v^3$$, with $$\rho$$ the density of the medium it's traveling through and $$v$$ the speed. A faster-moving body takes less time to pass through the entire Sun, but that timescale only scales as $$\tau\propto 1/v$$, so the total mass lost goes roughly as $$\dot{M}\tau\propto v^2$$: a body traveling faster loses more total mass, not less.
• Because of this, at higher speeds it actually seems that the final airburst destroying the body would happen at higher altitudes above the surface, rather than further into the Sun - although I think it's a bit more complicated at relativistic speeds.
• In this regime, the lost energy would go partly into the ablated material and partly into the atmosphere, rather than entirely into atmospheric heating, which is interesting, although the ablated material would eventually become part of the surrounding medium.
In short, mass loss scales strongly with the speed of the object, and the asteroid would be torn apart even quicker.
• At that kind of velocity I don't think it will be anything like a spacecraft taking a bath of fire. Rather than most of the energy going into a shock front the particles are going to embed into the face of the asteroid. The only hope the asteroid has to survive is if it has enough thickness to get through before it ends up like the previously mentioned baseball. Jul 2 at 3:41
No, but it can come "out of the sun". Ask a military pilot.
The asteroid would have to be falling almost "straight down" from interstellar space or the Sun's Oort cloud, in an orbit that will take it very close to the sun. It wouldn't be noticed while it was far from the sun. It would be hidden by the sun or the sun's glare at its closest approach, and it would come "up" from somewhere inside the orbit of Mercury on a collision path with Earth.
BTW if aliens wanted to "take out" the Earth with plausible deniability, this is probably how they would do it. Just nudge an Oort cloud object of appropriate mass into the Earth-impacting orbit. If they installed a (relatively) small terminal guidance system on the object, it would get vaporized by the impact and nobody could ever prove anything.
• I appreciate the idea of aliens wanting plausible deniability. Jul 3 at 17:32
Newton came up with a fairly simple way to estimate how far a projectile can travel through a medium.
Newton's approximation for impact depth
The formula is $$D = L\,\frac{\rho_{\text{projectile}}}{\rho_{\text{medium}}},$$ where $L$ is the length of the projectile.
So we need some information about our asteroid. Let's assume a nickel-iron asteroid with a density of 7 g/cm^3 and a diameter of 10 km.
The density of the sun averages 1.4 g/cm^3, so if the sun had uniform density the asteroid would stop after traveling 50 km. Clearly not far enough.
But the density of the sun depends strongly on how deep into the sun you are. At the core the density is 150 g/cm^3, through which our asteroid would penetrate only 466 meters.
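For explicitness, the arithmetic behind those two numbers:
$$D_{\text{uniform}} = 10\ \text{km}\times\frac{7}{1.4} = 50\ \text{km},\qquad D_{\text{core}} = 10\ \text{km}\times\frac{7}{150}\approx 0.47\ \text{km}.$$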
Fortunately, the outer layers of the sun have much lower densities.
So the question becomes what you mean by "going through the sun". As established, your asteroid will not pass through the sun's core, but what about the outer layers? A natural way to view passing through the sun is passing through the photosphere, the point at which the sun becomes opaque to visible light. Here the density is 0.2 g/cm^3; our asteroid can penetrate 350 km through material of this density. The photosphere is about 100 km thick, and we would need to traverse it twice and at an angle. So the photosphere on its own is almost enough to stop our asteroid. The densities outside the photosphere drop off quickly and as such contribute less to slowing our asteroid, but this is probably enough that our asteroid will fail to penetrate.
Since we can almost penetrate the photosphere, we should be able to penetrate the chromosphere. I would describe this more as grazing the sun than going through the sun, though. Alternatively we could change the asteroid; an asteroid with a diameter of 100 km would penetrate easily.
Considerations that the above ignores:
• Melting and evaporation of the asteroid. While the sun is very hot, the densities we are travelling through are low enough that heating effects from contact are limited. We are also going fast enough that there is not all that much time for heating to occur.
• Xkcd-style fusion effects from the asteroid impacting the gases of the sun. I do not think this will change much; the asteroid is imparting momentum to the gas in front of it, and the mechanism of this momentum transfer is not all that interesting.
• Re-shaping of the asteroid: the sun is going to act on the asteroid, slowing it down. This force acts on the front of the asteroid and could cause it to flatten, lessening the projectile-length term in the approximation above. How much of an effect this has is not a question I am qualified to answer, but I think it is more of a concern if the asteroid barely penetrates than if it penetrates easily.
• Exit speed. The asteroid will impart momentum to the gases it is travelling through, and thus leave the sun with less speed than it arrived with. I do not know how to calculate how much speed is lost.
• Relativistic momentum: traveling at relativistic speeds, the asteroid has more momentum than Newtonian physics indicates. This breaks an assumption of the approximation and means that you can go further than you would otherwise expect.
Impact on earth. A 10 km diameter nickel-iron asteroid has a mass of about 3.7×10^15 kg. Traveling at 0.1c, this gives a relativistic kinetic energy of roughly 1.7×10^30 joules, on the order of four million times the energy released by the Chicxulub impact. The conclusion is that this would be a mass-extinction event. The gravitational binding energy of the earth is about 2×10^32 joules, so even this impact carries only around 1% of the energy needed to destroy the planet. If instead we look at 0.9c, we get an energy of about 4.3×10^32 joules, roughly twice the binding energy, so a hit at that speed could plausibly disrupt the planet outright.
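For transparency, here is the arithmetic behind those figures (my own check of the mass and the relativistic kinetic energy):
$$m=\tfrac{4}{3}\pi r^{3}\rho=\tfrac{4}{3}\pi\,(5000\ \text{m})^{3}\times 7000\ \text{kg/m}^{3}\approx 3.7\times10^{15}\ \text{kg},$$
$$E=(\gamma-1)mc^{2},\qquad \gamma=\frac{1}{\sqrt{1-\beta^{2}}},$$
with $\beta=0.1$ giving $\gamma\approx 1.005$ and $E\approx 1.7\times10^{30}$ J, and $\beta=0.9$ giving $\gamma\approx 2.29$ and $E\approx 4.3\times10^{32}$ J.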
The 100 km diameter asteroid that could actually penetrate the photosphere would carry a thousand times more energy still, so its impact would exceed the gravitational binding energy of the earth even at 0.1c.
Yes, so long as it comes through the edge of the sun.
If it goes through the core it'll be stopped. Have it head through the outer, thinner layers at an enormous speed.
The earth will probably remain intact, but with substantial damage, depending on the speed.
https://www.calculatorsoup.com/calculators/physics/kinetic.php
Using this calculator and an estimated 10% of light speed, with a mass of 10^16 kilograms, it would have about 4.5×10^30 joules of energy. You need roughly forty-five times that to destroy the earth, and with the slowing the sun will impose, you'll probably not hit at peak speed anyway. You'd need a larger mass or a higher speed to destroy the earth. The sun will be shocked and probably unstable for a while, but will be generally fine.
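As a sanity check on that figure (my own arithmetic, not taken from the linked calculator):
$$E=(\gamma-1)mc^{2}\approx(1.00504-1)\times 10^{16}\ \text{kg}\times(3\times10^{8}\ \text{m/s})^{2}\approx 4.5\times10^{30}\ \text{J},$$
and with the earth's gravitational binding energy around $2\times10^{32}$ J, you indeed need a few dozen times more energy to unbind the planet.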
Like others said earlier, the answer is a big, fat NO.
The inside of the Sun is super tightly packed, super hot plasma. It's so dense that photons emitted from the core are believed to be trapped for thousands of years until they finally bubble up to the photosphere and get fired (pun not intended) into space. There is no way any solid material could get through this.
Or is there?
Well, there is, if the asteroid itself isn't just a chunk of rock, but something like an inverted tokamak: an object that generates such a strong magnetic field that it's able to move the plasma out of its way. However, the energy required for this would be enormous, as it must exceed the energy of the plasma itself.
So far all the answers discuss whether the asteroid can survive the bath of fire and don't address your secondary points at all.
Sun unaltered: That's a certainty. While you could get some pretty cataclysmic results from slamming a relativistic asteroid into the sun, any such scenario destroys the asteroid. The only scenario in which the asteroid making it through is even a possibility is barely grazing the sun.
Impact on Earth: Cataclysmic. While most of the mass will burn away (and it won't survive at all if it burns down too small), at those kinds of speeds it won't take a lot of mass to make a mighty big boom.
Star mass brought along: Infinitesimal. At those velocities "solid" ceases to be very meaningful. Star matter doesn't hit and bounce or hit and stick, but rather penetrates into the asteroid (look at the baseball mentioned elsewhere). Any appreciable amount of matter hitting will vaporize the layer it sticks into, and that material will leave as a very energetic plasma, taking with it the stellar gases and their fusion products. Only on the way back out, once the density has fallen low enough that the surface isn't boiling off, will any stellar mass actually be retained.
|
## 'Combining Transformations' printed from http://nrich.maths.org/
In this problem, we shall use four transformations, $I$, $R$, $S$ and $T$. Their effects are shown below.
We write $R^{-1}$ for the transformation that "undoes" $R$ (the inverse of $R$), and $R S$ for "do $R$, then $S$".
We can write $T$ followed by $T$ as $T T$ or $T^2$, and $T$ followed by $T$ followed by $T$ as $T T T$ or $T^3$ and so on.
Similarly, we can write $S^{-1}S^{-1}$ as $S^{-2}$ and so on.
Try to find simpler ways to write:
$R^2$, $R^3$, $R^4$, $\dots$
$S^2$, $S^3$, $S^4$, $\dots$
$T^2$, $T^3$, $T^4$, $\dots$.
What do you notice?
Can you find a simpler way to write $R^{2006}$ and $S^{2006}$?
Can you describe $T^{2006}$?
Let's think about the order in which we carry out transformations:
What happens if you do $R S$? Do you think that $S R$ will be the same? Try it and see.
Is $T^2R$ the same as $R T^2$?
Is $(R T)S$ the same as $S(R T)$?
Try this with some other transformations.
Does changing the order always/sometimes/never produce the same transformation?
Now let's think about how to undo $R S$. What combination of $I$, $R$, $S$, $T$ and their inverses might work? Try it and see: does it work? If not, why not? Can you find a combination of transformations that does work?
How can you undo transformations like $S T$, $T R$ and $R S^2$?
This problem is the middle one of three related problems.
The first problem is Decoding Transformations and the follow-up problem is Simplifying Transformations .
|
# Gram-Schmidt process vectors
• April 2nd 2008, 02:10 PM
Gram-Schmidt process vectors
Prove that if $\{ w_{1}, w_{2}, ... , w_{n} \}$ is an orthogonal set of nonzero vectors, then the vectors $v_{1}, v_{2}, . . . , v_{n}$ derived from the Gram-Schmidt process satisfy $v_{i} = w_{i} \ \ \ \forall i$
my proof so far:
I intend to use induction.
Now, $v_{1} = w_{1}$ is trivial.
Suppose the claim is true for $n = k$; then I have $v_{k} = w_{k} - \sum ^{k-1}_{j=1} \frac {\langle w_{k}, v_{j} \rangle}{ \| v_{j} \| ^2 } v_{j} = w_{k}$
Now, how would I use that information, as well as the fact that the vectors are orthogonal, to show that the claim holds for $k+1$?
thanks
• April 3rd 2008, 12:08 AM
Opalg
Quote:
Prove that if $\{ w_{1}, w_{2}, ... , w_{n} \}$ is an orthogonal set of nonzero vectors, then the vectors $v_{1}, v_{2}, . . . , v_{n}$ derived from the Gram-Schmidt process satisfy $v_{i} = w_{i} \ \ \ \forall i$
Now, $v_{1} = w_{1}$ is trivial.
Suppose the claim is true for $n = k$; then I have $v_{k} = w_{k} - \sum ^{k-1}_{j=1} \frac {\langle w_{k}, v_{j} \rangle}{ \| v_{j} \| ^2 } v_{j} = w_{k}$
This is easy if you use strong induction (in other words, assume that the inductive hypothesis holds for all n≤k, not just for n=k). The formula for v_{k+1} is $v_{k+1} = w_{k+1} - \sum ^{k}_{j=1} \frac {\langle w_{k+1}, v_{j} \rangle}{ \| v_{j} \| ^2 } v_{j}$. But if $v_j = w_j$ for 1≤j≤k then each term in that sum will vanish, because the w's are orthogonal to each other.
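Spelling out the vanishing step (a small elaboration of the argument above):
$$v_{k+1} = w_{k+1} - \sum_{j=1}^{k}\frac{\langle w_{k+1}, v_{j}\rangle}{\|v_{j}\|^{2}}\,v_{j} = w_{k+1} - \sum_{j=1}^{k}\frac{\langle w_{k+1}, w_{j}\rangle}{\|w_{j}\|^{2}}\,w_{j} = w_{k+1},$$
since $v_{j} = w_{j}$ for $1\le j\le k$ by the strong inductive hypothesis and $\langle w_{k+1}, w_{j}\rangle = 0$ by the orthogonality of the $w$'s.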
|
Newton proved in Principia that the total gravitational force that a sphere exerts on a material point is the same as if all the mass of the sphere were concentrated at its center.
I wonder if it's possible to prove that a sphere / spherical shell is the only shape with this property (or perhaps there are other shapes? I don't know).
We can define center as "center of mass", and I'm only interested in Newtonian physics (as opposed to relativistic physics).
-
It is very easy to construct arbitrary shapes that have the property that the gravitational potential outside is just like all the mass were concentrated at a point.
$$\phi(x) = - {M\over r}$$
Then take any shape, take two nested cubes for definiteness. Then make $\phi(x)$ be a constant in the interior of the inner cube larger than the supremum of the values outside the cube, and make the potential rise up in a gradually down-curving way to the inner cube's value.
Then $\rho(x) \propto \nabla^2 \phi$ is a mass distribution which produces this field, and $\nabla^2\phi$ is zero inside the inner cube and outside the outer cube. The only thing you need to check is that the mass density is everywhere positive.
If the positive mass thing doesn't work on the first try, you can always make the potential on the inner cube bigger, or if worst comes to worst, draw an inscribed sphere in the inner cube, and a circumscribed sphere around the outer cube, and fill the region between the two spheres with a uniform positive mass density equal to the maximum negative magnitude of the density from the cube construction alone.
It is just not true that the sphere is the only shape with a pointlike exterior field, not even close.
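To make the construction explicit (standard Poisson-equation bookkeeping; the notation here is mine):
$$\nabla^{2}\phi = 4\pi G\rho \quad\Longrightarrow\quad \rho(x) = \frac{1}{4\pi G}\,\nabla^{2}\phi(x),$$
so any smooth $\phi$ that equals $-GM/r$ outside some bounded region defines a compactly supported mass distribution whose exterior field is exactly that of a point mass. The only nontrivial requirement is $\rho \ge 0$ everywhere, which is what the padding tricks above arrange.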
-
Can you explain it a little further? Close to the vertex of the outer cube the potential is much bigger. What is wrong about that reasoning? – Eduardo Guerras Valera Nov 4 '12 at 1:47
By choosing a surface extremely close to one of the corners of your outer cube, the potential will be larger than on the rest of the surface. No density distribution in the cube can compensate for that, since you cannot make two hyperbolas match if they are not centered on the same zero, e.g. A/x will never equal B/(x-2) for all x. – Eduardo Guerras Valera Nov 4 '12 at 2:08
@Eduardo: This is false, just choose any $\phi$ with appropriate asymptotics, and choose the density to be its Laplacian. – Ron Maimon Nov 4 '12 at 2:17
Oh sh... You are right. I erase my answer. I was thinking in analogy with the electric potential and conducting surfaces... It is too late in Spain now... – Eduardo Guerras Valera Nov 4 '12 at 2:28
@Eduardo: It's ok, I was confused too for a minute or two, and it's easy to get lost in the fog of chalk dust while teaching. – Ron Maimon Nov 4 '12 at 2:34
|
Irrational numbers are real numbers whose decimal expansions neither terminate nor become periodic. Examples include pi and the square root of any prime number.
## Examples
Here is a small list of irrational numbers.
$\pi\,$
$\sqrt{5}\,$
$0.101001000100001000001000000100000001000000001000000000100000000001...$
$\sqrt{13}\,$
$e$
$\sqrt{\pi\,}\,$
$\sqrt{19}\,$
## Why You Can't Write Irrational Numbers As Fractions In Simplest Form
We all know we can't write irrational numbers as fractions of whole numbers, in simplest form or otherwise. But how do we know?
Allow me to demonstrate.
For the time being, let's assume that I can write out the square root of 2 as a fraction in simplest form. Since I don't know the values of the numerator and denominator, it will be set to a over b.
$\sqrt{2}\, = \frac{a}{b}\,$
If we play around with it a little, we end up with $2b^2$ equal to $a^2$.
$\sqrt{2}\, = \frac{a}{b}\,$
$2 = \frac{a^2}{b^2}\,$
$2b^2 = \frac{a^2}{b^2}\,b^2$
$2b^2 = a^2$
With this, we can say that $a^2$ is even. Hence, $a$ is an even number, because the square of an odd number is always odd. Since $a$ is even, let's set $a$ equal to $2c$.
$2b^2 = a^2$
$2b^2 = (2c)^2$
$2b^2 = 4c^2$
$\frac{2b^2}{2}\, = \frac{4c^2}{2}\,$
$b^2 = 2c^2$
With this, we can say that $b^2$ is even. Hence, $b$ is an even number.
But that's the problem. I've just proven a and b are both even numbers. Any fraction with the numerator and denominator being both even numbers is not in simplest form.
$\frac{2}{4}\, = \frac{1}{2}\,$
$\frac{4}{16}\, = \frac{1}{4}\,$
$\frac{16}{34}\, = \frac{8}{17}\,$
$\frac{8}{2}\, = \frac{4}{1}\, = 4$
$\frac{32}{6}\, = \frac{16}{3}\,$
$\frac{54}{80}\, = \frac{27}{40}\,$
$\frac{16}{48}\, = \frac{1}{3}\,$
Note how all fractions in simplest form have at least one odd number? The fraction I tried making of the square root of 2 has no odd numbers! Therefore...
$\sqrt{2}\, = \frac{a}{b}\,$ is an impossible fraction!
|
#### A Dual Oxide CMOS Universal Voltage Converter for Power Management in Multi-$V_{DD}$ SoCs
Dhruva Ghai, Saraju Mohanty, Elias Kougianos
University of North Texas
#### Abstract
Level converters are becoming a significant overhead for the circuits in which they are employed. If their power consumption continues to grow, they will fail to serve the very purpose they were built for. In this paper we propose the application of a dual-$T_{ox}$ (DOXCMOS) technique for the power-delay optimization of a DC to DC voltage level converter under oxide thickness ($T_{ox}$) and transistor geometry constraints. The results show power savings of $83\%$ and delay improvement of $60\%$ over existing designs. The proposed level converter is capable of performing level-up/down conversion, and blocking of the input signal. The design is area optimal, with a minimum number of transistors. It is a robust design producing a stable output for voltages as low as $0.6V$ and loads varying from $10fF$ to $200fF$ for a $90nm$ technology. The average power dissipation of the converter with a $45fF$ capacitive load is $19.89 \mu W$. The entire design cycle has been carried out up to physical design, including parasitic re-simulation. The physical design is fault tolerant and cross-talk noise free, thus suitable for Design For Manufacturability (DFM). To the best of the authors' knowledge, this is the first universal level converter designed using a DOXCMOS technology for power-delay optimization.
|
# Switching to Solar: What we can learn from Germany's success in harnessing clean energy.
Bob Johnstone, Prometheus Books, New York, 402 pages (illustrations). ISBN 978-1-61614-222-3. Paperback: $19 (6" x 9").
Reviewed by Michael DuVernois
Two of the biggest environmental issues looming over us, probably thought about far less than they should be, are global climate change and the decline of fossil fuels. Of course these two issues are tightly coupled to each other; every liter of oil burned is roughly 3 kg of carbon dioxide in the atmosphere and one less liter of oil available. With about 1.4 kW of solar energy arriving per square meter at Earth's orbit, solar energy is clearly the principal way forward from fossil fuels. As physicists, we go back to this very important number again and again: solar power is THE source of energy on Earth. It is the energy that drove the photosynthesis that grew the plants that turned into the oil. It is the energy that drives the winds and ocean waves, the other potential sources by which we can extract that energy in a sustainable manner. Switching to Solar: What we can learn from Germany's success in harnessing clean energy looks closely at a practical model of how to make this transition from carbon dioxide-emitting fossil fuels to solar power. Germany, a cloudy, northern-latitude nation, would not at first seem an obvious place for a major investment in solar electric power. But the German government was willing to lower the bar to entry by way of solar investment credits and electrical buy-back guarantees. Since this book appeared a year ago, a lot has happened in the field of solar power. Solyndra has gone from a proud example of US manufacturing to a political football, and even a symbol of failed renewable-energy dreams. The company lost $534 million and 1100 jobs when it failed. It had accounted for about 1.3% (either "only" or "fully", depending on your view) of the Department of Energy's loan portfolio. (Of course, the US has funded renewable energies at a much lower level than most other industrialized nations.)
Meanwhile, the European economy (and indeed the broader European experiment) has run into trouble. In the wake of the Fukushima disaster, Germany is in the process of shutting down nuclear power plants in favor of Russian natural gas. And more directly relevant to the discussion here, European nations are ending financial support for new solar installations. Part of the argument, in addition to the simple cost basis, is that the program has been a financial conduit not just to German solar manufacturers and installers but, primarily and increasingly, to Chinese solar panel manufacturers. German subsidies of the solar industry are in rapid decline. It is probably too early to tell whether the breakdown of the German solar revolution (or experiment) is short-term or not. Perhaps when economic times are better, there will be a return to the subsidies that helped the solar industry start up. After all, there is government support for the natural gas pipelines too.
The author, Bob Johnstone, is a journalist based in Australia who notes how few solar installations there are in one of the world's sunniest nations. The factors which separate the solar explosion in Germany from the quiet acceptance of coal-burning in Australia are political will and a sensible economic setup. The German government provided a feed-in tariff; in essence, they guarantee that your electrical retailer will purchase your power at a rate sufficient to pay back your initial costs and provide you with a good return. It is a good deal on the government and electrical-utility end if, and only if, the real price of electricity increases in the future, perhaps due to a scarcity of fossil fuels. In the short term, however, it is expensive.
With fracking, a renewed push for cheap natural gas exploitation, and difficult economic times, the economic proposition no longer looks nearly as good as it once did for these feed-in tariffs. How quickly things change…
The book lays out the case for a German model of feed-in tariffs as a sensible route towards a post-fossil-fuel world: making an investment now for an installed solar base when it is needed. Although the plan has run afoul of bad economic times and the difficulties of managing financial incentives aimed at local businesses in a worldwide economy, we will undoubtedly be looking seriously again at these plans in a few years. In the meantime, it is a worthwhile read for a practical look at government-industry cooperation leading to roofs of power-generating panels.
Michael DuVernois
University of Wisconsin
These contributions have not been peer-refereed. They represent solely the view(s) of the author(s) and not necessarily the view of APS.
|
# Circle Calculator
This calculator finds the radius, diameter, circumference, and area of a circle from any one known value, and it can also give the equation of a circle from its center and radius. Enter the known value as a positive real number, choose the units of measure and the number of decimal places to display, and press "Calculate"; the remaining quantities are computed from it. For easier readability, very large and very small results may be shown in scientific notation, with the same precision.
The formulas used are C = 2πr = πd for the circumference, A = πr² for the area, and d = 2√(A/π) for the diameter recovered from the area, with π ≈ 3.14159. Since π is an irrational number, displayed results are rounded approximations.
Example: to find the circumference of a circle with an area of 78.5 m², first solve A = πr² for the radius, giving r = 5 m, and then apply C = 2πr ≈ 31.4 m.
# SDL_Rect help
## Recommended Posts
I am learning SDL right now and I was just wondering what the SDL_Rect function does.
I was reading the Lazy Foo tutorials (http://lazyfoo.net/SDL_tutorials/lesson02/index.php).
In the tutorial it's used in an apply_surface function, and I don't understand what it does or why it's needed.
Can someone please explain it to me? The tutorial doesn't really say a lot about it.
SDL_Rect is a structure like below:
typedef struct { Sint16 x, y; Uint16 w, h; } SDL_Rect;
It defines a rectangle where:
a) x and y are the coordinates of the top-left vertex of the rectangle
b) w and h are the width and height of the rectangle
In the case of the tutorial, he uses the rectangle to tell SDL in which part of the screen he wants the image to be drawn; this is what offset.x and offset.y are for.
Later in the tutorials you will see that the SDL_Rect also serves as a clipping rectangle for your image: it tells what part of the image you want to draw (good for clipping tiles or spritesheets).
> SDL_Rect is a structure like below: typedef struct { Sint16 x, y; Uint16 w, h; } SDL_Rect; […]
So he puts the image in the rect?
> So he puts the image in the rect?
That doesn't make any sense without context. With which function are you using the SDL_Rect structure?
For example, in the SDL_BlitSurface function:
int SDL_BlitSurface(SDL_Surface *src, SDL_Rect *srcrect, SDL_Surface *dst, SDL_Rect *dstrect);
There is a 'source rect' and a 'destination rect'.
The source one indicates what portion (rectangle) of the source surface will be used; passing NULL makes SDL use the entire source surface in the blit. This rectangle is defined by the x, y, w, h members of the SDL_Rect (x/y as the top-left corner, w/h as the width and height).
The destination one indicates only where the source surface will be blitted onto the destination surface. The x and y members of the SDL_Rect indicate the location; that is, where the coordinate (0, 0) of the source portion will land on the destination surface.
> SDL_Rect is a structure like below: typedef struct { Sint16 x, y; Uint16 w, h; } SDL_Rect; […]
> So he puts the image in the rect?
It puts the lotion on the skin,
or else it gets the hose again.
(Sorry, couldn't resist. I just read your comment in Buffalo Bill's voice. I'm done now)
Hmm... still not getting this :/
I've read the same tutorial at least 5 times and I still don't understand the SDL_Rect stuff.
Please help me out, as I'm hesitant to move on to the next tutorial; I want to understand this first.
A picture explaining it would help.
> Hmm... still not getting this :/ […] A picture explaining it would help.
I don't understand what you don't understand xD. Can you be more specific?
The lazyfoo definition is pretty straightforward:
[quote='lazyfoo']
First we take the offsets and put them inside an SDL_Rect. We do this because SDL_BlitSurface() only accepts the offsets inside of an SDL_Rect.
[/quote]
This says that you need to tell SDL_BlitSurface (the function that draws onto a surface) at which position (x, y) you want to draw the image.
In this case, the SDL programmers decided to make the function receive an SDL_Rect structure that already holds an (x, y) value, instead of separate int posX, int posY arguments; see the sketch below.
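To make that concrete, here is a minimal sketch in the style of the Lazy Foo lessons (SDL 1.2 API; the file name, window size, and offsets are made up for illustration):

```cpp
#include <SDL/SDL.h>

int main(int argc, char* argv[]) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface* screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
    SDL_Surface* image  = SDL_LoadBMP("hello.bmp");   // any BMP file you have

    SDL_Rect offset;   // destination rect: the blit only reads x and y
    offset.x = 100;    // draw the image 100 px from the left edge...
    offset.y = 50;     // ...and 50 px from the top of the screen

    SDL_BlitSurface(image, NULL, screen, &offset);    // NULL = use the whole image
    SDL_Flip(screen);                                 // present the frame
    SDL_Delay(2000);                                  // keep the window up briefly

    SDL_FreeSurface(image);
    SDL_Quit();
    return 0;
}
```

Passing an SDL_Rect as the second argument instead of NULL would blit only that sub-rectangle of the image, which is the clipping use mentioned above.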
# 40 CFR § 600.113-12 - Fuel economy, CO2 emissions, and carbon-related exhaust emission calculations for FTP, HFET, US06, SC03 and cold temperature FTP tests.
§ 600.113-12 Fuel economy, CO2 emissions, and carbon-related exhaust emission calculations for FTP, HFET, US06, SC03 and cold temperature FTP tests.
The Administrator will use the calculation procedure set forth in this paragraph for all official EPA testing of vehicles fueled with gasoline, diesel, alcohol-based or natural gas fuel. The calculations of the weighted fuel economy and carbon-related exhaust emission values require input of the weighted grams/mile values for total hydrocarbons (HC), carbon monoxide (CO), and carbon dioxide (CO2); and, additionally for methanol-fueled automobiles, methanol (CH3OH) and formaldehyde (HCHO); and, additionally for ethanol-fueled automobiles, methanol (CH3OH), ethanol (C2H5OH), acetaldehyde (C2H4O), and formaldehyde (HCHO); and additionally for natural gas-fueled vehicles, non-methane hydrocarbons (NMHC) and methane (CH4). For manufacturers selecting the fleet averaging option for N2O and CH4 as allowed under § 86.1818 of this chapter the calculations of the carbon-related exhaust emissions require the input of grams/mile values for nitrous oxide (N2O) and methane (CH4). Emissions shall be determined for the FTP, HFET, US06, SC03 and cold temperature FTP tests. Additionally, the specific gravity, carbon weight fraction and net heating value of the test fuel must be determined. The FTP, HFET, US06, SC03 and cold temperature FTP fuel economy and carbon-related exhaust emission values shall be calculated as specified in this section. An example fuel economy calculation appears in Appendix II of this part.
(a) Calculate the FTP fuel economy as follows:
(1) Calculate the weighted grams/mile values for the FTP test for CO2, HC, and CO, and where applicable, CH3OH, C2H5OH, C2H4O, HCHO, NMHC, N2O and CH4 as specified in § 86.144-94(b) of this chapter. Measure and record the test fuel's properties as specified in paragraph (f) of this section.
(2) Calculate separately the grams/mile values for the cold transient phase, stabilized phase and hot transient phase of the FTP test. For vehicles with more than one source of propulsion energy, one of which is a rechargeable energy storage system, or vehicles with special features that the Administrator determines may have a rechargeable energy source, whose charge can vary during the test, calculate separately the grams/mile values for the cold transient phase, stabilized phase, hot transient phase and hot stabilized phase of the FTP test.
(b) Calculate the HFET fuel economy as follows:
(1) Calculate the mass values for the highway fuel economy test for HC, CO and CO2, and where applicable, CH3OH, C2H5OH, C2H4O, HCHO, NMHC, N2O and CH4 as specified in § 86.144-94(b) of this chapter. Measure and record the test fuel's properties as specified in paragraph (f) of this section.
(2) Calculate the grams/mile values for the highway fuel economy test for HC, CO and CO2, and where applicable CH3OH, C2H5OH, C2H4O, HCHO, NMHC, N2O and CH4 by dividing the mass values obtained in paragraph (b)(1) of this section, by the actual driving distance, measured in miles, as specified in § 86.135 of this chapter.
(c) Calculate the cold temperature FTP fuel economy as follows:
(1) Calculate the weighted grams/mile values for the cold temperature FTP test for HC, CO and CO2, and where applicable, CH3OH, C2H5OH, C2H4O, HCHO, NMHC, N2O and CH4 as specified in § 86.144-94(b) of this chapter. For 2008 through 2010 diesel-fueled vehicles, HC measurement is optional.
(2) Calculate separately the grams/mile values for the cold transient phase, stabilized phase and hot transient phase of the cold temperature FTP test in § 86.244 of this chapter.
(3) Measure and record the test fuel's properties as specified in paragraph (f) of this section.
(d) Calculate the US06 fuel economy as follows:
(1) Calculate the total grams/mile values for the US06 test for HC, CO and CO2, and where applicable, CH3OH, C2H5OH, C2H4O, HCHO, NMHC, N2O and CH4 as specified in § 86.144-94(b) of this chapter.
(2) Calculate separately the grams/mile values for HC, CO and CO2, and where applicable, CH3OH, C2H5OH, C2H4O, HCHO, NMHC, N2O and CH4, for both the US06 City phase and the US06 Highway phase of the US06 test as specified in § 86.164 of this chapter. In lieu of directly measuring the emissions of the separate city and highway phases of the US06 test according to the provisions of § 86.159 of this chapter, the manufacturer may, with the advance approval of the Administrator and using good engineering judgment, optionally analytically determine the grams/mile values for the city and highway phases of the US06 test. To analytically determine US06 City and US06 Highway phase emission results, the manufacturer shall multiply the US06 total grams/mile values determined in paragraph (d)(1) of this section by the estimated proportion of fuel use for the city and highway phases relative to the total US06 fuel use. The manufacturer may estimate the proportion of fuel use for the US06 City and US06 Highway phases by using modal CO2, HC, and CO emissions data, or by using appropriate OBD data (e.g., fuel flow rate in grams of fuel per second), or another method approved by the Administrator.
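Where a manufacturer takes the analytic option in paragraph (d)(2), the arithmetic is a straightforward proportional split. A hedged sketch (the function and variable names are ours, not the regulation's):

```cpp
// Split a measured US06 total (grams/mile) between the city and highway
// phases in proportion to each phase's share of total US06 fuel use.
struct PhaseSplit {
    double city;
    double highway;
};

PhaseSplit apportionUS06(double total_g_per_mile,
                         double city_fuel_g, double highway_fuel_g) {
    double total_fuel = city_fuel_g + highway_fuel_g;
    return { total_g_per_mile * (city_fuel_g / total_fuel),
             total_g_per_mile * (highway_fuel_g / total_fuel) };
}
```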
(3) Measure and record the test fuel's properties as specified in paragraph (f) of this section.
(e) Calculate the SC03 fuel economy as follows:
(1) Calculate the grams/mile values for the SC03 test for HC, CO and CO2, and where applicable, CH3OH, C2H5OH, C2H4O, HCHO, NMHC, N2O and CH4 as specified in § 86.144-94(b) of this chapter.
(2) Measure and record the test fuel's properties as specified in paragraph (f) of this section.
(f) Analyze and determine fuel properties as follows:
(1) Gasoline test fuel properties shall be determined by analysis of a fuel sample taken from the fuel supply. A sample shall be taken after each addition of fresh fuel to the fuel supply. Additionally, the fuel shall be resampled once a month to account for any fuel property changes during storage. Less frequent resampling may be permitted if EPA concludes, on the basis of manufacturer-supplied data, that the properties of test fuel in the manufacturer's storage facility will remain stable for a period longer than one month. The fuel samples shall be analyzed to determine the following fuel properties:
(i) Specific gravity measured using ASTM D 1298 (incorporated by reference in § 600.011).
(ii) Carbon weight fraction measured using ASTM D 3343 (incorporated by reference in § 600.011).
(iii) Net heating value (Btu/lb) determined using ASTM D 3338/D 3338M (incorporated by reference in § 600.011).
(2) Methanol test fuel shall be analyzed to determine the following fuel properties:
(i) Specific gravity using ASTM D 1298 (incorporated by reference in § 600.011). You may determine specific gravity for the blend, or you may determine specific gravity for the gasoline and methanol fuel components separately before combining the results using the following equation:
SG = SGg × volume fraction gasoline + SGm × volume fraction methanol.
(ii)
(A) Carbon weight fraction using the following equation:
CWF = CWFg × MFg + 0.375 × MFm
Where:
CWFg = Carbon weight fraction of gasoline portion of blend measured using ASTM D 3343 (incorporated by reference in § 600.011).
MFg = Mass fraction gasoline = (G × SGg)/(G × SGg + M × SGm)
MFm = Mass fraction methanol = (M × SGm)/(G × SGg + M × SGm)
Where:
G = Volume fraction gasoline.
M = Volume fraction methanol.
SGg = Specific gravity of gasoline as measured using ASTM D 1298 (incorporated by reference in § 600.011).
SGm = Specific gravity of methanol as measured using ASTM D 1298 (incorporated by reference in § 600.011).
(B) Upon the approval of the Administrator, other procedures to measure the carbon weight fraction of the fuel blend may be used if the manufacturer can show that the procedures are superior to or equally as accurate as those specified in this paragraph (f)(2)(ii).
(3) Natural gas test fuel shall be analyzed to determine the following fuel properties:
(i) Fuel composition measured using ASTM D 1945 (incorporated by reference in § 600.011).
(ii) Specific gravity measured as based on fuel composition per ASTM D 1945 (incorporated by reference in § 600.011).
(iii) Carbon weight fraction, based on the carbon contained only in the hydrocarbon constituents of the fuel. This equals the weight of carbon in the hydrocarbon constituents divided by the total weight of fuel.
(iv) Carbon weight fraction of the fuel, which equals the total weight of carbon in the fuel (i.e., includes carbon contained in hydrocarbons and in CO2) divided by the total weight of fuel.
(4) Ethanol test fuel shall be analyzed to determine the following fuel properties:
(i) Specific gravity using ASTM D 1298 (incorporated by reference in § 600.011). You may determine specific gravity for the blend, or you may determine specific gravity for the gasoline and ethanol fuel components separately before combining the results using the following equation:
SG = SGg × volume fraction gasoline + SGe × volume fraction ethanol.
(ii)
(A) Carbon weight fraction using the following equation:
CWF = CWFg × MFg + 0.521 × MFe
Where:
CWFg = Carbon weight fraction of gasoline portion of blend measured using ASTM D 3343 (incorporated by reference in § 600.011).
MFg = Mass fraction gasoline = (G × SGg)/(G × SGg + E × SGe)
MFe = Mass fraction ethanol = (E × SGe)/(G × SGg + E × SGe)
Where:
G = Volume fraction gasoline.
E = Volume fraction ethanol.
SGg = Specific gravity of gasoline as measured using ASTM D 1298 (incorporated by reference in § 600.011).
SGe = Specific gravity of ethanol as measured using ASTM D 1298 (incorporated by reference in § 600.011).
(B) Upon the approval of the Administrator, other procedures to measure the carbon weight fraction of the fuel blend may be used if the manufacturer can show that the procedures are superior to or equally as accurate as those specified in this paragraph (f)(4)(ii).
(g) Calculate separate FTP, highway, US06, SC03 and Cold temperature FTP fuel economy and carbon-related exhaust emissions from the grams/mile values for total HC, CO, CO2 and, where applicable, CH3OH, C2H5OH, C2H4O, HCHO, NMHC, N2O, and CH4, and the test fuel's specific gravity, carbon weight fraction, net heating value, and additionally for natural gas, the test fuel's composition.
(1) Emission values for fuel economy calculations. The emission values (obtained per paragraph (a) through (e) of this section, as applicable) used in the calculations of fuel economy in this section shall be rounded in accordance with § 86.1837 of this chapter. The CO2 values (obtained per this section, as applicable) used in each calculation of fuel economy in this section shall be rounded to the nearest gram/mile.
(2) Emission values for carbon-related exhaust emission calculations.
(i) If the emission values (obtained per paragraph (a) through (e) of this section, as applicable) were obtained from testing with aged exhaust emission control components as allowed under § 86.1823 of this chapter, then these test values shall be used in the calculations of carbon-related exhaust emissions in this section.
(ii) If the emission values (obtained per paragraph (a) through (e) of this section, as applicable) were not obtained from testing with aged exhaust emission control components as allowed under § 86.1823 of this chapter, then these test values shall be adjusted by the appropriate deterioration factor determined according to § 86.1823 of this chapter before being used in the calculations of carbon-related exhaust emissions in this section. For vehicles within a test group, the appropriate NMOG deterioration factor may be used in lieu of the deterioration factors for CH3OH, C2H5OH, and/or C2H4O emissions.
(iii) The emission values determined in paragraph (g)(2)(i) or (ii) of this section shall be rounded in accordance with § 86.1837 of this chapter. The CO2 values (obtained per this section, as applicable) used in each calculation of carbon-related exhaust emissions in this section shall be rounded to the nearest gram/mile.
(iv) For manufacturers complying with the fleet averaging option for N2O and CH4 as allowed under § 86.1818 of this chapter, N2O and CH4 emission values for use in the calculation of carbon-related exhaust emissions in this section shall be the values determined according to paragraph (g)(2)(iv)(A), (B), or (C) of this section.
(A) The FTP and HFET test values as determined for the emission data vehicle according to the provisions of § 86.1835 of this chapter. These values shall apply to all vehicles tested under this section that are included in the test group represented by the emission data vehicle and shall be adjusted by the appropriate deterioration factor determined according to § 86.1823 of this chapter before being used in the calculations of carbon-related exhaust emissions in this section, except that in-use test data shall not be adjusted by a deterioration factor.
(B) The FTP and HFET test values as determined according to testing conducted under the provisions of this subpart. These values shall be adjusted by the appropriate deterioration factor determined according to § 86.1823 of this chapter before being used in the calculations of carbon-related exhaust emissions in this section, except that in-use test data shall not be adjusted by a deterioration factor.
(C) For the 2012 through 2016 model years only, manufacturers may use an assigned value of 0.010 g/mi for N2O FTP and HFET test values. This value is not required to be adjusted by a deterioration factor.
(3) The specific gravity and the carbon weight fraction (obtained per paragraph (f) of this section) shall be recorded using three places to the right of the decimal point. The net heating value (obtained per paragraph (f) of this section) shall be recorded to the nearest whole Btu/lb.
(4) For the purpose of determining the applicable in-use CO2 exhaust emission standard under § 86.1818 of this chapter, the combined city/highway carbon-related exhaust emission value for a vehicle subconfiguration is calculated by arithmetically averaging the FTP-based city and HFET-based highway carbon-related exhaust emission values, as determined in paragraphs (h) through (n) of this section for the subconfiguration, weighted 0.55 and 0.45 respectively, and rounded to the nearest tenth of a gram per mile.
(h)
(1) For gasoline-fueled automobiles tested on a test fuel specified in § 86.113 of this chapter, the fuel economy in miles per gallon is to be calculated using the following equation and rounded to the nearest 0.1 miles per gallon:
mpg = (5174 × 10⁴ × CWF × SG)/[((CWF × HC) + (0.429 × CO) + (0.273 × CO2)) × ((0.6 × SG × NHV) + 5471)]
Where:
HC = Grams/mile HC as obtained in paragraph (g)(1) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(1) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(1) of this section.
CWF = Carbon weight fraction of test fuel as obtained in paragraph (f)(1) of this section and rounded according to paragraph (g)(3) of this section.
NHV = Net heating value by mass of test fuel as obtained in paragraph (f)(1) of this section and rounded according to paragraph (g)(3) of this section.
SG = Specific gravity of test fuel as obtained in paragraph (f)(1) of this section and rounded according to paragraph (g)(3) of this section.
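The (h)(1) equation is purely mechanical once the rounded inputs are in hand. A minimal sketch with illustrative numbers only (all names are ours; this is not certified test code):

```cpp
#include <cmath>
#include <cstdio>

// Fuel economy per the (h)(1) gasoline equation, from rounded
// grams/mile emissions and rounded fuel properties.
double gasolineMpg(double hc, double co, double co2,
                   double cwf, double nhv, double sg) {
    double carbon = (cwf * hc) + (0.429 * co) + (0.273 * co2); // g carbon/mile
    double fuel   = (0.6 * sg * nhv) + 5471.0;                 // fuel energy term
    double mpg    = (5174.0e4 * cwf * sg) / (carbon * fuel);
    return std::round(mpg * 10.0) / 10.0;  // nearest 0.1 mpg per (h)(1)
}

int main() {
    // Illustrative inputs: HC 0.10, CO 1.0, CO2 300 g/mi;
    // CWF 0.866, NHV 18400 Btu/lb, SG 0.742.
    std::printf("%.1f mpg\n",
                gasolineMpg(0.10, 1.0, 300.0, 0.866, 18400.0, 0.742));
}
```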
(2)
(i) For 2012 and later model year gasoline-fueled automobiles tested on a test fuel specified in § 86.113 of this chapter, the carbon-related exhaust emissions in grams per mile is to be calculated using the following equation and rounded to the nearest 1 gram per mile:
CREE = (CWF/0.273 × HC) + (1.571 × CO) + CO2
Where:
CREE means the carbon-related exhaust emissions as defined in § 600.002.
HC = Grams/mile HC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
CWF = Carbon weight fraction of test fuel as obtained in paragraph (f)(1) of this section and rounded according to paragraph (g)(3) of this section.
(ii) For manufacturers complying with the fleet averaging option for N2O and CH4 as allowed under § 86.1818 of this chapter, the carbon-related exhaust emissions in grams per mile for 2012 and later model year gasoline-fueled automobiles tested on a test fuel specified in § 86.113 of this chapter is to be calculated using the following equation and rounded to the nearest 1 gram per mile:
CREE = [(CWF/0.273) × NMHC] + (1.571 × CO) + CO2 + (298 × N2O) + (25 × CH4)
Where:
CREE means the carbon-related exhaust emissions as defined in § 600.002.
NMHC = Grams/mile NMHC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
N2O = Grams/mile N2O as obtained in paragraph (g)(2) of this section.
CH4 = Grams/mile CH4 as obtained in paragraph (g)(2) of this section.
CWF = Carbon weight fraction of test fuel as obtained in paragraph (f)(1) of this section and rounded according to paragraph (g)(3) of this section.
(i)
(1) For diesel-fueled automobiles, calculate the fuel economy in miles per gallon of diesel fuel by dividing 2778 by the sum of three terms and rounding the quotient to the nearest 0.1 mile per gallon:
(i)
(A) 0.866 multiplied by HC (in grams/miles as obtained in paragraph (g)(1) of this section), or
(B) Zero, in the case of cold FTP diesel tests for which HC was not collected, as permitted in § 600.113-08(c);
(ii) 0.429 multiplied by CO (in grams/mile as obtained in paragraph (g)(1) of this section); and
(iii) 0.273 multiplied by CO2 (in grams/mile as obtained in paragraph (g)(1) of this section).
(2)
(i) For 2012 and later model year diesel-fueled automobiles, the carbon-related exhaust emissions in grams per mile is to be calculated using the following equation and rounded to the nearest 1 gram per mile:
CREE = (3.172 × HC) + (1.571 × CO) + CO2
Where:
CREE means the carbon-related exhaust emissions as defined in § 600.002.
HC = Grams/mile HC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
(ii) For manufacturers complying with the fleet averaging option for N2O and CH4 as allowed under § 86.1818 of this chapter, the carbon-related exhaust emissions in grams per mile for 2012 and later model year diesel-fueled automobiles is to be calculated using the following equation and rounded to the nearest 1 gram per mile:
CREE = (3.172 × NMHC) + (1.571 × CO) + CO2 + (298 × N2O) + (25 × CH4)
Where:
CREE means the carbon-related exhaust emissions as defined in § 600.002.
NMHC = Grams/mile NMHC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
N2O = Grams/mile N2O as obtained in paragraph (g)(2) of this section.
CH4 = Grams/mile CH4 as obtained in paragraph (g)(2) of this section.
(j)
(1) For methanol-fueled automobiles and automobiles designed to operate on mixtures of gasoline and methanol, the fuel economy in miles per gallon of methanol is to be calculated using the following equation:
mpg = (CWF × SG × 3781.8)/((CWFexHC × HC) + (0.429 × CO) + (0.273 × CO2) + (0.375 × CH3OH) + (0.400 × HCHO))
Where:
CWF = Carbon weight fraction of the fuel as determined in paragraph (f)(2)(ii) of this section and rounded according to paragraph (g)(3) of this section.
SG = Specific gravity of the fuel as determined in paragraph (f)(2)(i) of this section and rounded according to paragraph (g)(3) of this section.
CWFexHC = Carbon weight fraction of exhaust hydrocarbons = CWF as determined in paragraph (f)(2)(ii) of this section and rounded according to paragraph (g)(3) of this section (for M100 fuel, CWFexHC = 0.866).
HC = Grams/mile HC as obtained in paragraph (g)(1) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(1) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(1) of this section.
CH3OH = Grams/mile CH3OH (methanol) as obtained in paragraph (g)(1) of this section.
HCHO = Grams/mile HCHO (formaldehyde) as obtained in paragraph (g)(1) of this section.
(2)
(i) For 2012 and later model year methanol-fueled automobiles and automobiles designed to operate on mixtures of gasoline and methanol, the carbon-related exhaust emissions in grams per mile while operating on methanol is to be calculated using the following equation and rounded to the nearest 1 gram per mile:
CREE = (CWFexHC/0.273 × HC) + (1.571 × CO) + (1.374 × CH3OH) + (1.466 × HCHO) + CO2
Where:
CREE means the carbon-related exhaust emission value as defined in § 600.002.
CWFexHC = Carbon weight fraction of exhaust hydrocarbons = CWF as determined in paragraph (f)(2)(ii) of this section and rounded according to paragraph (g)(3) of this section (for M100 fuel, CWFexHC = 0.866).
HC = Grams/mile HC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
CH3OH = Grams/mile CH3OH (methanol) as obtained in paragraph (g)(2) of this section.
HCHO = Grams/mile HCHO (formaldehyde) as obtained in paragraph (g)(2) of this section.
(ii) For manufacturers complying with the fleet averaging option for N2O and CH4 as allowed under § 86.1818 of this chapter, the carbon-related exhaust emissions in grams per mile for 2012 and later model year methanol-fueled automobiles and automobiles designed to operate on mixtures of gasoline and methanol while operating on methanol is to be calculated using the following equation and rounded to the nearest 1 gram per mile:
CREE = [(CWFexHC/0.273) × NMHC] + (1.571 × CO) + (1.374 × CH3OH) + (1.466 × HCHO) + CO2 + (298 × N2O) + (25 × CH4)
Where:
CREE means the carbon-related exhaust emission value as defined in § 600.002.
CWFexHC = Carbon weight fraction of exhaust hydrocarbons = CWF as determined in paragraph (f)(2)(ii) of this section and rounded according to paragraph (g)(3) of this section (for M100 fuel, CWFexHC = 0.866).
NMHC = Grams/mile NMHC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
CH3OH = Grams/mile CH3OH (methanol) as obtained in paragraph (g)(2) of this section.
HCHO = Grams/mile HCHO (formaldehyde) as obtained in paragraph (g)(2) of this section.
N2O = Grams/mile N2O as obtained in paragraph (g)(2) of this section.
CH4 = Grams/mile CH4 as obtained in paragraph (g)(2) of this section.
(k)
(1) For automobiles fueled with natural gas and automobiles designed to operate on gasoline and natural gas, the fuel economy in miles per gallon of natural gas is to be calculated using the following equation:
$\mathrm{mpg}_{e}=\frac{CWF_{HC/NG}\times D_{NG}\times 121.5}{(0.749\times CH_{4})+(CWF_{NMHC}\times NMHC)+(0.429\times CO)+(0.273\times (CO_{2}-CO_{2NG}))}$
Where:
mpge = miles per gasoline gallon equivalent of natural gas.
CWFHC/NG = carbon weight fraction based on the hydrocarbon constituents in the natural gas fuel as obtained in paragraph (f)(3) of this section and rounded according to paragraph (g)(3) of this section.
DNG = density of the natural gas fuel [grams/ft³ at 68 °F (20 °C) and 760 mm Hg (101.3 kPa) pressure] as obtained in paragraph (f)(3) of this section and rounded according to paragraph (g)(3) of this section.
CH4, NMHC, CO, and CO2 = weighted mass exhaust emissions [grams/mile] for methane, non-methane HC, carbon monoxide, and carbon dioxide as obtained in paragraph (g)(1) of this section.
CWFNMHC = carbon weight fraction of the non-methane HC constituents in the fuel as determined from the speciated fuel composition per paragraph (f)(3) of this section and rounded according to paragraph (g)(3) of this section.
CO2NG = grams of carbon dioxide in the natural gas fuel consumed per mile of travel.
CO2NG = FCNG × DNG × WFCO2
Where:
$\mathrm{FC}_{NG}=\frac{(0.749\times CH_{4})+(CWF_{NMHC}\times NMHC)+(0.429\times CO)+(0.273\times CO_{2})}{CWF_{NG}\times D_{NG}}$ = cubic feet of natural gas fuel consumed per mile
Where:
CWFNG = the carbon weight fraction of the natural gas fuel as calculated in paragraph (f)(3) of this section.
WFCO2 = weight fraction carbon dioxide of the natural gas fuel calculated using the mole fractions and molecular weights of the natural gas fuel constituents per ASTM D 1945 (incorporated by reference in § 600.011).
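Paragraph (k)(1) chains three relations: FCNG (cubic feet of fuel per mile), then CO2NG (fuel-borne CO2 per mile), then the mpge equation itself. A hedged sketch of that chain (names ours, not the regulation's):

```cpp
// Inputs for the natural-gas fuel economy chain in paragraph (k)(1).
struct NgInputs {
    double ch4, nmhc, co, co2;            // weighted emissions, grams/mile
    double cwf_hc_ng, cwf_nmhc, cwf_ng;   // carbon weight fractions
    double d_ng;                          // fuel density, grams/ft^3
    double wf_co2;                        // CO2 weight fraction of the fuel
};

double ngMpge(const NgInputs& in) {
    double carbon = (0.749 * in.ch4) + (in.cwf_nmhc * in.nmhc)
                  + (0.429 * in.co) + (0.273 * in.co2);
    double fc_ng  = carbon / (in.cwf_ng * in.d_ng);   // ft^3 of fuel per mile
    double co2_ng = fc_ng * in.d_ng * in.wf_co2;      // g CO2 carried in the fuel
    double denom  = (0.749 * in.ch4) + (in.cwf_nmhc * in.nmhc)
                  + (0.429 * in.co) + (0.273 * (in.co2 - co2_ng));
    return (in.cwf_hc_ng * in.d_ng * 121.5) / denom;  // miles per gallon equivalent
}
```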
(2)
(i) For automobiles fueled with natural gas and automobiles designed to operate on gasoline and natural gas, the carbon-related exhaust emissions in grams per mile while operating on natural gas is to be calculated for 2012 and later model year vehicles using the following equation and rounded to the nearest 1 gram per mile:
CREE = (2.743 × CH4) + [(CWFNMHC/0.273) × NMHC] + (1.571 × CO) + CO2
Where:
CREE means the carbon-related exhaust emission value as defined in § 600.002.
CH4 = Grams/mile CH4 as obtained in paragraph (g)(2) of this section.
NMHC = Grams/mile NMHC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
CWFNMHC = carbon weight fraction of the non-methane HC constituents in the fuel as determined from the speciated fuel composition per paragraph (f)(3) of this section and rounded according to paragraph (g)(3) of this section.
(ii) For manufacturers complying with the fleet averaging option for N2O and CH4 as allowed under § 86.1818 of this chapter, the carbon-related exhaust emissions in grams per mile for 2012 and later model year automobiles fueled with natural gas and automobiles designed to operate on gasoline and natural gas while operating on natural gas is to be calculated using the following equation and rounded to the nearest 1 gram per mile:
CREE = (25 × CH4) + [(CWFNMHC/0.273) × NMHC] + (1.571 × CO) + CO2 + (298 × N2O)
Where:
CREE means the carbon-related exhaust emission value as defined in § 600.002.
CH4 = Grams/mile CH4 as obtained in paragraph (g)(2) of this section.
NMHC = Grams/mile NMHC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
CWFNMHC = carbon weight fraction of the non-methane HC constituents in the fuel as determined from the speciated fuel composition per paragraph (f)(3) of this section and rounded according to paragraph (g)(3) of this section.
N2O = Grams/mile N2O as obtained in paragraph (g)(2) of this section.
(l)
(1) For ethanol-fueled automobiles and automobiles designed to operate on mixtures of gasoline and ethanol, the fuel economy in miles per gallon of ethanol is to be calculated using the following equation:
mpg = (CWF × SG × 3781.8)/((CWFexHC × HC) + (0.429 × CO) + (0.273 × CO2) + (0.375 × CH3OH) + (0.400 × HCHO) + (0.521 × C2H5OH) + (0.545 × C2H4O))
Where:
CWF = Carbon weight fraction of the fuel as determined in paragraph (f)(4) of this section and rounded according to paragraph (g)(3) of this section.
SG = Specific gravity of the fuel as determined in paragraph (f)(4) of this section and rounded according to paragraph (g)(3) of this section.
CWFexHC = Carbon weight fraction of exhaust hydrocarbons = CWF as determined in paragraph (f)(4) of this section and rounded according to paragraph (g)(3) of this section.
HC = Grams/mile HC as obtained in paragraph (g)(1) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(1) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(1) of this section.
CH3OH = Grams/mile CH3OH (methanol) as obtained in paragraph (g)(1) of this section.
HCHO = Grams/mile HCHO (formaldehyde) as obtained in paragraph (g)(1) of this section.
C2H5OH = Grams/mile C2H5OH (ethanol) as obtained in paragraph (g)(1) of this section.
C2H4O = Grams/mile C2H4O (acetaldehyde) as obtained in paragraph (g)(1) of this section.
(2)
(i) For 2012 and later model year ethanol-fueled automobiles and automobiles designed to operate on mixtures of gasoline and ethanol, the carbon-related exhaust emissions in grams per mile while operating on ethanol is to be calculated using the following equation and rounded to the nearest 1 gram per mile:
CREE = (CWFexHC/0.273 × HC) + (1.571 × CO) + (1.374 × CH3OH) + (1.466 × HCHO) + (1.911 × C2H5OH) + (1.998 × C2H4O) + CO2
Where:
CREE means the carbon-related exhaust emission value as defined in § 600.002.
CWFexHC = Carbon weight fraction of exhaust hydrocarbons = CWF as determined in paragraph (f)(4) of this section and rounded according to paragraph (g)(3) of this section.
HC = Grams/mile HC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
CH3OH = Grams/mile CH3OH (methanol) as obtained in paragraph (g)(2) of this section.
HCHO = Grams/mile HCHO (formaldehyde) as obtained in paragraph (g)(2) of this section.
C2H5OH = Grams/mile C2H5OH (ethanol) as obtained in paragraph (g)(2) of this section.
C2H4O = Grams/mile C2H4O (acetaldehyde) as obtained in paragraph (g)(2) of this section.
(ii) For manufacturers complying with the fleet averaging option for N2O and CH4 as allowed under § 86.1818 of this chapter, the carbon-related exhaust emissions in grams per mile for 2012 and later model year ethanol-fueled automobiles and automobiles designed to operate on mixtures of gasoline and ethanol while operating on ethanol is to be calculated using the following equation and rounded to the nearest 1 gram per mile:
CREE = [(CWFexHC/0.273) × NMHC] + (1.571 × CO) + (1.374 × CH3OH) + (1.466 × HCHO) + (1.911 × C2H5OH) + (1.998 × C2H4O) + CO2 + (298 × N2O) + (25 × CH4)
Where:
CREE means the carbon-related exhaust emission value as defined in § 600.002.
CWFexHC = Carbon weight fraction of exhaust hydrocarbons = CWF as determined in paragraph (f)(4) of this section and rounded according to paragraph (g)(3) of this section.
NMHC = Grams/mile NMHC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
CH3OH = Grams/mile CH3OH (methanol) as obtained in paragraph (g)(2) of this section.
HCHO = Grams/mile HCHO (formaldehyde) as obtained in paragraph (g)(2) of this section.
C2H5OH = Grams/mile C2H5OH (ethanol) as obtained in paragraph (g)(2) of this section.
C2H4O = Grams/mile C2H4O (acetaldehyde) as obtained in paragraph (g)(2) of this section.
N2O = Grams/mile N2O as obtained in paragraph (g)(2) of this section.
CH4 = Grams/mile CH4 as obtained in paragraph (g)(2) of this section.
(m)
(1) For automobiles fueled with liquefied petroleum gas and automobiles designed to operate on gasoline and liquefied petroleum gas, the fuel economy in miles per gallon of liquefied petroleum gas is to be calculated using the following equation:
$\mathrm{mpg}_{e}=\frac{CWF_{\mathrm{fuel}}\times SG_{\mathrm{fuel}}\times 3781.8}{(CWF_{HC}\times HC)+(0.429\times CO)+(0.273\times CO_{2})}$
Where:
mpge = miles per gasoline gallon equivalent of liquefied petroleum gas.
CWFfuel = carbon weight fraction based on the hydrocarbon constituents in the liquefied petroleum gas fuel as obtained in paragraph (f)(5) of this section and rounded according to paragraph (g)(3) of this section.
SG = Specific gravity of the fuel as determined in paragraph (f)(5) of this section and rounded according to paragraph (g)(3) of this section.
3781.8 = Grams of H2O per gallon conversion factor.
CWFHC = Carbon weight fraction of exhaust hydrocarbon = CWFfuel as determined in paragraph (f)(5) of this section and rounded according to paragraph (g)(3) of this section.
HC = Grams/mile HC as obtained in paragraph (g)(1) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(1) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(1) of this section.
(2)
(i) For automobiles fueled with liquefied petroleum gas and automobiles designed to operate on gasoline and liquefied petroleum gas, the carbon-related exhaust emissions in grams per mile while operating on liquefied petroleum gas is to be calculated for 2012 and later model year vehicles using the following equation and rounded to the nearest 1 gram per mile:
CREE = (CWFHC/0.273 × HC) + (1.571 × CO) + CO2
Where:
CREE means the carbon-related exhaust emission value as defined in § 600.002.
CWFHC = Carbon weight fraction of exhaust hydrocarbon = CWFfuel as determined in paragraph (f)(5) of this section and rounded according to paragraph (g)(3) of this section.
HC = Grams/mile HC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
(ii) For manufacturers complying with the fleet averaging option for N2O and CH4 as allowed under § 86.1818 of this chapter, the carbon-related exhaust emissions in grams per mile for 2012 and later model year automobiles fueled with liquefied petroleum gas and automobiles designed to operate on mixtures of gasoline and liquefied petroleum gas while operating on liquefied petroleum gas is to be calculated using the following equation and rounded to the nearest 1 gram per mile:
CREE = [(CWFHC/0.273) × NMHC] + (1.571 × CO) + CO2 + (298 × N2O) + (25 × CH4)
Where:
CREE means the carbon-related exhaust emission value as defined in § 600.002.
CWFHC = Carbon weight fraction of exhaust hydrocarbon = CWFfuel as determined in paragraph (f)(5) of this section and rounded according to paragraph (g)(3) of this section.
NMHC = Grams/mile NMHC as obtained in paragraph (g)(2) of this section.
CO = Grams/mile CO as obtained in paragraph (g)(2) of this section.
CO2 = Grams/mile CO2 as obtained in paragraph (g)(2) of this section.
N2O = Grams/mile N2O as obtained in paragraph (g)(2) of this section.
CH4 = Grams/mile CH4 as obtained in paragraph (g)(2) of this section.
(n) Manufacturers shall determine CO2 emissions and carbon-related exhaust emissions for electric vehicles, fuel cell vehicles, and plug-in hybrid electric vehicles according to the provisions of this paragraph (n). Subject to the limitations on the number of vehicles produced and delivered for sale as described in § 86.1866 of this chapter, the manufacturer may be allowed to use a value of 0 grams/mile to represent the emissions of fuel cell vehicles and the proportion of electric operation of electric vehicles and plug-in hybrid electric vehicles that is derived from electricity generated from sources that are not onboard the vehicle, as described in paragraphs (n)(1) through (3) of this section. For purposes of labeling under this part, the CO2 emissions for electric vehicles shall be 0 grams per mile. Similarly, for purposes of labeling under this part, the CO2 emissions for plug-in hybrid electric vehicles shall be 0 grams per mile for the proportion of electric operation that is derived from electricity that is generated from sources that are not onboard the vehicle. For manufacturers no longer eligible to use 0 grams per mile to represent electric operation, and for all 2027 and later model year electric vehicles, fuel cell vehicles, and plug-in hybrid electric vehicles, the provisions of this paragraph (n) shall be used to determine the non-zero value for CREE for purposes of meeting the greenhouse gas emission standards described in § 86.1818 of this chapter.
(1) For electric vehicles, but not including fuel cell vehicles, the carbon-related exhaust emissions in grams per mile is to be calculated using the following equation and rounded to the nearest one gram per mile:
CREE = CREEUP − CREEGAS
Where:
CREE means the carbon-related exhaust emission value as defined in § 600.002, which may be set equal to zero for eligible 2012 through 2026 model year electric vehicles as described in § 86.1866-12(a) of this chapter.
CREEUP = (EC/GRIDLOSS) × AVGUSUP
CREEGAS = (2478/8887) × TargetCO2
Where:
EC = The vehicle energy consumption in watt-hours per mile, for combined FTP/HFET operation, determined according to procedures established by the Administrator under § 600.116-12.
GRIDLOSS = 0.935 (to account for grid transmission losses).
AVGUSUP = 0.534 (the nationwide average electricity greenhouse gas emission rate at the powerplant, in grams per watt-hour).
2478 is the estimated grams of upstream greenhouse gas emissions per gallon of gasoline.
8887 is the estimated grams of CO2 per gallon of gasoline.
TargetCO2 = The CO2 Target Value for the fuel cell or electric vehicle determined according to § 86.1818 of this chapter for the appropriate model year.
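Under the CREEUP and CREEGAS forms reconstructed above, the (n)(1) computation reduces to a few lines. This sketch assumes those forms and is not certified regulatory code:

```cpp
#include <cmath>

// Net upstream CREE for an electric vehicle per paragraph (n)(1),
// assuming CREEUP = (EC/GRIDLOSS) * AVGUSUP and
// CREEGAS = (2478/8887) * TargetCO2 as reconstructed above.
double evCree(double ec_wh_per_mile, double target_co2) {
    const double GRIDLOSS = 0.935;  // grid transmission loss factor
    const double AVGUSUP  = 0.534;  // g GHG per watt-hour at the powerplant
    double cree_up  = (ec_wh_per_mile / GRIDLOSS) * AVGUSUP;
    double cree_gas = (2478.0 / 8887.0) * target_co2;
    return std::round(cree_up - cree_gas);  // nearest one gram/mile
}
```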
(2) For plug-in hybrid electric vehicles, the carbon-related exhaust emissions in grams per mile is to be calculated according to the provisions of § 600.116, except that the CREE for charge-depleting operation shall be the sum of the CREE associated with gasoline consumption and the net upstream CREE determined according to paragraph (n)(1) of this section, rounded to the nearest one gram per mile.
(3) For 2012 and later model year fuel cell vehicles, the carbon-related exhaust emissions in grams per mile shall be calculated using the method specified in paragraph (n)(1) of this section, except that CREEUP shall be determined according to procedures established by the Administrator under § 600.111-08(f). As described in § 86.1866 of this chapter, the value of CREE may be set equal to zero for 2012 through 2026 model year fuel cell vehicles.
(o) Equations for fuels other than those specified in this section may be used with advance EPA approval. Alternate calculation methods for fuel economy and carbon-related exhaust emissions may be used in lieu of the methods described in this section if shown to yield equivalent or superior results and if approved in advance by the Administrator.
[76 FR 39533, July 6, 2011, as amended at 77 FR 63179, Oct. 15, 2012; 81 FR 74000, Oct. 25, 2016; 85 FR 25271, Apr. 30, 2020]
|
July 14, 2020
### Digital Option - Overview, How It Works, Features, Example
14/07/2022 · A binary (digital) call pays one unit if the underlying finishes at or above the strike, and nothing otherwise:
$\text{Binary Call Option Payoff} = \begin{cases} 1, & \text{Underlying's Price} \geq \text{Exercise Price} \\ 0, & \text{otherwise} \end{cases}$
By contrast, the formula for long put option payoff is: P/L per share = max(strike price − underlying price, 0) − initial option price.
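A minimal sketch of those two payoffs (the function names are ours):

```cpp
#include <algorithm>

// Binary (digital) call: a fixed one-unit payoff if the underlying
// finishes at or above the strike, nothing otherwise.
double binaryCallPayoff(double s_T, double strike) {
    return s_T >= strike ? 1.0 : 0.0;
}

// Long put P/L per share: intrinsic value at expiry minus the premium paid.
double longPutPL(double s_T, double strike, double premium) {
    return std::max(strike - s_T, 0.0) - premium;
}
```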
### Bitcoin Investment App Iphone:Binary option payoff formula
05/06/2021 · The payoff of binary options differs from that of regular options. Binary options either have a positive payoff or none. In the case of a binary call, the holder receives the fixed payoff if the price at expiry, $S_T$, is larger than or equal to the strike …
### finance - Pricing a digital put option using BS model
14/05/2021 · The trader can buy the option for $40. If the price of the stock finishes above $65, the option expires in the money and is worth $100. The trader makes $60 …
### The Payoff function of the Digital Call option. - ResearchGate
14/07/2022 · Understanding option payoff charts: CF = what you sell the underlying for − what you buy the underlying for when exercising the option. CF per share = underlying price − strike price. CF = (underlying price − strike price) × number of option contracts × contract multiplier.
### Digital and Exotic Options - Financial Spread Betting
Consider European call and put cash-or-nothing options on a futures contract with an exercise (strike) price of $90 and a fixed payoff of $10, expiring on October 1, 2008. Assume that on January 1, 2008, the contract trades at $110, has a volatility of 25% per annum, and that the risk-free rate is 4.5% per annum.
### Forex in Peru: Digital option payoff formula
its payoff, $S_T$, with respect to the risk-neutral probability distribution to determine the risk-neutral expectation and then to discount. However, the value of a digital share can also be determined directly from the formula for the corresponding digital option without integration, as I now show. Let $G(S, t) \equiv \delta(S, t; T; K)/\big(S e^{(r-q)(T-t)}\big)$. Then
### Forex in Malaysia:
Binary Option | Payoff Formula | Example
### Binary options Indonesia: Digital option payoff diagram
14/07/2022 · A binary call option pays 1 unit when the price of the underlying (asset) is greater than or equal to the exercise price and zero when it is otherwise. This is expressed by the following formula:
$\text{Binary Call Option Payoff} = \begin{cases} 1, & \text{Underlying's Price} \geq \text{Exercise Price} \\ 0, & \text{otherwise} \end{cases}$
### Binary options Malaysia: Binary call option payoff
07/03/2011 · For a power option on a stock with price $S$, strike price $K$, and time to expiry $T$, the payoff is $\max(S_T^{\,p} - K, 0)$ for a call, and $\max(K - S_T^{\,p}, 0)$ for a put. Within the Black–Scholes model, closed-form solutions exist for the price of power options. In this Demonstration, prices as a function of the various parameters are explored. Contributed by: Peter Falloon (March 2011)
### Asset-Or-Nothing Call Option Definition - Investopedia
The payoff of a binary option, on the other hand, is just a fixed amount, unaffected by the size of the difference between the exercise price and the price of the underlying asset. A binary option depends on that difference only to determine whether the payoff occurs at all.
### Forex in Thailand: Digital option payoff formula
Digital Option - Overview, How It Works, Features, Example
### Determine price of cash-or-nothing digital options using Black
14/07/2022 · Generally speaking, this kind of risk is known as pin risk. Let $D(R) = \mathbf{1}_{R > K}$ be the payoff of the digital call. On the other hand, consider the following call spread, which is slightly different from yours (it uses backward differences instead of central differences): $S(R) = \frac{(R - (K - \varepsilon))^{+} - (R - K)^{+}}{\varepsilon}$.
### Binary options Malaysia: Binary call option payoff
05/06/2021 · It is also called a digital option because its payoff is just like a binary signal: i.e., 0 or 1. A binary call option pays 1 unit when the price of the underlying asset is greater than or equal to the exercise price and zero when it is otherwise. This is expressed by the same formula as above. A binary option payoff is …
### digital option pricing formula - panicpestcontrol1.com
14/07/2022 · Payoff for a put seller $= -\max(0, X - S_T)$. Profit for a put seller $= -\max(0, X - S_T) + p_0$, where $p_0$ is the put premium. The put buyer has a limited loss and, while not completely unlimited gains, as the price of the underlying cannot fall below zero, the put buyer …
### Payoff and profit/loss functions for call and put options
Digital Option - Overview, How It Works, Features, Example
C++ program to find whether there is a path between two cells in matrix
In this article, we will be discussing a program to find whether there exists a path between two cells in a given matrix.
Let us suppose we have been given a square matrix with possible values 0, 1, 2 and 3. Here,
• 0 means Blank Wall
• 1 means Source
• 2 means Destination
• 3 means Blank Cell
There can only be one Source and Destination in the matrix. The program is to see if there’s a possible path from Source to Destination in the given matrix moving in all four possible directions but not diagonally.
Example
#include <bits/stdc++.h>
using namespace std;
//creating a graph from the given matrix
class use_graph {
   int W;
   list<int> *adj; //one adjacency list per cell
   public :
   use_graph( int W ){
      this->W = W;
      adj = new list<int>[W];
   }
   void add_side( int source , int dest );
   bool search ( int source , int dest );
};
void use_graph :: add_side ( int source , int dest ){
   adj[source].push_back(dest);
}
//function to perform BFS
bool use_graph :: search(int source, int dest) {
   if (source == dest)
      return true;
   // initializing elements
   bool *visited = new bool[W];
   for (int i = 0; i < W; i++)
      visited[i] = false;
   list<int> queue;
   //marking the source visited and adding it to the queue
   visited[source] = true;
   queue.push_back(source);
   list<int>::iterator i;
   while (!queue.empty()){
      source = queue.front();
      queue.pop_front();
      //visiting every neighbour of the current cell
      for (i = adj[source].begin(); i != adj[source].end(); ++i){
         if (*i == dest)
            return true;
         if (!visited[*i]) {
            visited[*i] = true;
            queue.push_back(*i);
         }
      }
   }
   //if destination is not reached
   return false;
}
bool is_okay(int i, int j, int M[][4]) {
   if ((i < 0 || i >= 4) || (j < 0 || j >= 4 ) || M[i][j] == 0)
      return false;
   return true;
}
bool find(int M[][4]) {
   int source = 0 , dest = 0 ;
   int W = 4*4+2;
   use_graph g(W);
   int k = 1 ;
   for (int i = 0 ; i < 4 ; i++){
      for (int j = 0 ; j < 4; j++){
         if (M[i][j] != 0){
            if ( is_okay ( i , j+1 , M ) )
               g.add_side ( k , k+1 );
            if ( is_okay ( i , j-1 , M ) )
               g.add_side ( k , k-1 );
            if ( is_okay ( i+1 , j , M ) )
               g.add_side ( k , k+4 );
            if ( is_okay ( i-1 , j , M ) )
               g.add_side ( k , k-4 );
         }
         if( M[i][j] == 1 )
            source = k ;
         if (M[i][j] == 2)
            dest = k;
         k++;
      }
   }
   return g.search (source, dest) ;
}
int main(){
   int M[4][4] = { { 0 , 3 , 0 , 1 }, { 3 , 0 , 3 , 3 }, { 2 , 3 , 0 , 3 }, { 0 , 0 , 3 , 0 } };
   cout << ( find(M) ? "Possible" : "Not Possible" ) << endl;
   return 0;
}
Output
Not Possible
Published on 03-Oct-2019 12:26:59
# Kerodon
Go back to the page of Lemma 4.5.8.10.
Comment #611 by Tim Holzschuh on
Typo at the end of $(b)$: there is a superfluous ")". Typo in the third diagram: $c_Y$ should probably be $c_{X', Y}$
Typos after listing the properties:
• "Writing $X$ as the filtered colimit of its finite simplicial subsets ..."
• "If $X=\emptyset$, then $c_X$ is an isomorphism ...": $c_X$ should read $c_{X,Y}$.
You also write: "... and also for $n=0$ because $\Delta^0$ is a retract of $\Delta^1$ ...", but never show/mention that retracts of good simplicial sets are good (maybe on purpose of course!).
Comment #626 by Kerodon on
Yep. Thanks!
Comment #1144 by Xiaofa Chen on
Maybe typo at the end of ($c$): $v$ is an inner anodyne by virtue of Proposition 4.3.6.4.
Comment #1145 by Kerodon on
Yep. Thanks!
Comment #1309 by Bogdan Zavyalov on
noting that the upper and lower squares are categorical pushouts
I guess "upper and lower" should be "back and front".
Comment #1318 by Kerodon on
Yep. Thanks!
There are also:
• 2 comment(s) on Chapter 4: The Homotopy Theory of $\infty$-Categories
• 4 comment(s) on Section 4.5: Equivalence
IIT JAM
May 23, 2021 12:51 pm 30 pts
Is there any short trick to find the characteristic equation of a 4×4 matrix?
For a general 4×4 matrix there is no such short trick, but if the given matrix is of some special kind (e.g., a triangular matrix or a companion matrix), then there are particular tricks; see the example below.
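For instance, in the triangular case the characteristic polynomial can be read straight off the diagonal (a standard fact, added here as illustration):

$\det(\lambda I - A) = (\lambda - a_{11})(\lambda - a_{22})(\lambda - a_{33})(\lambda - a_{44})$ for any upper- or lower-triangular $4\times 4$ matrix $A = (a_{ij})$.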
# Is ML Algorithm-based Perception Safe?
Van Chan Ngo · March 19, 2021
I want to recall that Tesla Autopilot has failed more than once because of the perception module. This shows the unreliable nature of general machine learning (ML) algorithms. We know that a 90% precision level is very high for even highly trained ML models. That is great for many applications; however, it is dangerous for mission-critical applications such as automated driving.
We can consider self-driving cars to be probabilistic robots subject to many uncertainties. One of them is the unpredictable environment in which the robots operate. Thus, the training set of the perception module is very small compared with the number of all possible scenarios in the environment. In summary, we could say that we are deploying an unreliable module into a safety-critical system.
People can say that we have high-definition (HD) maps to support the decisions of the automated driving system. However, HD maps cannot deal with dynamic objects such as other vehicles (e.g., the semi-truck). Therefore, it is not safe for the system's logic to rely on perception outputs alone to make decisions.
This suggests that we might need to think about other solutions instead of relying on ML-algorithm-based perception alone. One possible solution is reasoning about the robustness of perception and the other modules in the automated driving system with respect to the uncertainties, as discussed in this post.
Now, let us think a little about a perception module (object detection and tracking) whose accuracy is 90%. Assume that it runs at a frequency of 10 Hz. We want to know the probability that it makes at least one wrong detection during one hour of operation. In one hour, it performs $$10 \times 60 \times 60 = 36000$$ inferences. Thus, the probability is $$1 - 0.9^{36000}$$, which is very close to $$1.0$$.
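A quick sanity check of that arithmetic (a throwaway sketch using the post's own numbers):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // P(at least one error in n independent inferences) = 1 - p^n.
    // Computed via logs; 0.9^36000 underflows to zero in a double anyway.
    double p = 0.9;            // per-inference accuracy
    int    n = 10 * 60 * 60;   // inferences in one hour at 10 Hz
    double all_correct = std::exp(n * std::log(p));  // about e^{-3793}, i.e. 0
    std::printf("P(error) = %.17g\n", 1.0 - all_correct);  // prints 1
}
```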
# Trig or treat
Geometry Level 5
Find the sum of the three lowest positive values of $x$, in degrees.
# This magnet question confounds me
1. Mar 19, 2013
### lodovico
1. The problem statement, all variables and given/known data
Today my teacher gave us some practice AP multiple-choice questions, and my friend and I were debating between answer choices H and J (changed from A and B to remove any confusion). The question, as I remember it:
blah blah... B-field into the page, constant velocity v, radius R, and charge q. Which graph shows the relationship between the B-field and the radius... something with the radius getting smaller as the B-field gets larger.
Choice H: a straight line coming from +y and crossing the +x axis (basically a straight negative slope)
Choice J: the (1/x) graph with x > 0
2. Relevant equations
Fc = mv^2/r
Fb= qvB
3. The attempt at a solution
Fc = Fb
$\frac{mv^2}{r} = qvB$
$\frac{mv}{r} = qB$
Since mv and q are constants, is it OK to treat them as 1? That is what I did, so...
$\frac{1}{r} = B$
$\frac{1}{B} = r$
Since on the graphs r was the y-axis and B was the x-axis,
I chose the (1/x)-shaped graph.
2. Mar 19, 2013
### Staff: Mentor
Yes, it is known as the Larmor radius:
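The relation behind that name, restated here since the link in the original post was cut off:

$r = \frac{mv}{qB}$

For fixed $m$, $v$, and $q$, the radius falls off as $1/B$, which is exactly the (1/x)-shaped graph of choice J.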