url (stringlengths 15 to 1.13k) | text (stringlengths 100 to 1.04M) | metadata (stringlengths 1.06k to 1.1k)
---|---|---|
https://physics.stackexchange.com/questions/154510/wave-particle-duality-as-result-of-taking-different-limits-of-a-qft | Wave/particle-duality as result of taking different limits of a QFT
There is an account of dualities in quantum field theories and string theories by Polchinski from last week:
http://arxiv.org/abs/1412.5704
At the end of page 4, he writes the wave/particle dichotomy arises from different limits you can take in a quantum field theory.
Which limits are meant here exactly, and can one give a proper example? I assume it might relate to many/few quanta states.
• The particle picture arises from QFT e.g. by looking at the path that gives the largest contribution to the Feynman path integral, which happens to be the classical path. Experimentally one can arrive at this with weak measurements e.g. in a cloud or bubble chamber on a single particle, with each interaction changing the momentum of the particle very little. This works on a "single" high-energy particle (although there are still a lot of individual quantum processes!). The wave picture emerges by looking at the collective of many quanta, each of which makes a single interaction. – CuriousOne Dec 22 '14 at 15:06
• I would be interested in seeing a for the interested amateur type answer to this. That is, an answer that expects you to work at it but doesn't assume any specific knowledge of QFT. If the consensus is that such an answer is possible/useful I'd be willing to place a bounty on the question. – John Rennie Dec 23 '14 at 10:54
• @CuriousOne : The least-action principle is known from classical mechanics. But I have doubts that picking a path gives a "particle picture". One can pick whatever one wishes. With classical bodies we can deprive the object of the other paths, i.e. limit its evolution to that one path, and it's O.K. But with a quantum object, if we only dare to limit its evolution at one point of that path, to $\Delta r = 0$, the particle may subsequently follow whatever paths in the universe. – Sofia Dec 23 '14 at 11:19
• OP here. The paper suggests there are two limits, a particle limit and a field limit. I'd like to know/see both limits, explicitly, with some QFT. – Nikolaj-K Dec 23 '14 at 13:05
• I've placed a (large!) bounty on this because I see it as an important contribution to writing the definitive article on wave particle duality. An answer targeted at the mathematically sophisticated amateur (like me :-) would be ideal. An answer of this type is likely to be long, because I'm guessing lots of side issues will also need to be explained. But then I'm offering the maximum bounty, and you have the Christmas/New Year holiday to write it in :-) – John Rennie Dec 26 '14 at 7:31
There are probably various answers to this question and I will try to provide one that I consider quite interesting. It is a specific realization/example of the fact that the path integral is dominated by extrema of the action.
The wave aspect of a QFT is probably trivial, as QFT deals with wave equations. This is particularly apparent for massless particles, and I will not discuss it any further.
Let me thus focus instead on the opposite limit, when the particles are very heavy. I will use the Schwinger proper time and heavily follow Matt Schwartz's textbook.
For simplicity, consider the propagator of a scalar particle in an external field source $A_\mu$, which in Schwinger proper time takes a path-integral form over the particle trajectory $$G_A(x,y)=\langle A|T\phi(x)\phi(y)|A\rangle=\int_0^\infty ds\, e^{-is m^2}\langle y| e^{-i\hat{H}s}|x\rangle$$ where $$\langle y| e^{-i\hat{H}s}|x\rangle =\int_{z(0)=x}^{z(s)=y} [dz(\tau)] e^{i\mathcal{L}(z,\dot{z})}$$ with $$\mathcal{L}=-\int_0^s d\tau \left(\frac{dz^\mu(\tau)}{2d\tau}\right)^2+e \int A_\mu(z) dz^\mu\,.$$ It is convenient to rescale the variables with the mass, $s\rightarrow s/m^2$ and $\tau\rightarrow \tau/m^2$, so that the path integral is clearly dominated by the free kinetic energy when the mass is large: $$G_A(x,y)=\frac{1}{m^2}\int_0^\infty ds\, e^{-is}\int_{z(0)=x}^{z(s/m^2)=y} [dz(\tau)]e^{-i\int_0^s d\tau\, m^2(\frac{dz^\mu}{d\tau})^2+i\int eA_\mu dz^\mu}$$ This is the limit of a particle that follows a well-defined trajectory, since the path integral is dominated by the point of stationary phase, which corresponds to the free-particle solution $$z^\mu(\tau)=x^\mu+\tau v^\mu\qquad v^\mu=(y-x)^\mu/s\,.$$ Moreover, on this solution the propagator becomes (after rescaling back to the original variables) $$G_A(x,y)=\int_0^\infty ds\, e^{-i\left[s m^2+\frac{(y-x)^2}{4s}-ev^\mu\int_0^s d\tau A_\mu(z(\tau))\right]}$$ where the last term is the same as the one obtained by adding the source current $$J_\mu=v_\mu \delta(x-v\tau)$$ so that the heavy particle creates the field $A_\mu$ as if it were moving along a classical trajectory at constant speed. As Schwartz says, when a particle is heavy the QFT can be approximated by treating the particle as a classical source (while treating everything else as quantum; e.g., the particle can still generate quantum radiation $A_\mu$, over which we have not yet integrated).
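To make the stationary-phase statement concrete, here is a small numerical sketch (my own illustration, not part of the original answer): a toy one-dimensional oscillatory integral in which the large parameter lam plays the role of $m^2$. The functions phi and g and all numerical values are arbitrary choices made for the demonstration, and numpy is assumed to be available.

```python
# Toy model of "the path integral is dominated by the stationary point of the phase":
# I(lam) = \int g(x) exp(i * lam * phi(x)) dx, with one stationary point of phi at x = 0.
import numpy as np

def phi(x):        # plays the role of the action; phi'(0) = 0, phi''(0) = 1
    return 0.5 * x**2 + 0.25 * x**4

def g(x):          # smooth, rapidly decaying weight so the integral converges numerically
    return np.exp(-x**2)

x = np.linspace(-8.0, 8.0, 2_000_001)
dx = x[1] - x[0]

for lam in (10.0, 100.0, 1000.0):
    numeric = np.sum(g(x) * np.exp(1j * lam * phi(x))) * dx      # brute-force quadrature
    # Leading-order stationary-phase estimate around x = 0:
    #   g(0) * exp(i*lam*phi(0)) * sqrt(2*pi/(lam*phi''(0))) * exp(i*pi/4)
    sp = np.sqrt(2.0 * np.pi / lam) * np.exp(1j * np.pi / 4.0)
    rel_err = abs(numeric - sp) / abs(numeric)
    print(f"lam = {lam:6.0f}   |I| = {abs(numeric):.5f}   stationary phase = {abs(sp):.5f}   rel. err = {rel_err:.2e}")
```

The error of the leading stationary-phase estimate shrinks as lam grows, which is the sense in which a very heavy particle is pinned to the classical (stationary) trajectory.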
• tl,dr: The stationary phase approximation is better for heavier, i.e. more classical, particles. Therefore you can understand the physics in that limit by considering only the extrema of the action, which is like tracing the path of a particle. – DanielSank Dec 29 '14 at 16:58
• @DanielSank I think you don't realize that extrema of the action can describe a priori also classical wave solutions, not necessarily particle-like solutions. In my answer I show a particular limit where the extremum of the action of a (quantum) field theory gives in fact a (classical) particle-like behavior as opposed to a (classical) wave behavior. – TwoBs Jan 1 '15 at 17:41
• Good point! I did not appreciate that. Thanks. – DanielSank Jan 1 '15 at 17:42
Wave particle duality is not a quantum physical issue! Here is a full description of its simple mechanism, exclusively based on special relativity which is easily understandable by any interested person.
Wave-particle duality is deeply embedded into the foundations of quantum mechanics (Wikipedia).
This statement is entirely disproved in the following by showing one case which may entirely be explained classically: light in vacuum.
The following derivation is based exclusively on the two postulates of special relativity, from which the entire model for light in vacuum follows directly and compellingly.
There is one unexplored zone in special relativity which seems to yield only meaningless results. When particles are moving not only near speed of light (v < c) but at speed of light (v=c), the Lorentz transforms cease to operate. The proper time is reduced mathematically to zero, but there is no reference system from which this could be observed. Also, lengths would be reduced to zero for such a hypothetical non-existent reference system.
As a consequence, the corresponding equations of special relativity (time dilation and length contraction) have up to now simply been confined to massive particles, excluding the case v=c from the domain of definition of these equations. There is no physical justification for such a break in their application (implying de facto a limitation of the universal validity of special relativity), and Einstein's special relativity does not cease to exist at v=c, as is shown by means of an example in the following chart: [chart not included in the extracted text]
As a consequence, it follows from the equations for proper time and length contraction that a photon which, according to our observations, travels the Sun-Earth distance of s = 8 light minutes in t = 8 minutes has, from its (hypothetical) own point of view, a proper time t' = 0 and travels a distance s' = 0.
If time and traveled distance are both zero, that would mean that there was no movement. When I travel zero meters in zero seconds, I did not move, and there is no movement which could be subject to a measurement of velocity. My velocity is not defined (0 m / 0 s).
The Lorentz factor splits realities
The twin paradox shows with unequaled clarity the effects of the Lorentz factor.
Example: A twin brother undertakes a space travel and returns after 20 years. At his return to Earth the twin brother who remained at home observes that the traveling twin aged only by 5 years.
In this example the observed time on the observer's clock is 20 years. The proper time (and thus the real aging) is only 5 years instead of 20 years. These two realities are linked arithmetically by the proper time equation and by the Lorentz factor.
Moreover we can notice a hierarchical order of realities: We cannot say that the traveling twin has become 20 years older, even if all observers on Earth have measured 20 years. This would be in contradiction with the physical condition of the traveling twin who looks younger than the twin who stayed on Earth. This means with regard to photons that the proper reality of the photon, even if it may not be observed by anyone, reflects its primary reality. All observations are secondary with regard to this primary reality. Even the constant of speed of light c.
As a consequence, and in accordance with the wording of the second postulate of special relativity, the velocity of light c is a secondary, observer's reality. We observe a movement of light which, according to the primary reality of the photon, is a standstill.
The Lorentz factor is assigning to photons two realities, that means, the transmission of the light momentum is double-tracked:
The secondary reality is the (commonly known) observed reality: Maxwell equations are describing a light quantum in the form of an electromagnetic wave moving at speed of light (v = c, t = 8 min, s = 8 light minutes). The transmission of the momentum occurs indirectly from Sun to the wave and then from the wave to Earth.
The primary reality is the unobserved proper reality of the photon: t'=0 and s'=0, proper time and distance are zero, there is no velocity. That means that the momentum is transmitted directly outside of spacetime from Sun to Earth, without intermediate medium.
Result:
1. A classical explanation of Young's double slit: while we are observing nothing but an interfering wave, the particle characteristics of light in vacuum are transmitted directly (path length = 0) and in parallel to the electromagnetic wave.
2. Light in vacuum is a primitive border case of quantum physics which can be explained classically. As a result, the mere wave-particle duality can be described without non-locality issue (see also the open (former bounty) question) as a classical phenomenon.
3. This fact does not change at all quantum physics with all its non locality issues. But it shows that there is one classical case of wave-particle duality, with no need of recourse to quantum mechanics and/ or QFT.
4. A simple answer to the question of NikolajK and John Rennie as to what the nature of wave-particle duality is.
• I appreciate the effort you've put in, but your answer seems to be unrelated to the original question or to the targets I laid out for the bounty. Just to clarify, while I'm interested in the question because I have a deeper interest in wave-particle duality, to earn this bounty you need to answer Nikolaj's question. – John Rennie Dec 26 '14 at 16:11
• @john rennie: No problem, I understand! Anyway thank you for this very nice Christmas bounty you offered to Stack Exchange users, I find this a very good idea! For your private interests in wave-particle duality, I remain at your disposal with regard to my text. – Moonraker Dec 26 '14 at 16:26
• What you call different "realities" are just different coordinate systems, it's fundamentally no different from the fact that you can describe the same Newtonian scenario with different Galilean coordinate systems that assign different x and y coordinates to a given event. And I don't see how your answer gives a non-quantum version of wave-particle duality, since classical electromagnetic waves aren't measured to set off detectors at highly localized positions like individual quanta (photons, electrons) are. – Hypnosifl Dec 31 '14 at 20:28
• @hypnosifl : Within the first reality there is no coordinate system, instead there is a banal pointlike reality which, however, is real. - Wave-particle duality of photons in vacuum can be explained mathematically and by analogy with the classical twin phenomenon ---- not to be confounded with the fact that photons may be subject to measurements of quantum physics. Photons in vacuum may be considered as primitive border case of quantum physics, characterized by their empty space time interval. – Moonraker Jan 1 '15 at 15:56
• "Wave-particle duality of photons in vacuum can be explained mathematically and by analogy with the classical twin phenomenon" -- Your answer doesn't make clear what precisely this analogy is supposed to consist of, it all seems rather handwavey. What exactly is the SR analogue of the "particle" aspect of the photon and what exactly is the SR analogue of the "wave" aspect, and what is the SR analogue of the "duality" aspect where a given experimental setup will only reveal one or the other? – Hypnosifl Jan 1 '15 at 16:55
These might be two different issues. Wave-particle duality is one issue, different classical limits is another issue.
Wave-particle duality often refers to the fact that when choosing an experiment historically, people sometimes chose options that revealed wave properties and sometimes chose options that revealed particle properties. So the hypothesis was that nature has both qualities waiting to be revealed by different choices of experimental setups that measure the same initial input.
As for classical limits, (assuming you aren't doing MIW or dBB) a classical limit is one where you can ignore (relative) phases (classical fields and particles are entirely real, they have no phase).
For a bosonic field, you can take something like a classical wave limit. If you take a high quantum-number limit that is also a coherent state, then there is no relative phase, so the phase can be ignored, and it looks like a classical field. So it's not just a high-quanta limit; you also need the coherence. I didn't go into much detail because Motl seems to cover it in detail at the level you are looking for in http://motls.blogspot.com/2011/11/how-classical-fields-particles-emerge.html
You can also take something like a classical particle limit; this is a low-quanta limit, but also a limit where the energy is kept high. So for the electromagnetic case, this would be single gamma rays, and then the scattering of a single quantum (where QFT reduces to just relativistic quantum mechanics, since there is only a single quantum). In this limit the phase doesn't matter for the scattering angle, and you can compute it as Compton scattering by a photon of fixed momentum $h\nu$. The details of the QFT-to-RQM limit (a single quantum) are well known, and I think the high-energy RQM scattering reduces to Compton scattering simply because there are so few options that conserve energy and momentum, and scattering states have to be on shell. Again, probably well known.
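As a small aside (mine, not the answerer's): the single-quantum kinematics mentioned above is the Compton formula, obtained by conserving energy and momentum for one on-shell photon scattering off an electron at rest. The 662 keV value below is an arbitrary choice of gamma-ray energy for the illustration.

```python
# Compton scattering of a single photon of energy E off an electron at rest:
#   E'(theta) = E / (1 + (E / (m_e c^2)) * (1 - cos(theta)))
import math

M_E_C2_KEV = 511.0   # electron rest energy in keV (rounded)

def compton_scattered_energy(e_kev: float, theta_rad: float) -> float:
    return e_kev / (1.0 + (e_kev / M_E_C2_KEV) * (1.0 - math.cos(theta_rad)))

for theta_deg in (0, 30, 60, 90, 180):
    e_prime = compton_scattered_energy(662.0, math.radians(theta_deg))
    print(f"theta = {theta_deg:3d} deg  ->  E' = {e_prime:6.1f} keV")
```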
None of this is as deep as I think you expected, but I wanted to provide the filling in of what I thought the authors meant and it might be things you probably already knew but they just didn't give enough details for you to know it was stuff you already knew. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8781872391700745, "perplexity": 513.6989755491862}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521139.30/warc/CC-MAIN-20210120151257-20210120181257-00647.warc.gz"} |
http://repository.aust.edu.ng/xmlui/handle/123456789/3902?show=full | # The monotone wrapped Fukaya category and the open-closed string map
dc.creator: Ritter, Alexander F
dc.creator: Smith, Ivan
dc.date.accessioned: 2016-07-19
dc.date.accessioned: 2018-11-24T23:26:51Z
dc.date.available: 2016-09-13T09:31:29Z
dc.date.available: 2018-11-24T23:26:51Z
dc.date.issued: 2016-08-09
dc.identifier: https://www.repository.cam.ac.uk/handle/1810/260145
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/123456789/3902
dc.description.abstract: We build the wrapped Fukaya category $\textit{W}$($\textit{E}$) for any monotone symplectic manifold $\textit{E}$, convex at infinity. We define the open-closed and closed-open string maps, OC : HH$_{*}$($\textit{W}$($\textit{E}$)) → $\textit{SH}^{*}$($\textit{E}$) and CO : $\textit{SH}^{*}$($\textit{E}$) → HH$^{*}$($\textit{W}$($\textit{E}$)). We study their algebraic properties and prove that the string maps are compatible with the $\textit{c}_1$($\textit{TE}$)-eigenvalue splitting of $\textit{W}$($\textit{E}$). We extend Abouzaid's generation criterion from the exact to the monotone setting. We construct an acceleration functor $\textit{AF}$ : $\textit{F}$($\textit{E}$) → $\textit{W}$($\textit{E}$) from the compact Fukaya category which on Hochschild (co)homology commutes with the string maps and the canonical map $\textit{c}^{*}$ : $\textit{QH}^{*}$($\textit{E}$) → $\textit{SH}^{*}$($\textit{E}$). We define the $\textit{SH}^{*}$($\textit{E}$)-module structure on the Hochschild (co)homology of $\textit{W}$($\textit{E}$) which is compatible with the string maps (this was proved independently for exact convex symplectic manifolds by Ganatra). The module and unital algebra structures, and the generation criterion, also hold for the compact Fukaya category $\textit{F}$($\textit{E}$), and also hold for closed monotone symplectic manifolds. As an application, we show that the wrapped category of $\textit{O}$(−$\textit{k}$) → $\Bbb {CP}^m$ is proper (cohomologically finite) for 1 ≤ $\textit{k}$ ≤ $\textit{m}$. For any monotone negative line bundle $\textit{E}$ over a closed monotone toric manifold $\textit{B}$, we show that $\textit{SH}^{*}$($\textit{E}$) $\neq$ 0, $\textit{W}$($\textit{E}$) is non-trivial and $\textit{E}$ contains a non-displaceable monotone Lagrangian torus $\textit{L}$ on which OC is non-zero.
dc.language: en
dc.publisher: Springer
dc.publisher: Selecta Mathematica
dc.title: The monotone wrapped Fukaya category and the open-closed string map
dc.type: Article
Ritter_et_al-2016-Selecta_Mathematica-AM.pdf1.052Mbapplication/pdfView/Open | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.960322380065918, "perplexity": 2451.5521947583934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00605.warc.gz"} |
http://en.wikipedia.org/wiki/Cayley-Dickson_construction | # Cayley–Dickson construction
In mathematics, the Cayley–Dickson construction, named after Arthur Cayley and Leonard Eugene Dickson, produces a sequence of algebras over the field of real numbers, each with twice the dimension of the previous one. The algebras produced by this process are known as Cayley–Dickson algebras. They are useful composition algebras frequently applied in mathematical physics.
The Cayley–Dickson construction defines a new algebra based on the direct sum of an algebra with itself, with multiplication defined in a specific way and an involution known as conjugation. The product of an element and its conjugate (or sometimes the square root of this) is called the norm.
The symmetries of the real field disappear as the Cayley–Dickson construction is repeatedly applied: first losing order, then commutativity of multiplication, and next associativity of multiplication.
More generally, the Cayley–Dickson construction takes any algebra with involution to another algebra with involution of twice the dimension.[1]
## Complex numbers as ordered pairs
Main article: Complex number
The complex numbers can be written as ordered pairs (a, b) of real numbers a and b, with the addition operator being component-by-component and with multiplication defined by
$(a, b) (c, d) = (a c - b d, a d + b c).\,$
A complex number whose second component is zero is associated with a real number: the complex number (a, 0) is the real number a.
Another important operation on complex numbers is conjugation. The conjugate (a, b)* of (a, b) is given by
$(a, b)^* = (a, -b).\,$
The conjugate has the property that
$(a, b)^* (a, b) = (a a + b b, a b - b a) = (a^2 + b^2, 0),\,$
which is a non-negative real number. In this way, conjugation defines a norm, making the complex numbers a normed vector space over the real numbers: the norm of a complex number z is
$|z| = (z^* z)^{1/2}.\,$
Furthermore, for any nonzero complex number z, conjugation gives a multiplicative inverse,
$z^{-1} = {z^* / |z|^2}.\,$
In as much as complex numbers consist of two independent real numbers, they form a 2-dimensional vector space over the real numbers.
Besides being of higher dimension, the complex numbers can be said to lack one algebraic property of the real numbers: a real number is its own conjugate.
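As a quick check (not part of the article), the pair arithmetic above can be coded directly; the helper names below are arbitrary:

```python
# Complex numbers as ordered pairs of reals, using the multiplication and
# conjugation formulas given above.
def mul(z, w):
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def conj(z):
    a, b = z
    return (a, -b)

z, w = (1.0, 2.0), (3.0, -4.0)
print(mul(z, w))          # (11.0, 2.0): (1+2i)(3-4i) = 11 + 2i
print(mul(conj(z), z))    # (5.0, 0.0):  z* z = |z|^2 = 1^2 + 2^2
```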
## Quaternions
Main article: Quaternion
The next step in the construction is to generalize the multiplication and conjugation operations.
Form ordered pairs $(a, b)$ of complex numbers $a$ and $b$, with multiplication defined by
$(a, b) (c, d) = (a c - d^* b, d a + b c^*).\,$
Slight variations on this formula are possible; the resulting constructions will yield structures identical up to the signs of bases.
The order of the factors seems odd now, but will be important in the next step. Define the conjugate $(a, b)^*\,$ of $(a, b)$ by
$(a, b)^* = (a^*, -b).\,$
These operators are direct extensions of their complex analogs: if $a$ and $b$ are taken from the real subset of complex numbers, the appearance of the conjugate in the formulas has no effect, so the operators are the same as those for the complex numbers.
The product of an element with its conjugate is a non-negative real number:
$(a, b)^* (a, b) = (a^*, -b) (a, b) = (a^* a + b^* b, b a^* - b a^*) = (|a|^2 + |b|^2, 0 ).\,$
As before, the conjugate thus yields a norm and an inverse for any such ordered pair. So in the sense we explained above, these pairs constitute an algebra something like the real numbers. They are the quaternions, named by Hamilton in 1843.
Inasmuch as quaternions consist of two independent complex numbers, they form a 4-dimensional vector space over the real numbers.
The multiplication of quaternions is not quite like the multiplication of real numbers, though. It is not commutative: if $p$ and $q$ are quaternions, it is not generally true that $p q = q p$. What is true is that $(p q)^* = q^* p^*$; that is, conjugation reverses the order of multiplication.
## Octonions
Main article: Octonion
From now on, all the steps will look the same.
This time, form ordered pairs $(p, q)$ of quaternions $p$ and $q$, with multiplication and conjugation defined exactly as for the quaternions:
$(p, q) (r, s) = (p r - s^* q, s p + q r^*).\,$
Note, however, that because the quaternions are not commutative, the order of the factors in the multiplication formula becomes important—if the last factor in the multiplication formula were $r^*q$ rather than $qr^*$, the formula for multiplication of an element by its conjugate would not yield a real number.
For exactly the same reasons as before, the conjugation operator yields a norm and a multiplicative inverse of any nonzero element.
This algebra was discovered by John T. Graves in 1843, and is called the octonions or the "Cayley numbers".
Inasmuch as octonions consist of two quaternions, the octonions form an 8-dimensional vector space over the real numbers.
The multiplication of octonions is even stranger than that of quaternions. Besides being non-commutative, it is not associative: that is, if $p$, $q$, and $r$ are octonions, it is generally not true that
$(p q) r = p (q r).\$
Because of this non-associativity, the octonions cannot be represented by matrices (matrix multiplication is always associative).
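The doubling step lends itself to a short recursive implementation. The sketch below (an illustration, not from the article; all names are arbitrary) applies the pair formulas above starting from the real numbers, and checks that the quaternion level is non-commutative while the octonion level fails associativity:

```python
# Recursive Cayley-Dickson doubling: elements are nested pairs built on floats,
# multiplied by (p, q)(r, s) = (p r - s* q, s p + q r*) and conjugated by
# (p, q)* = (p*, -q).
def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    (p, q), (r, s) = x, y
    return (add(mul(p, r), neg(mul(conj(s), q))),
            add(mul(s, p), mul(q, conj(r))))

def embed(coeffs):
    """Pack a flat list of 2^n real coefficients into nested pairs."""
    if len(coeffs) == 1:
        return float(coeffs[0])
    h = len(coeffs) // 2
    return (embed(coeffs[:h]), embed(coeffs[h:]))

# Quaternions (two doublings): i*j and j*i differ by a sign -> non-commutative.
i, j = embed([0, 1, 0, 0]), embed([0, 0, 1, 0])
print(mul(i, j), mul(j, i))

# Octonions (three doublings): the two bracketings of e1*e2*e4 disagree -> non-associative.
e1 = embed([0, 1, 0, 0, 0, 0, 0, 0])
e2 = embed([0, 0, 1, 0, 0, 0, 0, 0])
e4 = embed([0, 0, 0, 0, 1, 0, 0, 0])
print(mul(mul(e1, e2), e4))
print(mul(e1, mul(e2, e4)))
```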
## Further algebras
The algebra immediately following the octonions is called the sedenions. It retains an algebraic property called power associativity, meaning that if $s$ is a sedenion, $s^n s^m = s^{n + m}$, but loses the property of being an alternative algebra and hence cannot be a composition algebra.
The Cayley–Dickson construction can be carried on ad infinitum, at each step producing a power-associative algebra whose dimension is double that of the algebra of the preceding step. All the algebras generated in this way over a field are quadratic: that is, each element satisfies a quadratic equation with coefficients from the field.[2]
## General Cayley–Dickson construction
Albert (1942, p. 171) gave a slight generalization, defining the product and involution on $B = A \oplus A$ for A an algebra with involution (with (xy)* = y*x*) to be
$(p, q) (r, s) = (p r - \gamma s^* q, s p + q r^*)\,$
$(p, q)^* = (p^*, -q)\$
for γ an additive map that commutes with * and left and right multiplication by any element. (Over the reals all choices of γ are equivalent to −1, 0 or 1.) In this construction, A is an algebra with involution, meaning:
• A is an abelian group under +
• A has a product that is left and right distributive over +
• A has an involution *, with x** = x, (x + y)* = x* + y*, (xy)* = y*x*.
The algebra $B = A \oplus A$ produced by the Cayley–Dickson construction is also an algebra with involution.
B inherits properties from A unchanged as follows.
• If A has an identity 1A, then B has an identity (1A, 0).
• If A has the property that x + x*, xx* associate and commute with all elements, then so does B. This property implies that any element generates a commutative associative *-algebra, so in particular the algebra is power associative.
Other properties of A only induce weaker properties of B:
• If A is commutative and has trivial involution, then B is commutative.
• If A is commutative and associative then B is associative.
• If A is associative and x + x*, xx* associate and commute with everything, then B is an alternative algebra.
## Notes
1. ^ Schafer (1995) p.45
2. ^ Schafer (1995) p.50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 34, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9643745422363281, "perplexity": 435.94260315540953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424937406179.50/warc/CC-MAIN-20150226075646-00176-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/how-do-we-formalize-the-following-for-all-natural-nos.521923/ | How do we formalize the following:For all natural Nos :
1. Aug 16, 2011
solakis
How do we formalize the following:
For all natural Nos : $n^2$ is even$\Rightarrow$ n is even
2. Aug 16, 2011
Rasalhague
Re: formalization
Here's one way:
$$(\forall n \in \mathbb{N})[((\exists p \in \mathbb{N})[n^2=2p])\Rightarrow((\exists q\in \mathbb{N})[n=2q])].$$
Here's another:
$$\left \{ n \in \mathbb{N} \; | \; (\exists p \in \mathbb{N})[n^2 = 2p] \right \} \subseteq \left \{ n \in \mathbb{N} \; | \; (\exists q \in \mathbb{N})[n=2q] \right \}.$$
The symbol $\mathbb{N}$, like the name "the natural numbers", is ambiguous, so you might want to specify whether you mean the positive integers, $\mathbb{N}_1=\mathbb{Z}_+ = \left \{ 1,2,3,... \right \}$, or the non-negative integers, $\mathbb{N}_0 =\mathbb{Z}_+ \cup \left \{ 0 \right \}=\left \{ 0,1,2,3,... \right \}$.
3. Aug 18, 2011
solakis
Re: formalization
Thank you ,i think the first formula is the more suitable to use in formalized mathematics.
The question now is how do we prove this formula in formalized mathematics
4. Aug 18, 2011
Rasalhague
Re: formalization
$$1. \enspace (\exists p \in \mathbb{N}_0)[n^2 = 2p]$$
$$\Rightarrow ((\exists a \in \mathbb{N}_0)[n=2a]) \vee ((\exists b \in \mathbb{N}_0)[n=2b+1]).$$
$$2. \enspace (\exists b \in \mathbb{N}_0)[n=2b+1])$$
$$\Rightarrow (\exists q \in \mathbb{N}_0)[n^2=(2b+1)^2=4b^2+4b+1=2q+1]$$
$$\Rightarrow \neg (\exists p \in \mathbb{N}_0)[n^2 = 2p].$$
$$3. \enspace\therefore ((\exists p \in \mathbb{N}_0)[n^2 = 2p]) \Rightarrow (\exists a \in \mathbb{N}_0)[n=2a] \enspace\enspace \blacksquare$$
Or, to see the argument clearer, let $A = (\exists a \in \mathbb{N}_0)[n=2a]$ and let $B = (\exists b \in \mathbb{N}_0)[n=2b+1],$ and suppose $(\exists p \in \mathbb{N}_0)[n^2 = 2p]$. Then $A \vee B.$ But $\neg B.$ So $A.$
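For reference, here is the same argument annotated with the rule of inference used at each step (my own sketch, not part of the original post); it also makes explicit where the assumption "every natural number is even or odd, and not both" enters:
$$1. \enspace (\exists p \in \mathbb{N}_0)[n^2 = 2p] \qquad \text{(premise)}$$
$$2. \enspace ((\exists a \in \mathbb{N}_0)[n=2a]) \vee ((\exists b \in \mathbb{N}_0)[n=2b+1]) \qquad \text{(every natural number is even or odd)}$$
$$3. \enspace ((\exists b \in \mathbb{N}_0)[n=2b+1]) \Rightarrow \neg(\exists p \in \mathbb{N}_0)[n^2 = 2p] \qquad \text{(step 2 above, since } (2b+1)^2 = 2(2b^2+2b)+1 \text{ and no natural number is both even and odd)}$$
$$4. \enspace \neg(\exists b \in \mathbb{N}_0)[n=2b+1] \qquad \text{(modus tollens, 1 and 3)}$$
$$5. \enspace (\exists a \in \mathbb{N}_0)[n=2a] \qquad \text{(disjunctive syllogism, 2 and 4)} \enspace \blacksquare$$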
5. Aug 18, 2011
solakis
Re: formalization
I think that changing the variables inside the existential quantifier without first dropping the quantifier is not allowed by logic.
It also becomes confusing and impossible to follow.
I also wonder: is a formalized proof necessarily also a formal one??
Or, to put it a better way: is it possible to have a proof like the one above, where one uses only formulas, without mentioning the laws of logic and the theorems upon which these laws act to give us the formalized conclusions of the above proof??
6. Aug 19, 2011
Rasalhague
Re: formalization
Simple answer: I don't know. I'm curious. I hope someone better informed can give say more about this. Seems like it would at least be a good idea to make explicit which statements and which rules of inference are used in each line; so, we could append to the final line: / 1,2,DS. (Disjunctive syllogism.) I could be mistaken but here is what looks like an example of someone using formal and formalized as synonyms: http://arxiv.org/abs/math/0410224
A problem with my proof as it stands: a full, formal proof would have to justify the assumption that natural numbers are odd or even but not both.
I don't see what the problem is with substitution. Maybe you could elaborate?
Similar Discussions: How do we formalize the following:For all natural Nos : | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9262062907218933, "perplexity": 770.152732797749}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689779.81/warc/CC-MAIN-20170923213057-20170923233057-00559.warc.gz"} |
http://www.zora.uzh.ch/27116/ | # Translated Poisson approximation to equilibrium distributions of Markov population processes
Socoll, S N; Barbour, A D (2010). Translated Poisson approximation to equilibrium distributions of Markov population processes. Methodology and Computing in Applied Probability, 12(4):567-586.
## Abstract
The paper is concerned with the equilibrium distributions of continuous-time density dependent Markov processes on the integers. These distributions are known typically to be approximately normal, with $O( 1 /{\sqrt{n}})$ error as measured in Kolmogorov distance. Here, an approximation in the much stronger total variation norm is established, without any loss in the asymptotic order of accuracy; the approximating distribution is a translated Poisson distribution having the same variance and (almost) the same mean. Our arguments are based on the Stein–Chen method and Dynkin’s formula.
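As a rough illustration of the kind of approximation described in the abstract (my own sketch, not taken from the paper, and using a binomial distribution rather than the equilibrium of a Markov population process; numpy and scipy are assumed available), one can shift a Poisson distribution by an integer so that the first two moments nearly match and then measure the total variation distance:

```python
# Translated Poisson approximation of an integer-valued distribution:
# shift + Poisson(lambda), with the shift chosen so the mean matches exactly
# and the variance matches to within less than 1 (conventions differ slightly).
import numpy as np
from scipy import stats

n, p = 200, 0.3
mu, var = n * p, n * p * (1 - p)

shift = int(np.floor(mu - var))      # integer translation
lam = mu - shift                     # Poisson mean, so that shift + lam = mu exactly

k = np.arange(0, n + 1)
target_pmf = stats.binom.pmf(k, n, p)
tp_pmf = stats.poisson.pmf(k - shift, lam)   # pmf of shift + Poisson(lam) at the points k

tv_distance = 0.5 * np.sum(np.abs(target_pmf - tp_pmf))
print(f"shift = {shift}, lambda = {lam:.1f}, total variation distance = {tv_distance:.4f}")
```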
Item Type: Journal Article, refereed, original work
07 Faculty of Science > Institute of Mathematics
510 Mathematics
English
2010
Deposited: 04 Feb 2010 14:54
05 Apr 2016 13:44
Publisher: Springer
ISSN: 1387-5841
The original publication is available at www.springerlink.com
https://doi.org/10.1007/s11009-009-9124-8
http://arxiv.org/abs/0902.0884
Permanent URL: https://doi.org/10.5167/uzh-27116
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9058666229248047, "perplexity": 1364.0551168974368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721387.11/warc/CC-MAIN-20161020183841-00413-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www2.ccs.tsukuba.ac.jp/Astro/achievements/ja/2021/11/01/inoue2021a/ | # 研究成果・発表論文
## Fragmentation of ring galaxies and transformation to clumpy galaxies
### Inoue, Shigeki, Yoshida, Naoki, & Hernquist, Lars
##### Abstract
We study the fragmentation of collisional ring galaxies (CRGs) using a linear perturbation analysis that computes the physical conditions of gravitational instability, as determined by the balance of self-gravity of the ring against pressure and Coriolis forces. We adopt our formalism to simulations of CRGs and show that the analysis can accurately characterize the stability and onset of fragmentation, although the linear theory appears to underpredict the number of fragments of an unstable CRG by a factor of 2. In addition, since the orthodox ’density- wave’ model is inapplicable to such self-gravitating rings, we devise a simple approach that describes the rings propagating as material waves. We find that the toy model can predict whether the simulated CRGs fragment or not using information from their pre-collision states. We also apply our instability analysis to a CRG discovered at a high redshift, z = 2.19. We find that a quite high-velocity dispersion is required for the stability of the ring, and therefore the CRG should be unstable to ring fragmentation. CRGs are rarely observed at high redshifts, and this may be because CRGs are usually too faint. Since the fragmentation can induce active star formation and make the ring bright enough to observe, the instability could explain this rarity. An unstable CRG fragments into massive clumps retaining the initial disc rotation, and thus it would evolve into a clumpy galaxy with a low surface density in an interclump region. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8358187675476074, "perplexity": 1453.5998940798283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00219.warc.gz"} |
https://www.physicsforums.com/threads/summer-assignment-resulting-slick.128419/ | # Summer Assignment - resulting slick
1. Aug 8, 2006
I've seen questions like this one done here before, but I don't think I'm understanding this well. The question is:
Suppose 300 cubic meters of oil is spilled into the ocean. Find the area of the resulting slick, assuming that it is one molecule thick, and each molecule occupies a cube 0.50 um (micrometers) on a side.
I believe the proper formula for this situation is Area=Volume/Depth. So then I would do Area=300 meters cubed/.5 micrometers. Convert the micrometers to meters and get Area=300 meters cubed/.0000005 meters and have my answer being 600,000,000 meters squared.
Is that right though? For some reason I think it's wrong, and I hadn't any idea what to do at all until I found the formula, but it just seems like... I don't know, a bit too easy, and like maybe I'm missing something. It also appears that the answer is ridiculously large, but maybe it's supposed to be like that? Any help on this would be appreciated; I would be shocked if what I did was correct.
Last edited: Aug 8, 2006
2. Aug 8, 2006
### Staff: Mentor
Maybe an easier way (or just a double-check on your answer) would be to figure out how many molecules there are in 300m^3 of oil and do the area math. Do you know how to figure out how many molecules there are in 300m^3 of oil?
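Here is a quick numerical version of both routes (a sketch, not from the thread): the direct area = volume / thickness formula used in the first post, and the molecule-counting cross-check suggested above.

```python
# Oil slick: V = 300 m^3 spread one molecule thick, molecule size 0.50 micrometers.
volume = 300.0          # m^3
thickness = 0.50e-6     # m (side of one cubic molecule)

# Route 1: area = volume / thickness
area_direct = volume / thickness

# Route 2 (cross-check): count molecules, each covering a square of side 0.50 um
n_molecules = volume / thickness**3
area_from_count = n_molecules * thickness**2

print(f"area (direct)         = {area_direct:.3e} m^2")   # 6.000e+08 m^2
print(f"area (molecule count) = {area_from_count:.3e} m^2")
```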
3. Aug 8, 2006
### Office_Shredder
Staff Emeritus
Looks right to me.
berkeman, I don't think that's an easier way ;)
4. Aug 9, 2006
Im not sure what you mean by your way but, you say I actually did it right? Thanks for confirming what I did, I was having a lot of trouble on that problem and didn't even get as far as finding the area formula until I foudn this website, which is when I quit guessing what to do and trying to look it up becuase I was so confused. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9060786366462708, "perplexity": 867.2630623675151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719960.60/warc/CC-MAIN-20161020183839-00073-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-9-section-9-3-the-parabola-exercise-set-page-996/15 | ## Precalculus (6th Edition) Blitzer
focus $(0,-\frac{1}{8})$, directrix $y=\frac{1}{8}$ see graph.
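A quick numerical cross-check of this answer (an illustrative sketch; the worked steps follow below): for $x^2 = 4py$ the focus is $(0,p)$ and the directrix is $y=-p$, and here $4p=-\frac{1}{2}$.

```python
# Cross-check of the focus and directrix of x^2 = -(1/2) y, i.e. y = -2 x^2.
from fractions import Fraction

four_p = Fraction(-1, 2)
p = four_p / 4
print("focus:", (0, p))         # (0, -1/8)
print("directrix: y =", -p)     # y = 1/8
```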
Step 1. Rewriting the equation as $x^2=-\frac{1}{2}y$, we have $4p=-\frac{1}{2}$ and $p=-\frac{1}{8}$ with the parabola opening downwards and vertex at $(0,0)$. Step 2. We can find the focus at $(0,-\frac{1}{8})$ and directrix as $y=\frac{1}{8}$ Step 3. We can graph the parabola as shown in the figure. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597610592842102, "perplexity": 230.9895069564388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513230.90/warc/CC-MAIN-20200606093706-20200606123706-00369.warc.gz"} |
http://support.sas.com/documentation/cdl/en/statug/68162/HTML/default/statug_hpfmm_syntax02.htm | # The HPFMM Procedure
### BAYES Statement
• BAYES bayes-options;
The BAYES statement requests that the parameters of the model be estimated by Markov chain Monte Carlo sampling techniques. The HPFMM procedure can estimate by maximum likelihood the parameters of all models supported by the procedure. Bayes estimation, on the other hand, is available for only a subset of these models.
In Bayesian analysis, it is essential to examine the convergence of the Markov chains before you proceed with posterior inference. With ODS Graphics turned on, the HPFMM procedure produces graphs at the end of the procedure output; these graphs enable you to visually examine the convergence of the chain. Inferences cannot be made if the Markov chain has not converged.
The output produced for a Bayesian analysis is markedly different from that for a frequentist (maximum likelihood) analysis for the following reasons:
• Parameter estimates do not have the same interpretation in the two analyses. Parameters are fixed unknown constants in the frequentist context and random variables in a Bayesian analysis.
• The results of a Bayesian analysis are summarized through chain diagnostics and posterior summary statistics and intervals.
• The HPFMM procedure samples the mixing probabilities in Bayesian models directly, rather than mapping them onto a logistic (or other) scale.
The HPFMM procedure applies highly specialized sampling algorithms in Bayesian models. For single-component models without effects, a conjugate sampling algorithm is used where possible. For models in the exponential family that contain effects, the sampling algorithm is based on Gamerman (1997). For the normal and t distributions, a conjugate sampler is the default sampling algorithm for models with and without effects. In multi-component models, the sampling algorithm is based on latent variable sampling through data augmentation (Frühwirth-Schnatter 2006) and the Gamerman or conjugate sampler. Because of this specialization, the options for controlling the prior distributions of the parameters are limited.
Table 51.3 summarizes the bayes-options available in the BAYES statement. The full assortment of options is then described in alphabetical order.
Table 51.3: BAYES Statement Options

Option | Description
---|---
*Options Related to Sampling* |
INITIAL= | Specifies how to construct initial values
NBI= | Specifies the number of burn-in samples
NMC= | Specifies the number of samples after burn-in
METROPOLIS | Forces a Metropolis-Hastings sampling algorithm even if conjugate sampling is possible
OUTPOST= | Generates a data set that contains the posterior estimates
THIN= | Controls the thinning of the Markov chain
*Specification of Prior Information* |
MIXPRIORPARMS | Specifies the prior parameters for the Dirichlet distribution of the mixing probabilities
BETAPRIORPARMS= | Specifies the parameters of the normal prior distribution for individual parameters in the $\beta$ vector
MUPRIORPARMS= | Specifies the parameters of the prior distribution for the means in homogeneous mixtures without effects
 | Specifies the parameters of the inverse gamma prior distribution for the scale parameters in homogeneous mixtures
PRIOROPTIONS | Specifies additional options used in the determination of the prior distribution
*Posterior Summary Statistics and Convergence Diagnostics* |
DIAGNOSTICS= | Displays convergence diagnostics for the Markov chain
 | Displays posterior summary information for the Markov chain
*Other Options* |
ESTIMATE= | Specifies which estimate is used for the computation of OUTPUT statistics and graphics
 | Specifies the time interval to report on sampling progress (in seconds)
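For orientation, the following sketch (not an example from the SAS documentation; the data set and variable names are made up) combines several of the options summarized in Table 51.3: a longer chain with burn-in and thinning, a saved posterior sample, extra convergence diagnostics, and an explicit normal prior for the regression parameters.

proc hpfmm data=mydata seed=12345;
model y = x / k=2;
bayes nbi=5000 nmc=20000 thin=2
      betapriorparms=(0,100)
      outpost=postsample
      diagnostics=(autocorr ess geweke);
run;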
You can specify the following bayes-options in the BAYES statement.
BETAPRIORPARMS=pair-specification
BETAPRIORPARMS(pair-specificationpair-specification)
specifies the parameters for the normal prior distribution of the parameters that are associated with model effects (the $\beta$s). The pair-specification is of the form (a, b), and the values a and b are the mean and variance of the normal distribution, respectively. This option overrides the PRIOROPTIONS option.
The form of the BETAPRIORPARMS option with an equal sign and a single pair is used to specify one pair of prior parameters that applies to all components in the mixture. In the following example, the two intercepts and the two regression coefficients all have a normal prior distribution with mean 0 and variance 100:
proc hpfmm;
model y = x / k=2;
bayes betapriorparms=(0,100);
run;
You can also provide a list of pairs to specify different sets of prior parameters for the various regression parameters and components. For example:
proc hpfmm;
model y = x/ k=2;
bayes betapriorparms( (0,10) (0,20) (.,.) (3,100) );
run;
The simple linear regression in the first component has a normal prior with mean 0 and variance 10 for the intercept and a normal prior with mean 0 and variance 20 for the slope. The prior for the intercept in the second component uses the HPFMM default, whereas the prior for the slope is normal with mean 3 and variance 100.
DIAGNOSTICS=ALL | NONE | (keyword-list)
DIAG=ALL | NONE | (keyword-list)
controls the computation of diagnostics for the posterior chain. You can request all posterior diagnostics by specifying DIAGNOSTICS=ALL or suppress the computation of posterior diagnostics by specifying DIAGNOSTICS=NONE. The following keywords enable you to select subsets of posterior diagnostics; the default is DIAGNOSTICS=(AUTOCORR).
AUTOCORR <(LAGS= numeric-list)>
computes for each sampled parameter the autocorrelations of lags specified in the LAGS= list. Elements in the list are truncated to integers, and repeated values are removed. If the LAGS= option is not specified, autocorrelations are computed by default for lags 1, 5, 10, and 50. See the section Autocorrelations in Chapter 7: Introduction to Bayesian Analysis Procedures, for details.
ESS
computes an estimate of the effective sample size (Kass et al. 1998), the correlation time, and the efficiency of the chain for each parameter. See the section Effective Sample Size in Chapter 7: Introduction to Bayesian Analysis Procedures, for details.
GEWEKE <(geweke-options)>
computes the Geweke spectral density diagnostics (Geweke 1992), which are essentially a two-sample t test between the first portion and the last portion of the chain. The default is FRAC1=0.1 and FRAC2=0.5, but you can choose other fractions by using the following geweke-options:
FRAC1=value
specifies the fraction for the first window.
FRAC2=value
specifies the fraction for the second window.
See the section Geweke Diagnostics in Chapter 7: Introduction to Bayesian Analysis Procedures, for details.
HEIDELBERGER <(Heidel-options)>
HEIDEL <(Heidel-options)>
computes the Heidelberger and Welch diagnostic (which consists of a stationarity test and a half-width test) for each variable. The stationary diagnostic test tests the null hypothesis that the posterior samples are generated from a stationary process. If the stationarity test is passed, a half-width test is then carried out. See the section Heidelberger and Welch Diagnostics in Chapter 7: Introduction to Bayesian Analysis Procedures, for more details.
These diagnostics are not performed by default. You can specify the DIAGNOSTICS=HEIDELBERGER option to request these diagnostics, and you can also specify suboptions, such as DIAGNOSTICS=HEIDELBERGER(EPS=0.05), as follows:
SALPHA=value
specifies the level for the stationarity test. By default, SALPHA=0.05.
HALPHA=value
specifies the level for the half-width test. By default, HALPHA=0.05.
EPS=value
specifies a small positive number such that if the half-width is less than this value times the sample mean of the retained iterates, the half-width test is passed. By default, EPS=0.1.
MCERROR
MCSE
computes an estimate of the Monte Carlo standard error for each sampled parameter. See the section Standard Error of the Mean Estimate in Chapter 7: Introduction to Bayesian Analysis Procedures, for details.
MAXLAG=n
specifies the largest lag used in computing the effective sample size and the Monte Carlo standard error. Specifying this option implies the ESS and MCERROR options. The default is MAXLAG=250.
RAFTERY <(Raftery-options)>
RL <(Raftery-options)>
computes the Raftery and Lewis diagnostics, which evaluate the accuracy of the estimated quantile ($\hat{\theta}_Q$ for a given Q) of a chain. $\hat{\theta}_Q$ can achieve any degree of accuracy when the chain is allowed to run for a long time. The algorithm stops when the estimated probability $\hat{P}_Q = \Pr(\theta \le \hat{\theta}_Q)$ reaches within $\pm R$ of the value Q with probability S; that is, $\Pr(Q - R \le \hat{P}_Q \le Q + R) = S$. See the section Raftery and Lewis Diagnostics in Chapter 7: Introduction to Bayesian Analysis Procedures, for more details. The Raftery-options enable you to specify Q, R, S, and a precision level for a stationary test.
These diagnostics are not performed by default. You can specify the DIAGNOSTICS=RAFTERY option to request these diagnostics, and you can also specify suboptions, such as DIAGNOSTICS=RAFTERY(QUANTILE=0.05), as follows:
QUANTILE=value
Q=value
specifies the order (a value between 0 and 1) of the quantile of interest. By default, QUANTILE=0.025.
ACCURACY=value
R=value
specifies a small positive number as the margin of error for measuring the accuracy of estimation of the quantile. By default, ACCURACY=0.005.
PROB=value
S=value
specifies the probability of attaining the accuracy of the estimation of the quantile. By default, PROB=0.95.
EPS=value
specifies the tolerance level (a small positive number between 0 and 1) for the stationary test. By default, EPS=0.001.
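For example (again a sketch rather than an excerpt from the documentation), several diagnostics and their suboptions can be requested in one keyword-list:

proc hpfmm seed=12345;
model y = / k=2;
bayes diagnostics=(autocorr(lags=1 5 10) geweke(frac1=0.2) heidelberger(salpha=0.01) raftery(quantile=0.05));
run;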
MIXPRIORPARMS=K
MIXPRIORPARMS(value-list)
specifies the parameters used in constructing the Dirichlet prior distribution for the mixing parameters. If you specify MIXPRIORPARMS=K, the parameters of the k-dimensional Dirichlet distribution are a vector that contains the number of components in the model (k), whatever that might be. You can specify an explicit list of parameters in value-list. If the MIXPRIORPARMS option is not specified, the default Dirichlet parameter vector is a vector of length k of ones. This results in a uniform prior over the unit simplex; for k=2, this is the uniform distribution. See the section Prior Distributions for the distribution function of the Dirichlet as used by the HPFMM procedure.
ESTIMATE=MEAN | MAP
determines which overall estimate is used, based on the posterior sample, in the computation of OUTPUT statistics and certain ODS graphics. By default, the arithmetic average of the (thinned) posterior sample is used. If you specify ESTIMATE=MAP, the parameter vector is used that corresponds to the maximum log posterior density in the posterior sample. In any event, a message is written to the SAS log if postprocessing results depend on a summary estimate of the posterior sample.
INITIAL=DATA | MLE | MODE | RANDOM
determines how initial values for the Markov chain are obtained. The default when a conjugate sampler is used is INITIAL=DATA, in which case the HPFMM procedure uses the same algorithm to obtain data-dependent starting values as it uses for maximum likelihood estimation. If no conjugate sampler is available or if you use the METROPOLIS option to explicitly request that it not be used, then the default is INITIAL=MLE, in which case the maximum likelihood estimates are used as the initial values. If the maximum likelihood optimization fails, the HPFMM procedure switches to the default INITIAL=DATA.
The options INITIAL=MODE and INITIAL=RANDOM use the mode and random draws from the prior distribution, respectively, to obtain initial values. If the mode does not exist or if it falls on the boundary of the parameter space, the prior mean is used instead.
METROPOLIS
requests that the HPFMM procedure use the Metropolis-Hastings sampling algorithm based on Gamerman (1997), even in situations where a conjugate sampler is available.
MUPRIORPARMS=pair-specification
MUPRIORPARMS(pair-specification ... pair-specification)
specifies the parameters for the means in homogeneous mixtures without regression coefficients. The pair-specification is of the form (a, b), where a and b are the two parameters of the prior distribution, optionally delimited with a comma. The actual distribution of the parameter is implied by the distribution selected in the MODEL statement. For example, it is a normal distribution for a mixture of normals, a gamma distribution for a mixture of Poisson variables, a beta distribution for a mixture of binary variables, and an inverse gamma distribution for a mixture of exponential variables. This option overrides the PRIOROPTIONS option.
The parameters correspond as follows:
Beta:
The parameters correspond to the a and b parameters of the beta prior distribution, such that its mean is a/(a+b) and its variance is ab/((a+b)^2(a+b+1)).
Normal:
The parameters correspond to the mean and variance of the normal prior distribution.
Gamma:
The parameters correspond to the a and b parameters of the gamma prior distribution, which together determine its mean and variance.
Inverse gamma:
The parameters correspond to the a and b parameters of the inverse gamma prior distribution, which together determine its mean and variance.
The two techniques for specifying the prior parameters with the MUPRIORPARMS option are as follows:
• Specify an equal sign and a single pair of values:
proc hpfmm seed=12345;
model y = / k=2;
bayes mupriorparms=(0,50);
run;
• Specify a list of parameter pairs within parentheses:
proc hpfmm seed=12345;
model y = / k=2;
bayes mupriorparms( (.,.) (1.4,10.5));
run;
If you specify an invalid value (outside of the parameter space for the prior distribution), the HPFMM procedure chooses the default value and writes a message to the SAS log. If you want to use the default values for a particular parameter, you can also specify missing values in the pair-specification. For example, the preceding list specification assigns default values for the first component and uses the values 1.4 and 10.5 for the mean and variance of the normal prior distribution in the second component. The first example assigns the same prior distribution, with parameters 0 and 50, to the means in both components.
NBI=n
specifies the number of burn-in samples. During the burn-in phase, chains are not saved. The default is NBI=2000.
NMC=n
SAMPLE=n
specifies the number of Monte Carlo samples after the burn-in. Samples after the burn-in phase are saved unless they are thinned with the THIN= option. The default is NMC=10000.
OUTPOST<(outpost-options)>=data-set
requests that the posterior sample be saved to a SAS data set. In addition to variables that contain log likelihood and log posterior values, the OUTPOST data set contains variables for the parameters. The variable names for the parameters are generic (Parm_1, Parm_2, ..., Parm_p). The labels of the parameters are descriptive and correspond to the "Parameter Mapping" table that is produced when the OUTPOST= option is in effect.
You can specify the following outpost-options in parentheses:
LOGPRIOR
adds the value of the log prior distribution to the data set.
NONSINGULAR | NONSING | COMPRESS
eliminates parameters that correspond to singular columns in the design matrix (and were not sampled) from the posterior data set. This is the default.
SINGULAR | SING
adds columns of zeros to the data set in positions that correspond to singularities in the model or to parameters that were not sampled for other reasons. By default, these columns of zeros are not written to the posterior data set.
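For example, statements like the following (hypothetical) save the posterior sample, together with the log prior values, to the data set POST:
proc hpfmm seed=12345;
model y = / k=2;
bayes outpost(logprior)=post;
run;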
PHIPRIORPARMS=pair-specification
PHIPRIORPARMS(pair-specification ... pair-specification)
specifies the parameters for the inverse gamma prior distribution of the scale parameters (the φ's) in the model. The pair-specification is of the form (a, b), and the values are chosen such that the prior distribution has mean b/(a-1) and variance b^2/((a-1)^2(a-2)).
The form of the PHIPRIORPARMS with an equal sign and a single pair is used to specify one pair of prior parameters that applies to all components in the mixture. For example:
proc hpfmm seed=12345;
model y = / k=2;
bayes phipriorparms=(2.001,1.001);
run;
The form with a list of pairs is used to specify different prior parameters for the scale parameters in different components. For example:
proc hpfmm seed=12345;
model y = / k=2;
bayes phipriorparms( (.,1.001) (3.001,2.001) );
run;
If you specify an invalid value (outside of the parameter space for the prior distribution), the HPFMM procedure chooses the default value and writes a message to the SAS log. If you want to use the default values for a particular parameter, you can also specify missing values in the pair-specification. For example, the preceding list specification assigns the default value for the a prior parameter of the first component and uses the value 1.001 for its b prior parameter. The second pair assigns 3.001 and 2.001 for the a and b prior parameters, respectively.
PRIOROPTIONS <=>(prior-options)
PRIOROPTS <=>(prior-options)
specifies options related to the construction of the prior distribution and the choice of their parameters. Some prior-options apply only in particular models. The BETAPRIORPARMS= and MUPRIORPARMS= options override this option.
You can specify the following prior-options:
CONDITIONAL | COND
chooses a conditional prior specification for the homogeneous normal and t distribution response components. The default prior specification in these models is an independence prior for the mean of the hth component; the conditional prior instead specifies the prior for the component mean conditionally on the component's scale parameter.
DEPENDENT | DEP
chooses a data-dependent prior for the homogeneous models without effects. The prior parameters a and b are chosen as follows, based on the distribution in the MODEL statement:
Binary and binomial:
The parameter a is determined from the data, b=1, and the prior distribution for the success probability is a beta distribution with parameters a and b.
Poisson:
Both a and b are determined from the data, and the prior distribution for the Poisson mean is a gamma distribution with parameters a and b. See Frühwirth-Schnatter (2006, p. 280) and Viallefont, Richardson, and Greene (2002).
Exponential:
Both a and b are determined from the data, and the prior distribution for the component mean is inverse gamma with parameters a and b.
Normal and t:
Under the default independence prior, the prior distribution for the component mean is a normal distribution whose variance involves the variance factor f from the VAR= option and whose remaining parameters are determined from the data.
Under the default conditional prior specification, the prior for the component mean is a normal distribution with data-dependent parameters. The prior for the scale parameter is inverse gamma with first parameter 1.28 and a data-dependent second parameter. For further details, see Raftery (1996) and Frühwirth-Schnatter (2006, p. 179).
VAR=f
specifies the variance for normal prior distributions. The default is VAR=1000. This factor is used, for example, in determining the prior variance of regression coefficients or in determining the prior variance of means in homogeneous mixtures of t or normal distributions (unless a data-dependent prior is used).
MLE<=r>
specifies that the prior distribution for regression variables be based on a multivariate normal distribution centered at the MLEs and whose dispersion is a multiple r of the asymptotic MLE covariance matrix. The default is MLE=10. In other words, if you specify PRIOROPTIONS(MLE), the HPFMM procedure chooses as the prior distribution for the regression variables a multivariate normal distribution whose mean is the vector of maximum likelihood estimates and whose covariance matrix is r times the asymptotic MLE covariance matrix. The prior for the scale parameter is inverse gamma with first parameter 1.28 and a data-dependent second parameter.
For further details, see Raftery (1996) and Frühwirth-Schnatter (2006, p. 179). If you specify PRIOROPTIONS(MLE) for the regression parameters, then the data-dependent prior is used for the scale parameter; see the PRIOROPTIONS(DEPENDENT) option above.
The MLE option is not available for mixture models in which the parameters are estimated directly on the data scale, such as homogeneous mixture models or mixtures of distributions without model effects for which a conjugate sampler is available. By using the METROPOLIS option, you can always force the HPFMM procedure to abandon a conjugate sampler in favor of a Metropolis-Hastings sampling algorithm to which the MLE option applies.
STATISTICS <(global-options)> = ALL | NONE | keyword | (keyword-list)
SUMMARIES <(global-options)> = ALL | NONE | keyword | (keyword-list)
controls the number of posterior statistics produced. Specifying STATISTICS=ALL is equivalent to specifying STATISTICS=(SUMMARY INTERVAL). To suppress the computation of posterior statistics, specify STATISTICS=NONE. The default is STATISTICS=(SUMMARY INTERVAL). See the section Summary Statistics in Chapter 7: Introduction to Bayesian Analysis Procedures, for more details.
The global-options include the following:
ALPHA=numeric-list
controls the coverage levels of the equal-tail credible intervals and the highest posterior density (HPD) credible intervals. The ALPHA= values must be between 0 and 1. Each ALPHA= value produces a pair of 100(1 - ALPHA)% equal-tail and HPD credible intervals for each sampled parameter. The default is ALPHA=0.05, which results in 95% credible intervals for the parameters.
PERCENT=numeric-list
requests the percentile points of the posterior samples. The values in numeric-list must be between 0 and 100. The default is PERCENT=(25 50 75), which yields for each parameter the 25th, 50th, and 75th percentiles, respectively.
The list of keywords includes the following:
SUMMARY
produces the means, standard deviations, and percentile points for the posterior samples. The default is to produce the 25th, 50th, and 75th percentiles; you can modify this list with the global PERCENT= option.
INTERVAL
produces equal-tail and HPD credible intervals. The default is to produce the 95% equal-tail credible intervals and 95% HPD credible intervals, but you can use the ALPHA= global-option to request credible intervals for any probabilities.
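For example, statements like the following (hypothetical) request posterior summaries together with 90% equal-tail and HPD credible intervals:
proc hpfmm seed=12345;
model y = / k=2;
bayes statistics(alpha=0.1)=(summary interval);
run;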
THIN=n
THINNING=n
controls the thinning of the Markov chain after the burn-in. Only one in every k samples is used when THIN=k, and if NBI=b and NMC=n, the number of samples kept is
[(b + n)/k] - [b/k]
where [a] represents the integer part of the number a. The default is THIN=1—that is, all samples are kept after the burn-in phase.
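For example, under this formula, with the defaults NBI=2000 and NMC=10000, specifying THIN=5 keeps [(2000 + 10000)/5] - [2000/5] = 2400 - 400 = 2000 samples.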
TIMEINC=n
specifies a time interval in seconds to report progress during the burn-in and sampling phase. The time interval is approximate, since the minimum time interval in which the HPFMM procedure can respond depends on the multithreading configuration. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.937881350517273, "perplexity": 890.0937035790649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823220.45/warc/CC-MAIN-20171019031425-20171019051425-00303.warc.gz"} |
https://tex.stackexchange.com/questions/427394/appendices-in-tables-of-contents-under-appendices-section | # Appendices in tables of contents under Appendices section
I have a very simple article which consists of the following:
\documentclass{article}
\usepackage{appendix}
\begin{document}
\tableofcontents
\appendixtitleon
\appendixtitletocon
\section{Writing}
\begin{appendices}
\section{First appendix}
\subsection{First}
\subsection{Second}
\section{Second appendix}
\subsection{First}
\subsection{Second}
\end{appendices}
\end{document}
This produces the following output:
This is almost what I want, except that I actually want the appendices to come under an appendices section in the table of contents (everything indented a level, so to speak). The output would be like:
Add the [toc] option:
\documentclass{article}
\usepackage[toc]{appendix}
\begin{document}
\tableofcontents
\appendixtitleon
\appendixtitletocon
\section{Writing}
\begin{appendices}
\section{First appendix}
\subsection{First}
\subsection{Second}
\section{Second appendix}
\subsection{First}
\subsection{Second}
\end{appendices}
\end{document}
• I have tried this, doesn't this add them as sections? They are not under the Appendices section, they are just listed below them? – Warmley Apr 19 '18 at 12:30
• I don't really understand what you mean. Appendix A, Appendix B are sections. The [toc] just adds an intertitle, just to say: here begins the appendices part. – Bernard Apr 19 '18 at 12:36
• Yeah that wasn't very clear. What I'm looking for is the appendix section to exist in the table of contents, but with each appendix being a subsection (I'm not sure if that is right). So the indent is something like the provided output. – Warmley Apr 19 '18 at 12:39
• I think it should be possible to leave each appendix as a section, but have a subsection layout in the table of contents with titletoc and apptools. – Bernard Apr 19 '18 at 12:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8073651790618896, "perplexity": 1095.045502345009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986697439.41/warc/CC-MAIN-20191019164943-20191019192443-00343.warc.gz"} |
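One low-tech sketch of that idea (untested here, and using neither titletoc nor apptools): write \let assignments into the .toc file at the start of the appendices so that the appendix section entries are typeset one ToC level deeper, while the headings in the body keep their normal size. The macro name \tocleveldown below is made up for this example.
\documentclass{article}
\usepackage[toc]{appendix}
\makeatletter
\newcommand*\tocleveldown{%
  \addtocontents{toc}{\string\let\string\l@section\string\l@subsection}%
  \addtocontents{toc}{\string\let\string\l@subsection\string\l@subsubsection}}
\makeatother
\begin{document}
\tableofcontents
\section{Writing}
\begin{appendices}
\tocleveldown
\section{First appendix}
\subsection{First}
\section{Second appendix}
\end{appendices}
\end{document}
The [toc] option still supplies the top-level Appendices line, and every \section issued after \tocleveldown is listed with subsection-style indentation (a second LaTeX run is needed for the .toc file to settle).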
https://www.physicsforums.com/threads/rotational-mechanics-and-angular-momentum-help-needed-to-check-over-work.269422/ | # Rotational mechanics and Angular Momentum- help needed to check over work
1. Nov 4, 2008
### ultrapowerpie
Please note, this is a series of problems, with a series of answers. I believe that I understand how to do the problems, but for some reason, my answer is wrong, and I can think of no reason why.
1. The problem statement, all variables and given/known data
Problem 1: http://img355.imageshack.us/my.php?image=physics1jw9.png
i) I need to find the speed of the mass as it passes point B in m/s
ii) I need to find the Tension of the string (Which I am sure I can do, but without a correct answer for part i, I'm not going to attempt yet)
Problem 2: http://img186.imageshack.us/my.php?image=physics2pt6.png
Problem 3: http://img355.imageshack.us/my.php?image=physics3el2.png
(Note, the word that is cut off is longitudinal. I had to copy and paste parts of it into one picture, sorry it's a little shoddy <.<)
2. Relevant equations
All the lovely rotational mechanics equations, and their linear counterparts
3. The attempt at a solution
Problem 1
i) Ok, I did not take into account the spokes on the wheel whatsoever, since the moment of inertia was given to me, and the spokes of the wheel did not seem to have their own mass. Not sure if this would affect the problem.
T=I(angular a)
Angular acceleration = Torque/I
Torque= 46 KG * 9.8 m/s^2 * 3m = 1352.4
I= 3/4* 23 KG * 3^2 = 155.25
Ang A= 8.7111
a = (Ang A)*r
a= 8.711 * 3 = 26.133
Plugging this acceleration into the classic Xf = Xi + Vi*t + (a*t^2)/2 (assuming Vi and Xi are 0)
I get t=2.106
Then, using the Vf= Vi +at
I get Vf= 55.05
This answer is wrong though, not sure where I screwed up
Problem 2
This one seemed simple enough, just a basic torque= I * ang a
Ang a = T/I
I=(1/12)*(2.7)*(3^2)= 2.025
T= rxF
F= M*G= 9.8*2.7= 26.46
r= 3cos(43)=2.045
T= 54.137
Ang a= 26.734
It wants it in Radians/sec^2. Is it bad that I was using degrees for the cos?
Problem 3
However, upon trying the formulas given there, I still got it wrong. Using the formula ShawnD had at the end of his post, I got the wrong answer.
m= IW/rV
I= 2950
r=2.9
V=755
I got M to be .282 kg
I then divided by .012 kg, and got that time = 21.87 s
This answer is also wrong, not sure what happened.
Thank you in advance for looking over this work.
2. Nov 5, 2008
### ultrapowerpie
Slight bump, sorry, but homework's due by Thursday Night. >.>
http://mathoverflow.net/questions/24849/path-connected-set-of-matrices/24876 | # (Path) connected set of matrices?
Let $N \in \mathfrak{M}_n(\mathbb{C})$ nilpotent, such that there exists $X \in \mathfrak M_n(\mathbb{C})$ with $X^2=N$ (take for instance $n>2$ and $N(1,n)=1$; $N(i,j)=0$ otherwise).
Denote by $\mathcal{S}_N$ the set of $X \in \mathfrak M_n(\mathbb{C})$ such that $X^2=N$. Is $\mathcal{S}_N$ connected or path-connected ? What happens when we change $2$ by $3,4,\ldots$?
-
This is a memorial to an incorrect solution that used to be here. Unfortunately, I can't delete it since it was accepted.
-
thanks, that finishes it off! – Xandi Tuni May 17 '10 at 8:22
The first statement about closure and refining partitions is not clear, can you elaborate? What about the following 6x6 matrix: the upper left and the lower right quarters are standard 3x3 Jordan blocks, and the upper right quarter has $\epsilon$ in its center (everything else is 0). I believe it has Jordan type (4,2) but converges to (3,3) as $\epsilon$ goes to 0. – Sergei Ivanov May 17 '10 at 9:28
The relevant order on partitions is not refinement, but dominance (en.wikipedia.org/wiki/Dominance_order). Thus two elements of $S_N$ can be in the same connected component only if the corresponding partitions can be "linked", using dominance, via partitions whose "square" is the partition of N. It is not clear to me that this is also sufficient: you are only allowed to conjugate elements of $S_N$ by matrices commuting with N, so even if the "poset connectedness" condition holds, you may not be able to conclude that the various elements of $S_N$ are in the same connected component. – damiano May 17 '10 at 10:44
Sergei -- I don't know about that particular example, but yes, I was too hasty with the first statement. – algori May 17 '10 at 11:50
Edit: This is just half an answer: I can only show that the sets matrices with $X^2=N$ and fixed Jordan type are path connected.
Every nilpotent matrix is conjugate to a nilpotent matrix in Jordan form, which is unique up to permutation of Jordam blocks. So we have a bijection
$$\mathrm{Nilp}_n(\mathbb C)/\mathrm{conjugation} \quad \cong \quad \mbox{integer partitions of }n$$
associating with a conjugacy class of a nilpotent matrix $X$ the sizes of its Jordan blocks $(a_1, \ldots, a_r)$ which sum up to $n$. The max of the $a_i$'s is the nilpotency-degree of $X$. To the class of $X^2$ is associated the partition $$(\lfloor (a_1 +1)/2 \rfloor, \lfloor a_1/2\rfloor ,\lfloor (a_2 +1)/2 \rfloor ,\lfloor a_2/2\rfloor, \ldots, \lfloor a_r/2 \rfloor)$$
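To see the formula in a couple of small cases: a single Jordan block of size $5$ has square of Jordan type $(3,2)$, while type $(2,2)$ squares to $(1,1,1,1)$.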
From here we can derive a necessary and sufficient condition on a nilpotent matrix to be a square.
Now fix your preferred nilpotent matrix $N$. Let $X$ be a matrix with $X^2=N$ and Jordan type $(a_1, \ldots, a_r)$. Conjugating the whole setup, we may assume $X$ is in Jordan form.
Let $Y$ be a matrix with $Y^2=X^2 = N$ having the same Jordan type as $X$, and let us construct a path from $X$ to $Y$. Since $X$ and $Y$ have the same Jordan type, there exists an invertible matrix $S$ with $Y=SXS^{-1}$. Because $Y^2=X^2=N$ the matrix $S$ commutes with $N$. It is enough to construct a path from the identity matrix to $S$ in the set $\mathcal C_N$ of invertible matrices that commute with $N$.
I claim $\mathcal C_N$ is path connected (for just any $N$). Indeed, the set $[N]$ of all matrices that commute with $N$ is a linear subspace of the vector space $\mathrm M_n(\mathbb C)$. The determinant, as a function on $[N]$, is a polynomial function which is not identically zero since the identity matrix belongs to $[N]$. Thus, the zero set of the determinant is Zariski closed, so $\mathcal C_N$ is Zariski open in $[N]$. Any nonempty Zariski-open subset of a complex vector space is path connected.
What remains is to connect different Jordan types. We certainly can connect $(5,2)$ with $(5,1,1)$ by replacing the $1$ in the $2\times 2$ block with $t$ and varying $t$ from $1$ to zero. The problem that remains is to connect, say, type $(4,2)$ with type $(3,3)$ as pointed out in the comments.
-
How do you connect Jordan type (3,3) to (4,2)? They have the same square. – Sergei Ivanov May 16 '10 at 11:45
Jordan type $(3,3)$ has square $(2,1,2,1) = (2,2,1,1)$ and Jordan type $(4,2)$ has square $(3,1,1,1)$. Is that not so? – Xandi Tuni May 16 '10 at 11:57
No, the square of Jordan type (4) is (2,2). – Sergei Ivanov May 16 '10 at 12:02
Oops, my fault. I will look if I can fix that. – Xandi Tuni May 16 '10 at 12:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.890880823135376, "perplexity": 244.32887160203643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736673439.5/warc/CC-MAIN-20151001215753-00166-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/orbital-motion.883553/ | # I Orbital Motion
1. Aug 29, 2016
### Phys_Boi
So I'm really interested in orbital mechanics. I'm only 16 so my knowledge of physics is restricted to an intermediate level. If there is a planet with large mass and a planet with small mass, they are attracted to each other... So imagine a system where the large mass is fixed and the small object has a constant velocity. How can I model (I'm a big programmer and would like to create a script for this) the path the object takes around the planet? I know this has to do with the balance between the Fg and the velocity...
Thank you for any help.
2. Aug 29, 2016
### Phys_Boi
I would like to also point out that I don't want only circular orbits. I would like for the program to be able to compute all easily computable orbits.
3. Aug 29, 2016
### Staff: Mentor
Have fun!
https://en.wikipedia.org/wiki/Orbital_mechanics
4. Aug 30, 2016
### Phys_Boi
Thank you!
5. Aug 30, 2016
### nasu
If the velocity is constant there is no path around the planet. Constant velocity means that the trajectory is a straight line. So there is nothing to program about it.
It is not enough to be good at programming to solve physics problems. It helps a lot if you spend some time to understand the physics and the meaning of the terms used in physics.
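For a first script, a minimal sketch along these lines may help (treating the given speed as an initial velocity rather than a constant one, with the large mass fixed at the origin; the gravitational parameter, starting state, and step size below are arbitrary illustration values):
import math

GM = 1.0            # gravitational parameter G*M of the fixed large mass (arbitrary units)
x, y = 1.0, 0.0     # initial position of the small object
vx, vy = 0.0, 0.8   # initial velocity
dt = 0.001          # time step

for step in range(100000):
    r = math.hypot(x, y)
    ax = -GM * x / r**3      # acceleration components from Newtonian gravity
    ay = -GM * y / r**3
    vx += ax * dt            # semi-implicit Euler: update velocity first,
    vy += ay * dt
    x += vx * dt             # then position with the updated velocity
    y += vy * dt
    if step % 10000 == 0:
        print(f"t={step*dt:.1f}  x={x:+.3f}  y={y:+.3f}")
Depending on the initial speed, the printed positions trace a closed (elliptical) or open (parabolic or hyperbolic) path around the central mass.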
https://zbmath.org/?q=an:0863.60052 | ×
Large deviations for solutions of stochastic equations. (English. Russian original) Zbl 0863.60052
Theory Probab. Appl. 40, No. 4, 660-678 (1995); translation from Teor. Veroyatn. Primen. 40, No. 4, 764-785 (1995).
Let $D([0,T]; R^d)$ be a Skorokhod space, $\mu^\varepsilon(A)=P\{\xi^\varepsilon(\cdot)\in A\}$, $A\in {\mathcal B}(D([0,T]; R^d))$, $\varepsilon > 0$, be a family of probability measures, corresponding to the $d$-dimensional locally infinitely divisible processes $\xi^\varepsilon(t)$, $t\geq 0$, $\varepsilon > 0$, defined on some filtered probability space. A general principle of large deviations is proved for the family $\{\mu^\varepsilon, \varepsilon>0\}$ in terms of the local characteristics of $\xi^\varepsilon$, $\varepsilon>0$. Some special cases are discussed in detail.
##### MSC:
60H10 Stochastic ordinary differential equations (aspects of stochastic analysis) 60G48 Generalizations of martingales 60F10 Large deviations | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8524187207221985, "perplexity": 401.94886595382724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385389.83/warc/CC-MAIN-20210308174330-20210308204330-00453.warc.gz"} |
https://www.physicsforums.com/threads/statistical-geometry.81287/ | # Statistical Geometry
1. Jul 5, 2005
### BicycleTree
You have two tools:
1. A straight edge
2. The ability to judge any distance to within 20%, or any angle to within 20 degrees.
What can you construct with these tools, with an unlimited number of steps? In particular, is there a method for constructing an approximate circle that tends towards perfect accuracy as the number of steps you use tends towards infinity?
For an example of what you can do, here is how to construct an approximate parallel line to a given line m through a point P:
First prepare a length x = 100 light years.
1. draw a line n through P that seems, by your angle judgment, to be about parallel to m. This line is either parallel to m (statistically impossible) or it intersects m at point Q.
2. Does PQ appear, to your 20% accurate length judgment, to be at least 21% greater than x? If not, return to step 1. Otherwise, let x = PQ, and either return to step 1 or decide you are finished and n is your nearly parallel line.
Given enough time, this method will produce an asymptotically accurate parallel line. It will never be exactly accurate. The fact that it may take a huge number of steps to become accurate is not important. Also, please disregard the fact that by some statistical fluke it might take a vast number of steps yet still not be any more accurate than it was to begin with--under reasonable expectations of chance, its accuracy will increase.
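To make the claim concrete with one accepted iteration: if PQ appears at least 21% longer than x, then, because the length judgment errs by at most 20%, the true length PQ is at least 1.21x/1.2, which exceeds x. So every accepted pass pushes the intersection point Q strictly farther from P; over many passes PQ grows without bound, and the angle between n and m shrinks toward zero.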
So, can you find a similar method for creating a circle? Feel free to share any interesting sub-algorithms you come across. One that I am working on is how to, given an angle, reflect it over one of its rays.
2. Jul 6, 2005
### Jimmy Snyder
How do you know this won't result in an infinite loop?
3. Jul 7, 2005
### Rahmuss
Mathematically I'm not so hot; but could I...
1 - Measure out a square as close as possible to exactl right angles.
2 - Measure out another square at 45 degrees from the angles of that square; but on top of it.
3 - Keep bisecting the angles of the squares with another square on top of that and so on until you get a fairly roundish circle.
In fact, if you use the straight edge and make all the lines exactly that long, then you won't be off on that distance by 20% making it more accurate. As the number of steps approaches infinity it becomes more of a circle.
In fact, the more squares you put, the greater accuracy you'll have because the angle gets smaller and smaller so that 20 degrees is smaller visually.
http://img242.imageshack.us/img242/1171/makingcircles9em.png [Broken]
Last edited by a moderator: May 2, 2017
4. Jul 8, 2005
### BicycleTree
Jimmy, I said:
And here is a more exact description of just what you can do.
You may name a point in any region you want, or along any line or line segment you want, or at any intersection you want.
Given two points A and B, you may draw line AB.
Given an angle ABC, you may name a degree amount x so that the difference between the measure of ABC and x is less than or equal to 20 degrees.
Given a line segment AB, you may name a distance in inches x so that the difference between AB and x is less than or equal to 0.2 times the true length of AB.
Assume nothing about the randomness or non-randomness of the two guessing methods, unless it turns out you absolutely can't proceed unless you assume they are random. However, since I don't think you can proceed otherwise, assume that when you select points from a region or along a line or line segment, the selection is random.
Rahmuss, you have not shown that the steps you are describing are possible. For example, is it even possible to bisect an angle using the tools I have described? Is it possible to create a right angle?
Last edited: Jul 8, 2005
5. Jul 8, 2005
### NateTG
It appears to be possible to create an arbitrary angle using your current rules. (Consider the possibility, for example, of starting with a 180 degree angle, and stepping down in increments of 20 degrees.)
That means, that the following is possible:
Pick the number of steps you want to take (minimum 19 or so). Call that number $n$.
In 5 steps you can construct a 90 degree angle.
In (at most) another 5 steps you can construct a $\frac{540}{n-10}$ degree angle.
Now, construct $\frac{n-10}{2}$ lines that meet at a point so that the smallest angles are all $\frac{540}{n-10}$.
Choose an arbitrary point along one of the legs, and construct a perpendicular to both adjacent legs. Then reflect this perpendicular across the clockwise adjacent leg. Proceed clockwise around all ${n-10}$ legs (this takes only $\frac{n-10}{2}$ steps, since the reflection is over every other leg).
This constructs a regular $\frac{n-10}{2}$ gon.
Clearly, you can use this construction to build a regular polygon with an arbitrary number of sides - so you can approach a circle.
Last edited: Jul 8, 2005
6. Jul 9, 2005
### BicycleTree
"Stepping down" is not one of the rules.
It is not possible to certainly construct a given angle in any fixed number of steps. It is only possible to approach that construction in a large number of steps.
You have not shown that the steps of your method are possible, so I cannot accept this as a solution. You need to show how to construct any given angle, how to construct a perpendicular to a given line through a given point, and how to perform a reflection.
7. Jul 11, 2005
### BicycleTree
Well, the only way I know of to make the circle is by using a kind of "trick," that's not explicitly allowed in the rules (though, one might argue, it is a not unreasonable extrapolation of those rules). Without using that trick, the best I can come up with is an ellipse.
Warm-up Question: How do you approximate an ellipse using these tools?
8. Jul 11, 2005
### NateTG
Seems pretty obvious that if I start with a 180 degree angle, then I can name x at 160, and so on.
Do you mean:
Given an angle ABC it is possible to construct another angle DEF so that the difference in the measures of ABC and DEF is less than 20 degrees?
9. Jul 11, 2005
### BicycleTree
The rule is, "Given an angle ABC, you may name a degree amount x so that the difference between the measure of ABC and x is less than or equal to 20 degrees."
You don't get to choose how much less than or equal to 20 degrees it is. The only thing you know when you name x is that the error of x as an approximation to the measure of ABC is less than or equal to 20 degrees.
For example, let's say I have an angle ABC and I use the rule to name the degree amount "fifty degrees." Now I know that the measure of angle ABC lies somewhere between thirty and seventy degrees, inclusive.
I suppose the rule is slightly oddly worded.
Last edited: Jul 11, 2005
10. Jul 11, 2005
### NateTG
I don't think your rules are sufficiently clear.
11. Jul 11, 2005
### BicycleTree
I'll try again:
1. You may name a point in any region you want, or along any line or line segment you want, or at any intersection you want. If you name a point in a region or along a line or line segment, the point has equal probability of being in any arbitrary subset of the region, line, or line segment. (Relying on this working for infinite regions is, by the way, the sort of "cheat")
2. Given two points A and B, you may draw line (or line segment) AB.
3. Given an angle ABC, you have the function angleguess(ABC) that returns an arbitrary degree measure x so that the difference between x and the measure of ABC is less than or equal to 20 degrees.
4. Given a line segment AB, you have the function lengthguess(AB) that returns an arbitrary length x so that the difference between x and the length of AB is less than or equal to 0.2 times the true length of AB.
All points and lines lie in the Euclidean plane.
Last edited: Jul 11, 2005
12. Jul 11, 2005
### NateTG
So there is no way to transfer angles or lengths except for picking at random, and seeing whether you're close?
13. Jul 11, 2005
### BicycleTree
Yes; an inefficient method, but you have unlimited (though finite) steps to make your approximate circle in.
I got to thinking about this when I was drawing somewhat inaccurate circles by hand and thinking, I can draw a straight line pretty well, and I can approximately guess distances and angles, so if you idealize those procedures as I did here, is there any algorithm to get as close to a circle as you like?
I figured out the answer a little while ago: yes, there is, and without using the "cheat." Hint in white: The key to the method I found is finding a way to use the approximate distance guessing to get nearly exact distance guessing.
Last edited: Jul 11, 2005
14. Jul 11, 2005
### NateTG
It's got to be pretty tricky unless you make assumptions about the distribution - otherwise the potential for malicious randomness will almost certainly make it impossible.
15. Jul 12, 2005
### mustafa
Hey, I am able to construct an ellipse.
Using BicycleTree's algorithm for parallel lines we can construct a parallelogram:
Draw a straight line m.
Take two points A and B on the line some distance apart.
Draw a line n through B at any angle to m.
Take a point C on n.
Draw a line parallel to m through C.
Draw a line parallel to n through A.
Suppose the above two lines intersect in D. Then ABCD is a parallelogram.
Since opposite sides of a parallelogram are congruent, the length AB is transferred to CD. Drawing another parallelogram, say BDCE, the length AB is further transferred to BE on the same line m. In this way we can get any no. of congruent segments on a given line. Also drawing parallel lines we can use n congruent segments on a line to divide a line segment into n equal parts.
So we have developed two more tools: we can draw parallelograms and we can divide a given line segment in any no of equal parts.
Therefore an ellipse can be easily constructed using the parallelogram method.
16. Jul 17, 2005
### BicycleTree
Congratulations, Mustafa.
Actually, I was in error about my original scheme for measuring lengths exactly. The method I had in mind was first squaring a given length, then squaring the result, and so on to a power of 2^n (n integer), and then measuring the final length and using that measurement to find the original length to much greater accuracy. But the method was flawed because it required an original estimation of a short length to perform the first squaring procedure, and the error in that estimation becomes amplified with every squaring so it nullifies the benefit from the power operation. Trying to use some exact technique like letting the small estimated length be 1/10 of the length to be squared does not result in an actual squaring.
I was typing a further reply detailing a mistake I had made in my original solution, the "cheating" method for creating a nearly exact circle, and a correct solution for creating a nearly exact circle, but then I realized that the "randomness" guarantee of rule 1 is impossible. Corrected version of rule 1 (changes underlined):
1. You may name a point in any region you want, or along any named line or line segment you want, or at any intersection you want. If you name a point in a region or along a line, ray or line segment, the point has equal probability of being in any two arbitrary subsets of equal area or length of the region, line, ray, or line segment. If the region, ray, or line has infinite area or length and other points have already been named, then the new named point is named only finitely far away from other named points (I don't think I really need to state this, except to be clear)
Now, hopefully, you can do the problem. The "cheating" method makes extensive use of this rule with infinite regions, and can reasonably be argued not to work, but there is at least one method that doesn't stretch the rules.
There are no guarantees on the randomness or nonrandomness of the two estimation functions, though that wasn't what was wrong with my earlier idea.
17. Jul 24, 2005
### BicycleTree
Well, what I had in mind for the solution there was also flawed, and I have not been able to come up with one that is not flawed. I think that it is not possible for an intuitive but incomplete reason: if you imagine that you have constructed a circle, stretch the entire plane by a factor of 15% in one direction, and any length measurement from the original plane could also be chosen to apply to the stretched plane, due to the measurement error. I believe that the angle measurement provision does not change the problem since angles can also be measured by measuring the length of sides of a triangle. Not that thinking about it wasn't interesting.
My original idea involved producing a distance proportional to the square of a given distance, and then squaring that derived distance, and so on to produce a distance proportional to the original distance taken to a power of 2^n. Then measure that final distance to within 20%, and you will know the original distance far more accurately. This doesn't work because taking the square of the original distance in the method I could think of necessarily involves introducing a small line segment of "known" length parallel to the distance to be squared. The device of taking the small line segment to be 1/10 of the distance to be squared do not work because they do not result in squaring the length. The inaccuracy in the small "known" line segment becomes amplified with successive squarings so that no advantage results.
The method I had in mind later on had to do with measuring densities of randomly plotted points within a triangle to find equivalent lengths. There's no need to go into exactly what the idea was, but the problem was similar--no way around having a small initial segment of known length. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8429268598556519, "perplexity": 466.7025033243562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747665.82/warc/CC-MAIN-20181121092625-20181121114625-00257.warc.gz"} |
https://www.physicsforums.com/threads/problem-regarding-friction.676600/ | # Homework Help: Problem regarding friction
1. Mar 6, 2013
### sankalpmittal
1. The problem statement, all variables and given/known data
See the image: http://postimage.org/image/4ukoszng1/
Look at question 28 there.
If you feel uncomfortable, you can click on the image to increase its size.
Note, T:Tension in string
a: acceleration in big block
R1: reaction force by small block on big block..
2. Relevant equations
f ≤ fs = µsN
fk = µkN
3. The attempt at a solution
This problem is different from what I have done. Now, I think that the acceleration of small block down must be twice the acceleration of big block, because small block is connected by one segment while the big block by two.
Considering FBD of big block:
T-µ2Mg-R1=Ma
Considering FBD of small block:
mg-T-µ1R1=2ma
But, there are two equations and three unknowns. How can I solve for "a" ?
Last edited: Mar 6, 2013
2. Mar 6, 2013
### MostlyHarmless
The acceleration of the two blocks will be the same because they are part of the same system. If one block moves with the same direction and acceleration.
EDIT: also since your problem gives no numbers, I imagine they answer your looking for isnt going be numerical, so the number of unknowns isn't relevant
Last edited: Mar 6, 2013
3. Mar 6, 2013
### sankalpmittal
No, I think you are not correct. Small block is connected by one segment. When big block, connected by 2 segments will move say x/2 units right, each segment will shorten by x/2. But the small block is connected by one segment, so that single segment has to lengthen by x units downwards. So small block will move downward by x units. Now differentiating distance, twice we get,
asmall = d^2x/dt^2 = a
abig = 0.5 d^2x/dt^2 = a/2
Q.E.D..
By distance, I meant of course displacement.
Anyone ? Any hints?
Edit:
Answer given is in terms of µ1, µ2, m and M..
4. Mar 6, 2013
### SammyS
Staff Emeritus
The vertical component of the acceleration of the small block is equal to (the negative of) twice the horizontal component of the acceleration of the large block. The large block's motion being purely horizontal.
The horizontal component of the acceleration of the small block is equal the horizontal component of the acceleration of the large block.
Also:
You're missing some forces exerted on the big block.
The Normal force on the big block is not Mg. To calculate this, you should have an equation involving the vertical components of the forces on the big block.
I inserted the word "twice" above.
Last edited: Mar 6, 2013
5. Mar 6, 2013
### MostlyHarmless
I'm not sure what you are doing to justify the accelerations of the blocks being different, but if they are connected, the acceleration is going to be the same for both masses. Take Newton's second law. We have a system of masses Fnet=(M+m)a. Also, just think about holding a stick with one hand on either end of the stick. If you move one hand, the other will move in exactly the same way.
6. Mar 6, 2013
### sankalpmittal
Huh ? I do not understand. The directions of forces perpendicular to each other, do not have effect on each other. So how can downward motion of small block be negative of...
Oh !! Sorry !! I understand. So you are taking right as positive and downward as negative.
But hold on ! I do not agree.The vertical component of the acceleration of the small block should be twice the (the negative of) the horizontal component of the acceleration of the large block. This is because small block is connected by one segment but large block by two. See the reasoning in my previous post.
Uhhh, Yes.
So the friction µ1R1 (kinetic) acts on the big block in upward direction also ? Other forces also act? I cannot think of one?
Edit:
@Jesse H.
That is not what I say. That is what my textbook say.
Well,
Suppose one block is connected by a single segment from pulley. Other block of same mass is connected by the two segments, tension in each segment being T. Will the acceleration(downward) of two blocks be same ? Think it yourself.
Last edited: Mar 6, 2013
7. Mar 6, 2013
### SammyS
Staff Emeritus
Right. It should be TWICE. After putting in all those other adjectives, I left the word "twice" out.
I edited that post.
Last edited: Mar 6, 2013
8. Mar 6, 2013
### Maiq
I'm not 100% sure but i would think that the tension for the M block would be 2T. Also since the reaction force between the two blocks is the only force acting on the m block in the horizontal direction, you could say R1=ma.
9. Mar 6, 2013
### MostlyHarmless
Sorry, taking a second look at the diagram. I see where I was mistaken. Apologies.
10. Mar 6, 2013
### sankalpmittal
Why you say,R1=ma ?
@Jesse H.
You are welcome.
Ok thanks.
Now let me write my solution comprehensively so that you can freely look and analyze it.
FBD for small block:
Forces acting are:
1.Downward force mg
2.Upward force tension T
3.Friction upward: µ1R1
4.Pseudo force of Ma on it towards left.
5.Reaction force R1 by big block on it towards right for its horizontal equilibrium.
So, my equation becomes:
R1=Ma ..(i)
mg-T- µ1R1=2ma
On using (i),
mg-T- µ1Ma=2ma .... (A)
FBD of big block:
Forces acting on it are:
1.Downward force Mg
2.Tension T due right
3.Friction by ground of µ2R due left
4.Reaction force or third law pair of friction, exerted by small block downward:µ1R1
5.Reaction force R by floor on big block
So accounting for vertical equilibrium of big block,
R=Mg+µ1R1=Mg+µ1Ma ....(ii)
T-µ2R=Ma
On using (ii)
T-µ2Mg-µ2µ1Ma=Ma ...(B)
On adding A and B, Tension gets cancelled out, and I get,
a=(mg-µ2Mg)/(µ2µ1M+µ1M+2m+M)
This is not the correct answer as per the book. Where did I go wrong ?
11. Mar 6, 2013
### Maiq
Why do you say R1=Ma? The net force acting an object is equal to its mass times its acceleration. R1 is the only force acting on block m so R1 would be equal to block m times the acceleration a.
12. Mar 6, 2013
### SammyS
Staff Emeritus
How did you get that Pseudo force?
Looking at the components of the acceleration of the small block, the block travels in a straight line, the slope of which is -2.
It looks like you have let a = the acceleration of the big block. I would do that too.
I would say that R1 = ma .
For each FBD , you should have one equation for horizontal components and one for vertical components.
13. Mar 6, 2013
### sankalpmittal
Yes. Thank you. I get it now. Now please see my solution and tell where I went wrong. Addresses to Maiq's below post.
Dang it !! Again thanks. I forgot to include R1 in horizontal motion of big block.
Ok, sorry.
Again with this correction, and maiq's correction, I will continue it tomorrow. For now, I am off to sleep.
Thinking back. Why did you say tension on big block be 2T ? Again, only one segment is connected directly to big block. Though its acceleration will be halved of smaller's downward one, but force will be still T.
Last edited: Mar 6, 2013
14. Mar 6, 2013
### Maiq
Also R1 affects block M as well, because of Newton's third law.
Last edited: Mar 6, 2013
15. Mar 6, 2013
### Maiq
After thinking about it i realized the last part of my last post was incorrect. It would actually equal -R1 if all other forces equaled 0. Sorry I didn't think of the direction of R1 when I thought of that explanation. Either way I believe everything else I said is true.
16. Mar 6, 2013
### sankalpmittal
Ok, so after all the correction and including other forces:
FBD for small block:
Forces acting are:
1.Downward force mg
2.Upward force tension T
3.Friction upward: µ1R1
4.Pseudo force of ma on it towards left.
5.Reaction force R1 by big block on it towards right for its horizontal equilibrium.
So, my equation becomes:
R1=ma ..(i)
mg-T- µ1R1=2ma
On using (i),
mg-T- µ1ma=2ma .... (A)
FBD of big block:
Forces acting on it are:
1.Downward force Mg
2.Tension T due right
3.Friction by ground of µ2R due left
4.Reaction force or third law pair of friction, exerted by small block downward:µ1R1
5.Reaction force R by floor on big block
6.Reaction force R1 on it due left by small block as pointed out by maiq.
So accounting for vertical equilibrium of big block,
R=Mg+µ1R1=Mg+µ1ma ....(ii)
T-µ2R-R1=Ma
On using (ii)
T-µ2Mg-µ2µ1ma-ma=Ma ...(B)
On adding A and B, Tension gets cancelled out, and I get,
a=(mg-µ2Mg)/(3m+M+µ1µ2m+µ1m)
Now also the answer is incorrect!
Where did I go wrong? What forces I missed in the FBDs ?
Last edited: Mar 6, 2013
17. Mar 6, 2013
### SammyS
Staff Emeritus
The above looks good.
The string also passes over the pulley, which is attached to the bog block.
Therefore, the string has two strands pulling to the right on the big block: That's 2T . AND one strand pulling down on the big block.
So the stuff below is incorrect.
18. Mar 6, 2013
### haruspex
I agree down to there, but I don't get the result below:
I get (mg-Ma-Mgµ2)/[m(3+µ1µ2+µ1)]
19. Mar 6, 2013
### sankalpmittal
Thanks SammyS !! I got the correct answer !!
And of course thanks to maiq!!
And thanks Jesse H. for at least trying....
@ haruspex:
Yes, when you isolate "a", you will get the wrong answer as I was. But on following SammyS's hint, I got the correct answer. Thanks SammyS, once again. And thanks haruspex for replying. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8937286734580994, "perplexity": 2198.5909813983653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594790.48/warc/CC-MAIN-20180723012644-20180723032644-00098.warc.gz"} |
https://en.wikipedia.org/wiki/Butterfly_diagram | # Butterfly diagram
Signal-flow graph connecting the inputs x (left) to the outputs y that depend on them (right) for a "butterfly" step of a radix-2 Cooley–Tukey FFT. This diagram resembles a butterfly (as in the Morpho butterfly shown for comparison), hence the name.
In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa (breaking a larger DFT up into subtransforms). The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described below.[1] The earliest occurrence in print of the term is thought to be in a 1969 MIT technical report.[2][3] The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states.
Most commonly, the term "butterfly" appears in the context of the Cooley–Tukey FFT algorithm, which recursively breaks down a DFT of composite size n = rm into r smaller transforms of size m where r is the "radix" of the transform. These smaller DFTs are then combined via size-r butterflies, which themselves are DFTs of size r (performed m times on corresponding outputs of the sub-transforms) pre-multiplied by roots of unity (known as twiddle factors). (This is the "decimation in time" case; one can also perform the steps in reverse, known as "decimation in frequency", where the butterflies come first and are post-multiplied by twiddle factors. See also the Cooley–Tukey FFT article.)
In the case of the radix-2 Cooley–Tukey algorithm, the butterfly is simply a DFT of size 2 that takes two inputs (x0, x1) (corresponding outputs of the two sub-transforms) and gives two outputs (y0, y1) by the formula (not including twiddle factors):
$y_0 = x_0 + x_1 \,$
$y_1 = x_0 - x_1. \,$
If one draws the data-flow diagram for this pair of operations, the (x0, x1) to (y0, y1) lines cross and resemble the wings of a butterfly, hence the name (see also the illustration at right).
A decimation-in-time radix-2 FFT breaks a length-N DFT into two length-N/2 DFTs followed by a combining stage consisting of many butterfly operations.
More specifically, a decimation-in-time FFT algorithm on n = 2^p inputs with respect to a primitive n-th root of unity $\omega_n = e^{-\frac{2\pi i}{n}}$, whose powers $\omega^k_n = e^{-\frac{2\pi i k}{n}}$ are the twiddle factors, relies on O(n log n) butterflies of the form:
$y_0 = x_0 + x_1 \omega^k_n \,$
$y_1 = x_0 - x_1 \omega^k_n, \,$
where k is an integer depending on the part of the transform being computed. Whereas the corresponding inverse transform can mathematically be performed by replacing ω with ω−1 (and possibly multiplying by an overall scale factor, depending on the normalization convention), one may also directly invert the butterflies:
$x_0 = \frac{1}{2} (y_0 + y_1) \,$
$x_1 = \frac{\omega^{-k}_n}{2} (y_0 - y_1), \,$
corresponding to a decimation-in-frequency FFT algorithm.
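As a concrete illustration (a sketch, not taken from any particular FFT library; the function names are made up), the decimation-in-time butterfly and its direct inverse can be coded as:
import cmath, math

def dit_butterfly(x0, x1, k, n):
    # Combine two sub-DFT outputs with the twiddle factor w = exp(-2*pi*i*k/n).
    w = cmath.exp(-2j * math.pi * k / n)
    t = w * x1
    return x0 + t, x0 - t          # (y0, y1)

def inverse_butterfly(y0, y1, k, n):
    # Invert the butterfly directly, as in the formulas above.
    w_inv = cmath.exp(2j * math.pi * k / n)
    return 0.5 * (y0 + y1), 0.5 * w_inv * (y0 - y1)   # (x0, x1)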
## Other uses
The butterfly can also be used to improve the randomness of large arrays of partially random numbers, by bringing every 32 or 64 bit word into causal contact with every other word through a desired hashing algorithm, so that a change in any one bit has the possibility of changing all the bits in the large array.[4] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8293667435646057, "perplexity": 1289.950850426049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165484.60/warc/CC-MAIN-20160205193925-00219-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://science.sciencemag.org/content/161/3848/1338.abstract | Reports
Potassium-Argon Ages and Spreading Rates on the Mid-Atlantic Ridge at 45° North
Science 27 Sep 1968:
Vol. 161, Issue 3848, pp. 1338-1339
DOI: 10.1126/science.161.3848.1338
Abstract
Potassium-argon dates obtained from extrusives collected on a traverse across the Mid-Atlantic Ridge at 45°N are consistent with the hypothesis of ocean-floor spreading. The dates suggest a spreading rate in the range of 2.6 to 3.2 centimeters per year near the axis of the ridge; the rate agrees with that computed from fission-track dating of basalt glasses. Additional data for a basalt collected 62 kilometers west of the axis gives a spreading rate of 0.8 centimeter per year, which is similar to the rate inferred from magnetic anomaly patterns in the area. Reasons for the difference in calculated spreading rates are discussed.
https://www.talkstats.com/tags/matrix-algebra/ | # matrix algebra
1. ### Trouble understanding univariate logistic regression using categorical data
Hello, I have a cancer dataset of 98 observations. Cancer detection rate was determined for 2 detection modalities (C and S). One of the independent variables of interest was a 3 tiered scoring system (possible scores: 3, 4, and 5). On univariate logistic regression, the score was...
2. ### Probability limit of ridge estimator mu hat
I have been working on the following question for a while but I got stuck at a certain point. I have done parts a and b and I found = (I_T + D'D)y where I_T is the identity matrix with dimension T, lambda is the tuning parameter and D is the second order difference matrix which is a tridiagonal and...
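(The estimator in this preview is garbled; for a second-order difference penalty it presumably has the form $\hat\mu = (I_T + \lambda D'D)^{-1}y$. A rough R sketch of that object, with all variable names ours rather than from the thread:)

    T_ = 100                              # T_ rather than T, to avoid masking TRUE
    y = cumsum(rnorm(T_)) + rnorm(T_)     # some noisy series to smooth
    D = diff(diag(T_), differences = 2)   # (T-2) x T second-order difference matrix
    lambda = 100                          # tuning parameter
    mu_hat = solve(diag(T_) + lambda * crossprod(D), y)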
3. ### Correct Cross Validation. How to calculate the projected R Squared or Residual Sum Sq
Hi, I have read into the subject of finding good estimators to determine the goodness of fit when the regression on a training set is projected on a test set (unseen data). I have found a lot of scientific papers but I get completely lost in terminology and very complex equations I do not...
4. ### Showing Left Side to Right Side.
Let \mathbf x be a (p\times 1) vector, \mathbf\mu_1 a (p\times 1) vector, \mathbf\mu_2 a (p\times 1) vector, and \Sigma a (p\times p) matrix. Now I have to show -\frac{1}{2}(\mathbf x-\mathbf\mu_1)'\Sigma^{-1}(\mathbf x-\mathbf\mu_1)+\frac{1}{2}(\mathbf...
5. ### Why does the determinant always equal zero for a matrix of consecutive numbers?
Hi. Why does the determinant always equal zero for a matrix of consecutive numbers? This applies whether the consecutive numbers are in the matrix starting from smallest to largest, or vice versa. It also applies irrespective of whether they are entered row then column or vice versa, which...
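(A quick numerical illustration of this last question, not part of the thread: in such a matrix consecutive rows differ by a constant vector, so the rows are linearly dependent and the determinant vanishes for any size of at least 3.)

    M = matrix(1:9, nrow = 3)   # consecutive numbers, filled column by column
    det(M)                      # 0 up to floating-point error
    qr(M)$rank                  # 2: row3 - row2 = row2 - row1, a linear dependence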
https://www.esaral.com/q/the-strength-of-an-aqueous-naoh-solution-is-most-accurately-determined-by-titrating-51549 | # The strength of an aqueous NaOH solution is most accurately determined by titrating:
Question:
The strength of an aqueous $\mathrm{NaOH}$ solution is most accurately determined by titrating: (Note: consider that an appropriate indicator is used)
1. Aq. $\mathrm{NaOH}$ in a pipette and aqueous oxalic acid in a burette
2. Aq. $\mathrm{NaOH}$ in a burette and aqueous oxalic acid in a conical flask
3. Aq. $\mathrm{NaOH}$ in a burette and concentrated $\mathrm{H}_{2} \mathrm{SO}_{4}$ in a conical flask
4. Aq. $\mathrm{NaOH}$ in a volumetric flask and concentrated $\mathrm{H}_{2} \mathrm{SO}_{4}$ in a conical flask
Correct Option: , 2
Solution:
Oxalic acid is a primary standard, whereas $\mathrm{H}_{2} \mathrm{SO}_{4}$ is only a secondary standard whose concentration is not accurately known. The accurate determination therefore rests on titrating the NaOH against the primary standard (oxalic acid), not on which vessel holds which solution: it does not matter whether the oxalic acid is taken in the burette or in the conical flask, which is why option 2 is the appropriate choice.
http://crypto.stackexchange.com/tags/rabin-cryptosystem/hot | # Tag Info
## Hot answers tagged rabin-cryptosystem
7
A is acting as a square-root oracle in that protocol. We can use that oracle to factor $n$ and break the scheme. Suppose you are an attacker that wants to impersonate A. You:
1. Pick a random $m$;
2. Send $m^2$ to A, who returns a square root $m_1$ of it;
3. Compute $p = \gcd(m_1 - m, n)$, thus factoring $n$.
This works with probability $1/2$ for each attempt.
7
Unless they did something wrong (either accidentally, or deliberately to make it easy), there is no practical way. It's well known that, if you're able to compute the square root of an arbitrary number modulo a composite, you can efficiently factor that composite. And, solving $e=4$ is equivalent to solving the RSA problem twice with $e=2$. Now, it's ...
5
Rabin-Williams signature verification with 3072 bit keys is much faster than EdDSA signature verification of comparable security (when done in software). How much depends on care of coding, hardware, EdDSA parameters. Two data points: in the eBATS benchmarks for a skylake CPU, ronald3072 signature verification (RSA with $e=3$ as an OpenSSL wrapper, by ...
5
Adding some more information to fkraiem's answer: The encryption in the Rabin cryptosystem is basically textbook RSA with an exponent of $2$. 1) Neither p nor q are equal to 2. This means they are odd. The product of (p−1)(q−1) would be even, i.e. not coprime with 2. Well, yes. That is one of the basic problems in Rabin's cryptosystem. If we want that …
5
The modulus 77 leads to a short period.
5
Since n = pq, then when an integer modulo n is a square, then it has (in general) four square roots. This can be seen by reasoning modulo p and modulo q: a square has two roots modulo p, and two roots modulo q, which makes for four combinations. More precisely, modulo a prime p, if y has a square root x, it also has another square root which is -x. The same …
4
Because r is not guaranteed to be a Quadratic Residue, so for random r there wouldn't be $m_1$ such that $r \equiv m_1^2 \pmod n$, therefore authentication will be impossible in this case.
4
Nightcracker's method works fine. There also are deterministic solutions to select the correct ciphertext that require very few additional bits. One very useful ingredient is the use of the Jacobi symbol. For example, you might look at The Rabin cryptosystem revisited by M. Elia, M. Piva and D. Schipani (http://arxiv.org/pdf/1108.5935.pdf).
4
This is a solution that should work with very high probability, but possibly can fail. As a bonus it also resists tampering with the ciphertext. As encrypter, generate a random key (say a 128-bit key for AES128-CTR) and encrypt the plaintext using that key. Then compute a MAC over the ciphertext (for example using HMAC-SHA1) using the same key. Finally you …
3
As a first step to compute the four square roots of $c \pmod N$ one can compute the two square roots $\bmod\ p$ and the two square roots $\bmod\ q$ and then, using the Chinese Remainder Theorem, combine them into the four square roots $\bmod\ N$ where $N = p \cdot q$. Let's start computing the square root of the ciphertext $c \bmod p$. Usually $p \equiv q \equiv 3 \pmod 4$. …
3
At first I want to cite Lindell and Katz's book: A "plain Rabin" encryption scheme, constructed in a manner analogous to plain RSA encryption, is vulnerable to a chosen-ciphertext attack that enables an adversary to learn the entire private key. Although plain RSA is not CCA-secure either, known chosen-ciphertext attacks on plain RSA are less damaging …
3
After another 5 minutes of thought, I think I solved my own problem. Choose an arbitrary message m, compute c = m^2 % n and submit c and n to the Rabin oracle. If you repeat this enough times (by which I mean probably within 2 iterations) you will choose m in such a way that the oracle gives you ± the other root, which you can then use to factor n.
3
Here's how the attack works:
1. Select a random value $y$
2. Compute $a = y^2 \bmod n$
3. Ask for the signature of $a$, that is $x$ with $x^2 = a$
4. If $x \ne y$ and $x + y \ne n$, then $\gcd(n, x+y)$ is a proper factor of $n$
The last step will succeed with probability $\approx 0.5$. You can make it probability 1 if you select a $y$ with Jacobi symbol $-1$.
2
In short words: when you compute things modulo n = pq, you are really computing things simultaneously modulo p and modulo q. That's the gist of the Chinese Remainder Theorem. So to prove that $a = b \pmod n$, you just have to prove that $a = b \pmod p$ and $a = b \pmod q$. Modulo p, for any x that is not a multiple of p, $x^{p-1} = 1 \pmod p$ (…
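(As a toy illustration of the square-root computation described in one of the answers above — square roots mod p and mod q for p ≡ q ≡ 3 (mod 4), combined by CRT — here is a rough R sketch with tiny primes; real Rabin moduli of course need big-integer arithmetic, and every name below is ours, not from the answers.)

    modexp = function(a, e, m){          # square-and-multiply, enough for toy sizes
      r = 1; a = a %% m
      while(e > 0){
        if(e %% 2 == 1) r = (r * a) %% m
        a = (a * a) %% m
        e = e %/% 2
      }
      r
    }
    p = 7; q = 11; n = p * q             # both primes are 3 mod 4
    m = 32
    ct = (m * m) %% n                    # Rabin "encryption": ct = m^2 mod n
    xp = modexp(ct, (p + 1) / 4, p)      # square root of ct mod p
    xq = modexp(ct, (q + 1) / 4, q)      # square root of ct mod q
    crt = function(a, b)                 # combine a residue mod p with a residue mod q
      (a * q * modexp(q, p - 2, p) + b * p * modexp(p, q - 2, q)) %% n
    sort(outer(c(xp, p - xp), c(xq, q - xq), crt))  # the four roots; contains m = 32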
2
By following the above advice (taking the equations for r and s given in the article and writing r-s) you will notice that q is a divisor, therefore GCD(|r-s|,n) cannot be 1. There are only two options left since n is only divisible by q and p.
2
Both Rabin and RSA rely on padding for security. Proper padding prevents chosen-ciphertext attacks since modified ciphertext has a negligible chance of producing valid padding. If you claim Rabin (or RSA) is vulnerable to CCA attacks, you should limit that to the unpadded/textbook variants. Most deployed implementations use padding, though some paddings are …
2
RSA with e = 2 is Rabin; it works a bit differently and is slightly more mathematically involved, but it is a valid cryptosystem.
2
The equation $a = x^2 \bmod N$ has at most 4 solutions $x$. There are solutions if $a$ is a square modulo both $p$ and $q$. This can be checked by computing the Legendre symbol of $a$ modulo $p$ and modulo $q$. Assuming that the two Legendre symbols are $+1$, when $p \equiv 3 \pmod 4$, a square root of $a$ modulo $p$ is given by $x_p = a^{(p+1)/4}$ …
1
Consider two numbers $a$ and $b$ that square to the same value modulo $n$ and don't just differ by the sign. $$a^2 \equiv b^2 \pmod n$$ $$(a-b)(a+b) \equiv 0 \pmod n$$ Neither of the factors on the left is 0 (or equivalently a multiple of $n$), thus each of them must contain one of the prime factors of $n$. Thus you can use $\operatorname{GCD}(a-b, n)$ …
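(And a tiny numerical check of that last answer — two square roots of the same value that do not merely differ in sign reveal the factors — again in R, with toy numbers of our choosing.)

    gcd = function(a, b) if(b == 0) a else gcd(b, a %% b)
    n = 77; a = 9; b = 2            # 9^2 = 81 = 4 (mod 77) and 2^2 = 4 (mod 77)
    (a^2 - b^2) %% n                # 0: a and b square to the same value mod n
    gcd(a - b, n)                   # 7, a nontrivial factor of n
    gcd(a + b, n)                   # 11, the other factor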
1
Let $\mathcal A$ be the hypothetical algorithm in the question, with input $(n,q)$, output $r$, such that $r^2\equiv q\pmod n$ for a $1/(\log(n))$ fraction of the quadratic residues $q\pmod n$, running in random polynomial time w.r.t. $\log(n)$, restricting to $n$ product of two large distinct primes. Let $\mathcal F$ be the following algorithm with input …
1
That practice of replacing the result of $y=x^d\bmod N$ (or $y=x^e\bmod N$) by $\hat y=\min(y,N-y)$ is also in ISO/IEC 9796-2:2010 (paywalled) and ancestors; I first met that in [INCITS/ANSI]/ISO/IEC 9796:1991, also given in the Handbook of Applied Cryptography, see in particular note 11.36. ISO/IEC 9796 was a broken and now withdrawn …
1
An older copy of P1363 Public Key Cryptography was used below. It may (or may not) reflect the current state of affairs. It also uses Bernstein's RSA signatures and Rabin–Williams signatures: the state of the art. Do tweaked roots violate P1363? What I might be really asking is, does an exponent of 2 run afoul of P1363, but I'm not sure at the moment. …
1
Blinding is usually applied on the whole modulus, and I see no incentive to do otherwise; random is cheap. In RSA, blinding is not always applied as described in the question and article, for efficiency and security reasons: the technique described requires computing $r^d\bmod N$, which is just as costly as the $m^d\bmod N$ operation being protected, and …
1
Your question is related to the well known RABIN Cryptosystem which is similar to RSA, except the public exponent is 2. As fgrieu mentioned, decipherment can be easily processed by the CRT algorithm, but some precautions must beforehand be observed during the key generation. In fact the solution of the equation gives 4 roots, which means that the solution ...
1
The tricky point is that modulo a Blum integer (the product n = pq of two primes p and q that are equal to 3 modulo 4), in general, a quadratic residue (a value that is a square of something) has four square roots, not two. Consider the "normal" Rabin algorithm. Message m is encrypted into c = m^2 mod n. To decrypt, you work modulo p and ...
http://insti.physics.sunysb.edu/latex-help/ltx-295.html | # picture
\begin{picture}(width,height)(x offset,y offset)
.
picture commands
.
\end{picture}
The picture environment allows you to create just about any kind of picture you want containing text, lines, arrows and circles. You tell LaTeX where to put things in the picture by specifying their coordinates. A coordinate is a number that may have a decimal point and a minus sign - a number like 5, 2.3 or -3.1416. A coordinate specifies a length in multiples of the unit length \unitlength, so if \unitlength has been set to 1cm, then the coordinate 2.54 specifies a length of 2.54 centimeters. You can change the value of \unitlength anywhere you want, using the \setlength command, but strange things will happen if you try changing it inside the picture environment.
A position is a pair of coordinates, such as (2.4,-5), specifying the point with x-coordinate 2.4 and y-coordinate -5. Coordinates are specified in the usual way with respect to an origin, which is normally at the lower-left corner of the picture. Note that when a position appears as an argument, it is not enclosed in braces; the parentheses serve to delimit the argument.
The picture environment has one mandatory argument, which is a position. It specifies the size of the picture. The environment produces a rectangular box with width and height determined by this argument's x- and y-coordinates.
The picture environment also has an optional position argument, following the size argument, that can change the origin. (Unlike ordinary optional arguments, this argument is not contained in square brackets.) The optional argument gives the coordinates of the point at the lower-left corner of the picture (thereby determining the origin). For example, if \unitlength has been set to 1mm, the command
\begin{picture}(100,200)(10,20)
produces a picture of width 100 millimeters and height 200 millimeters, whose lower-left corner is the point (10,20) and whose upper-right corner is therefore the point (110,220). When you first draw a picture, you will omit the optional argument, leaving the origin at the lower-left corner. If you then want to modify your picture by shifting everything, you just add the appropriate optional argument.
The environment's mandatory argument determines the nominal size of the picture. This need bear no relation to how large the picture really is; LaTeX will happily allow you to put things outside the picture, or even off the page. The picture's nominal size is used by TeX in determining how much room to leave for it.
Everything that appears in a picture is drawn by the \put command. The command
\put (11.3,-.3){ ... }
puts the object specified by "..." in the picture, with its reference point at coordinates (11.3,-.3). The reference points for various objects will be described below.
The \put command creates an LR box. You can put anything in the text argument of the \put command that you'd put into the argument of an \mbox and related commands. When you do this, the reference point will be the lower left corner of the box.
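As a small self-contained illustration (not part of the original help text), the following picture combines the pieces described above; it assumes the standard picture-mode commands \framebox, \line, \vector and \circle:
\setlength{\unitlength}{1mm}
\begin{picture}(60,40)
\put(0,0){\framebox(60,40){}}   % a frame showing the picture's nominal size
\put(5,5){\line(1,0){50}}       % horizontal line of length 50
\put(5,5){\vector(1,1){30}}     % arrow of slope (1,1)
\put(45,30){\circle{10}}        % circle of diameter 10 centred at (45,30)
\put(10,32){A label}            % text whose reference point is (10,32)
\end{picture}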
https://freakonometrics.hypotheses.org/tag/optimization | # Classification from scratch, linear discrimination 8/8
Eighth post of our series on classification from scratch. The latest one was on the SVM, and today, I want to get back on very old stuff, with here also a linear separation of the space, using Fisher's linear discriminant analysis.
## Bayes (naive) classifier
Consider the following naive classification rule$$m^\star(\mathbf{x})=\text{argmax}_y\{\mathbb{P}[Y=y\vert\mathbf{X}=\mathbf{x}]\}$$or$$m^\star(\mathbf{x})=\text{argmax}_y\left\{\frac{\mathbb{P}[\mathbf{X}=\mathbf{x}\vert Y=y]}{\mathbb{P}[\mathbf{X}=\mathbf{x}]}\right\}$$(where $\mathbb{P}[\mathbf{X}=\mathbf{x}]$ is the density in the continuous case).
In the case where $y$ takes two values, that will be standard $\{0,1\}$ here, one can rewrite the latter as$$m^\star(\mathbf{x})=\begin{cases}1\text{ if }\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})>\displaystyle{\frac{1}{2}}\\0\text{ otherwise}\end{cases}$$and the set$$\mathcal{D}_S =\left\{\mathbf{x},\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})=\frac{1}{2}\right\}$$is called the decision boundary.
Assume that$$\mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}_0)$$and$$\mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}_1)$$then explicit expressions can be derived.$$m^\star(\mathbf{x})=\begin{cases}1\text{ if }r_1^2< r_0^2+2\displaystyle{\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}+\log\frac{\vert\mathbf{\Sigma}_0\vert}{\vert\mathbf{\Sigma}_1\vert}}\\0\text{ otherwise}\end{cases}$$where $r_y^2$ is the Mahalanobis distance, $$r_y^2 = [\mathbf{X}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[\mathbf{X}-\mathbf{\mu}_y]$$
Let $\delta_y$ be defined as$$\delta_y(\mathbf{x})=-\frac{1}{2}\log\vert\mathbf{\Sigma}_y\vert-\frac{1}{2}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]+\log\mathbb{P}(Y=y)$$the decision boundary of this classifier is $$\{\mathbf{x}\text{ such that }\delta_0(\mathbf{x})=\delta_1(\mathbf{x})\}$$which is quadratic in ${\color{blue}{\mathbf{x}}}$. This is the quadratic discriminant analysis. This can be visualized below.
The decision boundary is here
But that can’t be the linear discriminant analysis, right? I mean, the frontier is not linear… Actually, in Fisher’s seminal paper, it was assumed that $\mathbf{\Sigma}_0=\mathbf{\Sigma}_1$.
In that case, actually, $$\delta_y(\mathbf{x})={\color{blue}{\mathbf{x}}}^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y-\frac{1}{2}\mathbf{\mu}_y^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y+\log\mathbb{P}(Y=y)$$ and the decision frontier is now linear in ${\color{blue}{\mathbf{x}}}$. This is the linear discriminant analysis. This can be visualized below
Here the two samples have the same variance matrix and the frontier is
## Link with the logistic regression
Assume as previously that$$\mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma})$$and$$\mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma})$$then$$\log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}$$is equal to $$\mathbf{x}^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]-\frac{1}{2}[\mathbf{\mu}_1-\mathbf{\mu}_0]^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1+\mathbf{\mu}_0]+\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}$$which is linear in $\mathbf{x}$$$\log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}=\mathbf{x}^{\text{T}}\mathbf{\beta}$$Hence, when each group has a Gaussian distribution with identical variance matrix, then LDA and the logistic regression lead to the same classification rule.
Observe furthermore that the slope is proportional to $\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]$, as stated in Fisher's article. But to obtain such a relationship, he observed that the ratio of between and within variances (in the two groups) was$$\frac{\text{variance between}}{\text{variance within}}=\frac{[\mathbf{\omega}\mathbf{\mu}_1-\mathbf{\omega}\mathbf{\mu}_0]^2}{\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_1\mathbf{\omega}+\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_0\mathbf{\omega}}$$which is maximal when $\mathbf{\omega}$ is proportional to $\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]$, when $\mathbf{\Sigma}_0=\mathbf{\Sigma}_1$.
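As a quick numerical check of that equivalence (not in the original post), one can simulate two Gaussian samples with a common variance matrix and compare the slopes of a logistic regression with $\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]$; the means and variance below are arbitrary choices of ours:

    library(MASS)
    set.seed(1)
    n = 5000
    mu0 = c(0, 0); mu1 = c(2, -1)
    Sigma = matrix(c(1, .5, .5, 1), 2, 2)
    Xs = rbind(mvrnorm(n, mu0, Sigma), mvrnorm(n, mu1, Sigma))
    df1 = data.frame(x1 = Xs[,1], x2 = Xs[,2], y = rep(0:1, each = n))
    coef(glm(y ~ x1 + x2, data = df1, family = binomial))[2:3]  # estimated slopes
    solve(Sigma) %*% (mu1 - mu0)    # theoretical slopes, here (10/3, -8/3)

Up to sampling error the two vectors agree (only the slopes are compared; the intercept also involves the constant term above).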
## Homebrew linear discriminant analysis
To compute vector $\mathbf{\omega}$
m0 = apply(myocarde[myocarde$PRONO=="0",1:7],2,mean)
m1 = apply(myocarde[myocarde$PRONO=="1",1:7],2,mean)
Sigma = var(myocarde[,1:7])
omega = solve(Sigma)%*%(m1-m0)
omega
                 [,1]
FRCAR -0.012909708542
INCAR  1.088582058796
INSYS -0.019390084344
PRDIA -0.025817110020
PAPUL  0.020441287970
PVENT -0.038298291091
REPUL -0.001371677757
For the constant – in the equation $\omega^T\mathbf{x}+b=0$ – if we have equiprobable probabilities, use
b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2
## Application (on the small dataset)
In order to visualize what’s going on, consider the small dataset, with only two covariates,
x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
m0 = apply(df[df$y=="0",1:2],2,mean)
m1 = apply(df[df$y=="1",1:2],2,mean)
Sigma = var(df[,1:2])
omega = solve(Sigma)%*%(m1-m0)
omega
           [,1]
x1 -2.640613174
x2  4.858705676
Using R regular function, we get
library(MASS)
fit_lda = lda(y ~x1+x2 , data=df)
fit_lda
Coefficients of linear discriminants:
            LD1
x1 -2.588389554
x2  4.762614663
which is the same coefficient as the one we got with our own code. For the constant, use
b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2
If we plot it, we get the red straight line
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")])
abline(a=b/omega[2],b=-omega[1]/omega[2],col="red")
As we can see (with the blue points), our red line intersects the middle of the segment of the two barycenters
points(m0["x1"],m0["x2"],pch=4)
points(m1["x1"],m1["x2"],pch=4)
segments(m0["x1"],m0["x2"],m1["x1"],m1["x2"],col="blue")
points(.5*m0["x1"]+.5*m1["x1"],.5*m0["x2"]+.5*m1["x2"],col="blue",pch=19)
Of course, we can also use the R function
predlda = function(x,y) predict(fit_lda, data.frame(x1=x,x2=y))$class==1
vv=outer(vu,vu,predlda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5)
One can also consider the quadratic discriminant analysis since it might be difficult to argue that $\mathbf{\Sigma}_0=\mathbf{\Sigma}_1$
fit_qda = qda(y ~x1+x2 , data=df)
The separation curve is here
plot(df$x1,df$x2,pch=19, col=c("blue","red")[1+(df$y=="1")])
predqda=function(x,y) predict(fit_qda, data.frame(x1=x,x2=y))$class==1
vv=outer(vu,vu,predqda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5)
# Classification from scratch, SVM 7/8
Seventh post of our series on classification from scratch. The latest one was on the neural nets, and today, we will discuss SVM, support vector machines.
## A formal introduction
Here $y$ takes values in $\{-1,+1\}$. Our model will be $$m(\mathbf{x})=\text{sign}[\mathbf{\omega}^T\mathbf{x}+b]$$ Thus, the space is divided by a (linear) border$$\Delta:\lbrace\mathbf{x}\in\mathbb{R}^p:\mathbf{\omega}^T\mathbf{x}+b=0\rbrace$$
The distance from point $\mathbf{x}_i$ to $\Delta$ is $$d(\mathbf{x}_i,\Delta)=\frac{\mathbf{\omega}^T\mathbf{x}_i+b}{\|\mathbf{\omega}\|}$$If the space is linearly separable, the problem is ill posed (there is an infinite number of solutions). So consider
$$\max_{\mathbf{\omega},b}\left\lbrace\min_{i=1,\cdots,n}\left\lbrace\text{distance}(\mathbf{x}_i,\Delta)\right\rbrace\right\rbrace$$
The strategy is to maximize the margin. One can prove that we want to solve $$\max_{\mathbf{\omega},m}\left\lbrace\frac{m}{\|\mathbf{\omega}\|}\right\rbrace$$
subject to $y_i\cdot(\mathbf{\omega}^T\mathbf{x}_i)=m$, $\forall i=1,\cdots,n$. Again, the problem is ill posed (non identifiable), and we can consider $m=1$: $$\max_{\mathbf{\omega}}\left\lbrace\frac{1}{\|\mathbf{\omega}\|}\right\rbrace$$
subject to $y_i\cdot(\mathbf{\omega}^T\mathbf{x}_i)=1$, $\forall i=1,\cdots,n$. The optimization objective can be written$$\min_{\mathbf{\omega}}\left\lbrace\|\mathbf{\omega}\|^2\right\rbrace$$
## The primal problem
In the separable case, consider the following primal problem,$$\min_{\mathbf{w}\in\mathbb{R}^d,b\in\mathbb{R}}\left\lbrace\frac{1}{2}\|\mathbf{\omega}\|^2\right\rbrace$$subject to $y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1$, $\forall i=1,\cdots,n$.
In the non-separable case, introduce slack (error) variables $\mathbf{\xi}$ : if $y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1$, there is no error $\xi_i=0$.
Let $C$ denote the cost of misclassification. The optimization problem becomes$$\min_{\mathbf{w}\in\mathbb{R}^d,b\in\mathbb{R},{\color{red}{\mathbf{\xi}}}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\|\mathbf{\omega}\|^2 + C\sum_{i=1}^n\xi_i\right\rbrace$$subject to $y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1-{\color{red}{\xi_i}}$, with ${\color{red}{\xi_i}}\geq 0$, $\forall i=1,\cdots,n$.
Let us try to code this optimization problem. The dataset is here
n = length(myocarde[,"PRONO"])
myocarde0 = myocarde
myocarde0$PRONO = myocarde$PRONO*2-1
C = .5
and we have to set a value for the cost $C$. In the (linearly) constrained optimization function in R, we need to provide the objective function $f(\mathbf{\theta})$ and the gradient $\nabla f(\mathbf{\theta})$.
f = function(param){
  w = param[1:7]
  b = param[8]
  xi = param[8+1:nrow(myocarde)]
  .5*sum(w^2) + C*sum(xi)}
grad_f = function(param){
  w = param[1:7]
  b = param[8]
  xi = param[8+1:nrow(myocarde)]
  c(2*w,0,rep(C,length(xi)))}
and (linear) constraints are written as $\mathbf{U}\mathbf{\theta}-\mathbf{c}\geq \mathbf{0}$
U = rbind(cbind(myocarde0[,"PRONO"]*as.matrix(myocarde[,1:7]),diag(n),myocarde0[,"PRONO"]),
          cbind(matrix(0,n,7),diag(n,n),matrix(0,n,1)))
C = c(rep(1,n),rep(0,n))
Then we use
constrOptim(theta=p_init, f, grad_f, ui = U,ci = C)
Observe that something is missing here: we need a starting point for the algorithm, $\mathbf{\theta}_0$. Unfortunately, I could not think of a simple technique to get a valid starting point (that satisfies those linear constraints).
Let us try something else. Because those functions are quite simple: either linear or quadratic. Actually, one can recognize in the separable case, but also in the non-separable case, a classic quadratic program$$\min_{\mathbf{z}\in\mathbb{R}^d}\left\lbrace\frac{1}{2}\mathbf{z}^T\mathbf{D}\mathbf{z}-\mathbf{d}\mathbf{z}\right\rbrace$$subject to $\mathbf{A}\mathbf{z}\geq\mathbf{b}$.
library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
D = diag(n+7+1)
diag(D)[8+0:n] = 0
d = matrix(c(rep(0,7),0,rep(C,n)), nrow=n+7+1)
A = Ui
b = Ci
sol = solve.QP(D+eps*diag(n+7+1), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution
(omega = qpsol[1:7])
[1] -0.106642005446 -0.002026198103 -0.022513312261 -0.018958578746 -0.023105767847 -0.018958578746 -1.080638988521
(b = qpsol[n+7+1])
[1] 997.6289927
Given an observation $\mathbf{x}$, the prediction is $$y=\text{sign}[\mathbf{\omega}^T\mathbf{x}+b]$$
y_pred = 2*((as.matrix(myocarde0[,1:7])%*%omega+b)>0)-1
Observe that here, we do have a classifier, depending if the point lies on the left or on the right (above or below, etc.) of the separating line (or hyperplane). We do not have a probability, because there is no probabilistic model here. So far.
## The dual problem
The Lagrangian of the separable problem could be written introducing Lagrange multipliers $\mathbf{\alpha}\in\mathbb{R}^n$, $\mathbf{\alpha}\geq \mathbf{0}$ as$$\mathcal{L}(\mathbf{\omega},b,\mathbf{\alpha})=\frac{1}{2}\|\mathbf{\omega}\|^2-\sum_{i=1}^n \alpha_i\big(y_i(\mathbf{\omega}^T\mathbf{x}_i+b)-1\big)$$Somehow, $\alpha_i$ represents the influence of the observation $(y_i,\mathbf{x}_i)$.
Consider the Dual Problem, with $\mathbf{G}=[G_{ij}]$ and $G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i$ $$\min_{\mathbf{\alpha}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\mathbf{\alpha}^T\mathbf{G}\mathbf{\alpha}-\mathbf{1}^T\mathbf{\alpha}\right\rbrace$$ subject to $\mathbf{y}^T\mathbf{\alpha}=\mathbf{0}$ and $\mathbf{\alpha}\geq\mathbf{0}$.
The Lagrangian of the non-separable problem could be written introducing Lagrange multipliers $\mathbf{\alpha},{\color{red}{\mathbf{\beta}}}\in\mathbb{R}^n$, $\mathbf{\alpha},{\color{red}{\mathbf{\beta}}}\geq \mathbf{0}$, and define the Lagrangian $\mathcal{L}(\mathbf{\omega},b,{\color{red}{\mathbf{\xi}}},\mathbf{\alpha},{\color{red}{\mathbf{\beta}}})$ as$$\frac{1}{2}\|\mathbf{\omega}\|^2+{\color{blue}{C}}\sum_{i=1}^n{\color{red}{\xi_i}}-\sum_{i=1}^n \alpha_i\big(y_i(\mathbf{\omega}^T\mathbf{x}_i+b)-1+{\color{red}{\xi_i}}\big)-\sum_{i=1}^n{\color{red}{\beta_i}}{\color{red}{\xi_i}}$$
Somehow, $\alpha_i$ represents the influence of the observation $(y_i,\mathbf{x}_i)$. The Dual Problem becomes, with $\mathbf{G}=[G_{ij}]$ and $G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i$,$$\min_{\mathbf{\alpha}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\mathbf{\alpha}^T\mathbf{G}\mathbf{\alpha}-\mathbf{1}^T\mathbf{\alpha}\right\rbrace$$ subject to $\mathbf{y}^T\mathbf{\alpha}=\mathbf{0}$, $\mathbf{\alpha}\geq\mathbf{0}$ and $\mathbf{\alpha}\leq {\color{blue}{C}}$.
As previously, one can also use quadratic programming
library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
Q = sapply(1:n, function(i) y[i]*t(X)[,i])
D = t(Q)%*%Q
d = matrix(1, nrow=n)
A = rbind(y,diag(n),-diag(n))
C = .5
b = c(0,rep(0,n),rep(-C,n))
sol = solve.QP(D+eps*diag(n), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution
The two problems are connected in the sense that for all $\mathbf{x}$$$\mathbf{\omega}^T\mathbf{x}+b = \sum_{i=1}^n \alpha_i y_i (\mathbf{x}^T\mathbf{x}_i)+b$$
To recover the solution of the primal problem,$$\mathbf{\omega}=\sum_{i=1}^n \alpha_iy_i \mathbf{x}_i$$thus
omega = apply(qpsol*y*X,2,sum)
omega
                            1                         FRCAR                         INCAR                         INSYS
  0.0000000000000002439074265   0.0550138658687635215271960  -0.0920163239049630876653652   0.3609571899422952534486342
                        PRDIA                         PAPUL                         PVENT                         REPUL
 -0.1094017965288692356695677  -0.0485213403643276475207813  -0.0660058643191372279579454   0.0010093656567606212794835
while $b=y-\mathbf{\omega}^T\mathbf{x}$ (but actually, one can add the constant vector in the matrix of explanatory variables).
More generally, consider the following function (to make sure that $D$ is a definite-positive matrix, we use the nearPD function).
svm.fit = function(X, y, C=NULL) {
  n.samples = nrow(X)
  n.features = ncol(X)
  K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
  for (i in 1:n.samples){
    for (j in 1:n.samples){
      K[i,j] = X[i,] %*% X[j,] }}
  Dmat = outer(y,y) * K
  Dmat = as.matrix(nearPD(Dmat)$mat)
  dvec = rep(1, n.samples)
  Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
  bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
  res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
  a = res$solution
  bomega = apply(a*y*X,2,sum)
  return(bomega)
}
On our dataset, we obtain
M = as.matrix(myocarde[,1:7])
center = function(z) (z-mean(z))/sd(z)
for(j in 1:7) M[,j] = center(M[,j])
bomega = svm.fit(cbind(1,M),myocarde$PRONO*2-1,C=.5)
y_pred = 2*((cbind(1,M)%*%bomega)>0)-1
table(obs=myocarde0$PRONO,pred=y_pred)
     pred
obs  -1  1
  -1 27  2
  1   9 33
i.e. 11 misclassification, out of 71 points (which is also what we got with the logistic regression).
## Kernel Based Approach
In some cases, it might be difficult to "separate" the two sets of points with a linear separator, like below,
It might be difficult, here, because we want to find a straight line in the two dimensional space $(x_1,x_2)$. But maybe, we can distort the space, possibly by adding another dimension
That’s heuristically the idea. Because on the case above, in dimension 3, the set of points is now linearly separable. And the trick to do so is to use a kernel. The difficult task is to find the good one (if any).
A positive kernel on $\mathcal{X}$ is a function $K:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ symmetric, and such that for any $n$, $\forall\alpha_1,\cdots,\alpha_n$ and $\forall\mathbf{x}_1,\cdots,\mathbf{x}_n$,$$\sum_{i=1}^n\sum_{j=1}^n\alpha_i\alpha_j k(\mathbf{x}_i,\mathbf{x}_j)\geq 0.$$
For example, the linear kernel is $k(\mathbf{x}_i,\mathbf{x}_j)=\mathbf{x}_i^T\mathbf{x}_j$. That’s what we’ve been using here, so far. One can also define the product kernel $k(\mathbf{x}_i,\mathbf{x}_j)=\kappa(\mathbf{x}_i)\cdot\kappa(\mathbf{x}_j)$ where $\kappa$ is some function $\mathcal{X}\rightarrow\mathbb{R}$.
Finally, the Gaussian kernel is $k(\mathbf{x}_i,\mathbf{x}_j)=\exp[-\|\mathbf{x}_i-\mathbf{x}_j\|^2]$.
Since it is a function of $\|\mathbf{x}_i-\mathbf{x}_j\|$, it is also called a radial kernel.
linear.kernel = function(x1, x2) {
  return (x1%*%x2)
}
svm.fit = function(X, y, FUN=linear.kernel, C=NULL) {
  n.samples = nrow(X)
  n.features = ncol(X)
  K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
  for (i in 1:n.samples){
    for (j in 1:n.samples){
      K[i,j] = FUN(X[i,], X[j,])
    }
  }
  Dmat = outer(y,y) * K
  Dmat = as.matrix(nearPD(Dmat)$mat)
  dvec = rep(1, n.samples)
  Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
  bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
  res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
  a = res$solution
  bomega = apply(a*y*X,2,sum)
  return(bomega)
}
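For instance, a radial (Gaussian) kernel in the same style, with a bandwidth parameter gamma that is our addition (it does not appear in the post), could be written

    radial.kernel = function(x1, x2, gamma = 1) {
      return(exp(-gamma * sum((x1 - x2)^2)))   # exp(-gamma * ||x1 - x2||^2)
    }

Note, however, that with a non-linear kernel the classifier has to be kept in its dual form $\text{sign}\big[\sum_i \alpha_i y_i k(\mathbf{x}_i,\mathbf{x})+b\big]$: the last lines of svm.fit, which rebuild a single vector $\mathbf{\omega}$, are only meaningful for the linear kernel.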
## Link to the regression
To relate this duality optimization problem to OLS, recall that $y=\mathbf{x}^T\mathbf{\omega}+\varepsilon$, so that $\widehat{y}=\mathbf{x}^T\widehat{\mathbf{\omega}}$, where $\widehat{\mathbf{\omega}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}$
But one can also write $$y=\mathbf{x}^T\widehat{\mathbf{\omega}}=\sum_{i=1}^n \widehat{\alpha}_i\cdot \mathbf{x}^T\mathbf{x}_i$$
where $\widehat{\mathbf{\alpha}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\widehat{\mathbf{\omega}}$, or conversely, $\widehat{\mathbf{\omega}}=\mathbf{X}^T\widehat{\mathbf{\alpha}}$.
## Application (on our small dataset)
One can actually use a dedicated R package to run a SVM. To get the linear kernel, use
library(kernlab)
df0 = df
df0$y = 2*(df$y=="1")-1
SVM1 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , type="C-svc")
Since the dataset is not linearly separable, there will be some mistakes here
table(df0$y,predict(SVM1))
     -1 1
  -1  2 2
  1   1 5
The problem with that function is that it cannot be used to get a prediction for other points than those in the sample (and I could neither extract $\omega$ nor $b$ from the 24 slots of that object). But it's possible by adding a small option in the function
SVM2 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")
With that function, we convert the distance into some sort of probability. Someday, I will try to replicate the probabilistic version of SVM, I promise, but today, the goal is just to understand what is done when running the SVM algorithm. To visualize the prediction, use
pred_SVM2 = function(x,y){
  return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,nlevels = .5,col="red")
Here the cost is $C$=.5, but of course, we can change it
SVM2 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")
pred_SVM2 = function(x,y){
  return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")
As expected, we have a linear separator. But slightly different. Now, let us consider the "Radial Basis Gaussian kernel"
SVM3 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "rbfdot" , prob.model=TRUE, type="C-svc")
Observe that here, we've been able to separate the white and the black points
table(df0$y,predict(SVM3))
     -1 1
  -1  4 0
  1   0 6
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM3(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")
Now, to be completely honest, if I understand the theory of the algorithm used to compute $\omega$ and $b$ with the linear kernel (using quadratic programming), I do not feel comfortable with this R function. Especially if you run it several times… you can get (with exactly the same set of parameters) or (to be continued…)
# Traveling Salesman
In the second part of the course on graphs and networks, we will focus on economic applications, and flows. The first series of slides are on the traveling salesman problem. Slides are available online.
# Simple and heuristic optimization
This week, at the Rmetrics conference, there has been an interesting discussion about heuristic optimization. The starting point was simple: in complex optimization problems (here we mean with a lot of local maxima, for instance), we do not necessarily need extremely advanced algorithms that do converge extremely fast, if we cannot ensure that they reach the optimum. Converging extremely fast, with a great numerical precision, to some point (that is not the point we're looking for) is useless. And some algorithms might be much slower, but at least, it is much more likely to converge to the optimum. Wherever we start from.
We have experienced that with Mathieu, while we were looking for the maximum likelihood of our MINAR process: genetic algorithms have performed extremely well.
The idea is extremely simple, and natural. Let us consider as a starting point the following algorithm,
1. Start from some
2. At step , draw a point in a neighborhood of
• either then
• or then
This is simple (if you do not enter into details about what such a neighborhood should be). But using that kind of algorithm, you might get trapped and attracted to some local optima if the neighborhood is not large enough. An alternative to this technique is the following: it might be interesting to change a bit more, and instead of changing when we have a maximum, we change if we have almost a maximum. Namely at step ,
• either then
• or then
for some . To illustrate the idea, consider the following function
> f=function(x,y) { r <- sqrt(x^2+y^2);
+ 1.1^(x+y)*10 * sin(r)/r }
(on some bounded support). Here, by picking noise and values arbitrarily, we have obtained the following scenarios
> x0=15
> MX=matrix(NA,501,2)
> MX[1,]=runif(2,-x0,x0)
> k=.5
> for(s in 2:501){
+ bruit=rnorm(2)
+ X=MX[s-1,]+bruit*3
+ if(X[1]>x0){X[1]=x0}
+ if(X[1]<(-x0)){X[1]=-x0}
+ if(X[2]>x0){X[2]=x0}
+ if(X[2]<(-x0)){X[2]=-x0}
+ if(f(X[1],X[2])+k>f(MX[s-1,1],
+ MX[s-1,2])){MX[s,]=X}
+ if(f(X[1],X[2])+k<=f(MX[s-1,1],
+ MX[s-1,2])){MX[s,]=MX[s-1,]}
+ }
It does not always converge towards the optimum, and sometimes, we just missed it after being extremely unlucky.
Note that if we run 10,000 scenarios (with different random noises and starting points), in 50% of the scenarios, we reach the maxima. Or at least, we are next to it, on top.
What if we compare with a standard optimization routine, like Nelder-Mead, or quasi gradient? Since we look for the maxima on a restricted domain, we can use the following function,
> g=function(x) f(x[1],x[2])
> optim(X0, g,method="L-BFGS-B",
+ lower=-c(x0,x0),upper=c(x0,x0))$par
In that case, if we run the algorithm with 10,000 random starting point, this is where we end, below on the right (while the heuristic technique is on the left),
In only 15% of the scenarios, we have been able to reach the region where the maximum is.
So here, it looks like a heuristic method works extremely well, if we do not need to reach the maxima with a great precision. Which is usually the case actually.
# EM and mixture estimation
Following my previous post on optimization and mixtures (here), Nicolas told me that my idea was probably not the most clever one (there).
So, we get back to our simple mixture model,
In order to describe how EM algorithm works, assume first that both and are perfectly known, and the mixture parameter is the only one we care about.
• The simple model, with only one parameter that is unknown
Here, the likelihood is
so that we write the log likelihood as
which might not be simple to maximize. Recall that the mixture model can be interpreted through a latent variate (that cannot be observed), taking value when is drawn from , and 0 if it is drawn from . More generally (especially in the case we want to extend our model to 3, 4, … mixtures), and .
With that notation, the likelihood becomes
and the log likelihood
the term on the right is useless since we only care about p, here. From here, consider the following iterative procedure,
Assume that the mixture probability is known, denoted . Then I can predict the value of (i.e. and ) for all observations,
So I can inject those values into my log likelihood, i.e. in
having maximum (no need to run numerical tools here)
that will be denoted . And I can iterate from here.
Formally, the first step is where we calculate an expected (E) value, where is the best predictor of given my observations (as well as my belief in ). Then comes a maximization (M) step, where using , I can estimate probability .
• A more general framework, all parameters are now unknown
So far, it was simple, since we assumed that and were perfectly known. Which is not realistic. And there is not much to change to get a complete algorithm, to estimate . Recall that we had which was the expected value of Z_{1,i}, i.e. it is a probability that observation i has been drawn from .
If , instead of being in the segment was in , then we could have considered mean and standard deviations of observations such that =0, and similarly on the subset of observations such that =1.
But we can’t. So what can be done is to consider as the weight we should give to observation i when estimating parameters of , and similarly, 1-would be weights given to observation i when estimating parameters of .
So we set, as before
and then
and for the variance, well, it is a weighted mean again,
and this is it.
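(The update formulas above were images that did not survive extraction; for the two-component Gaussian mixture used below, writing $\hat\gamma_i=\mathbb{E}[Z_{1,i}\mid x_i]$ for the weight computed in the E step, the standard weighted updates, which are the ones implemented in the R code below, are$$\hat{p}=\frac{1}{n}\sum_{i=1}^n\hat\gamma_i,\qquad \hat\mu_1=\frac{\sum_{i}\hat\gamma_i x_i}{\sum_{i}\hat\gamma_i},\qquad \hat\mu_2=\frac{\sum_{i}(1-\hat\gamma_i)x_i}{\sum_{i}(1-\hat\gamma_i)}$$and$$\hat\sigma_1^2=\frac{\sum_{i}\hat\gamma_i(x_i-\hat\mu_1)^2}{\sum_{i}\hat\gamma_i},\qquad \hat\sigma_2^2=\frac{\sum_{i}(1-\hat\gamma_i)(x_i-\hat\mu_2)^2}{\sum_{i}(1-\hat\gamma_i)}.$$)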
• Let us run the code on the same data as before
Here, the code is rather simple: let us start generating a sample
> X1 = rnorm(n,0,1)
> X20 = rnorm(n,0,1)
> Z = sample(c(1,2,2),size=n,replace=TRUE)
> X2=4+X20
> X = c(X1[Z==1],X2[Z==2])
then, given a vector of initial values (that I called and then before),
> s = c(0.5, mean(X)-1, var(X), mean(X)+1, var(X))
I define my function as,
> em = function(X0,s) {
+ Ep = s[1]*dnorm(X0, s[2], sqrt(s[4]))/(s[1]*dnorm(X0, s[2], sqrt(s[4])) +
+ (1-s[1])*dnorm(X0, s[3], sqrt(s[5])))
+ s[1] = mean(Ep)
+ s[2] = sum(Ep*X0) / sum(Ep)
+ s[3] = sum((1-Ep)*X0) / sum(1-Ep)
+ s[4] = sum(Ep*(X0-s[2])^2) / sum(Ep)
+ s[5] = sum((1-Ep)*(X0-s[3])^2) / sum(1-Ep)
+ return(s)
+ }
Then I get , or . So this is it ! We just need to iterate (here I stop after 200 iterations) since we can see that, actually, our algorithm converges quite fast,
> for(i in 2:200){
+ s=em(X,s)
+ }
Let us run the same procedure as before, i.e. I generate samples of size 200, where difference between means can be small (0) or large (4),
Ok, Nicolas, you were right, we’re doing much better ! Maybe we should also go for a Gibbs sampling procedure ?… next time, maybe….
# Optimization and mixture estimation
Recently, one of my students asked me about optimization routines in R. He told me that R performed well on the estimation of a time series model with different regimes, while he had trouble with a (simple) GARCH process, and he was wondering if R was good in optimization routines. Actually, I always thought that mixtures (and regimes) were something difficult to estimate, so I was a bit surprised…
Indeed, it reminded me of some trouble I experienced once, while I was talking about maximum likelihood estimation, for non-standard distributions, i.e. when optimization had to be done on the log likelihood function. And even when generating nice samples, giving appropriate initial values (actually the true value used in random generation), each time I tried to optimize my log likelihood, it failed. So I decided to play a little bit with standard optimization functions, to see which one performed better when trying to estimate mixture parameters (from a mixture based sample). Here, I generate a mixture of two gaussian distributions, and I would like to see how different the mean should be to have a high probability to estimate properly the parameters of the mixture.
The density is here proportional to
The true model is , and being a parameter that will change, from 0 to 4.
The log likelihood (actually, I add a minus since most of the optimization functions actually minimize functions) is
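(The formula images did not survive extraction either; judging from the code below, with parameters $p$, $m_1$, $\sigma_1$, $m_2$, $\sigma_2$ and $\varphi(\cdot;m,\sigma)$ the Gaussian density, the density and the minimized quantity are presumably$$f(x)=p\,\varphi(x;m_1,\sigma_1)+(1-p)\,\varphi(x;m_2,\sigma_2)$$and$$-\log\mathcal{L}=-\sum_{i=1}^n\log\big(p\,\varphi(x_i;m_1,\sigma_1)+(1-p)\,\varphi(x_i;m_2,\sigma_2)\big).$$)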
> logvraineg <- function(param, obs) {
+ p <- param[1]
+ m1 <- param[2]
+ sd1 <- param[3]
+ m2 <- param[4]
+ sd2 <- param[5]
+ -sum(log(p * dnorm(x = obs, mean = m1, sd = sd1) + (1 - p) *
+ dnorm(x = obs, mean = m2, sd = sd2)))
+ }
The code to generate my samples is the following,
>X1 = rnorm(n,0,1)
> X20 = rnorm(n,0,1)
> Z = sample(c(1,2,2),size=n,replace=TRUE)
> X2=m+X20
> X = c(X1[Z==1],X2[Z==2])
Then I use two functions to optimize my log likelihood, with identical initial values,
> O1=nlm(f = logvraineg, p = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)), obs = X)
> logvrainegX <- function(param) {logvraineg(param,X)}
> O2=optim( par = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)),
+ fn = logvrainegX)
Actually, since I might have identification problems, I take either or , depending whether or is the smallest parameter.
On the graph above, the x-axis is the difference between means of the mixture (as on the animated graph above). Then, the red point is the median of the estimated parameter I have (here ), and I have included something that can be interpreted as a confidence interval, i.e. where I have been in 90% of my scenarios: the black vertical segments. Obviously, when the sample is not heterogeneous enough (i.e. and rather different), I cannot estimate properly my parameters, I might even have a probability that exceeds 1 (I did not add any constraint). The blue plain horizontal line is the true value of the parameter, while the blue dotted horizontal line is the initial value of the parameter in the optimization algorithm (I started assuming that the mixture probability was around 0.2).
The graph below is based on the second optimization routine (with identical starting values, and of course on the same generated samples),
(just to be honest, in many cases, it did not converge, so the loop stopped, and I had to run it again… so finally, my study is based on a bit less than 500 samples (times 15 since I considered several values for the mean of my second underlying distribution), with 200 generated observations from a mixture).
The graph below compares the two (empty circles are the first algorithm, while plain circles the second one),
On average, it is not so bad…. but the probability to be far away from the true value is not small at all… except when the difference between the two means exceeds 3…
If I change starting values for the optimization algorithm (previously, I assumed that the mixture probability was 1/5, here I start from 1/2), we have the following graph
which looks like the previous one, except for small differences between the two underlying distributions (just as if initial values had no impact on the optimization, but it might come from the fact that the surface is nice, and we are not trapped in regions of local minimum).
Thus, I am far from being an expert in optimization routines in R (see here for further information), but so far, it looks like R is not doing so bad… and the two algorithms perform similarly (maybe the first one being a bit closer to the true parameter).
http://mathhelpforum.com/calculus/144153-find-length-curve-over-interval.html | # Math Help - find the length of the curve over the interval
1. ## find the length of the curve over the interval
i kinda got stuck on this problem and im not sure where to go with it??? any suggestions? thx in advance (attachment: IMG_0002.pdf)
2. Originally Posted by slapmaxwell1
i kinda got stuck on this problem and im not sure where to go with it??? any suggestions? thx in advance (attachment: IMG_0002.pdf)
1 + cos(θ) = 2cos^2(θ/2)
Now proceed with integration.
http://mathhelpforum.com/algebra/41451-ratio.html | # Math Help - Ratio
1. ## Ratio
Hello all,
I need some help on this problem:
If $\frac{y+z}{pb+qc} = \frac{z+x}{pc+qa} = \frac{x+y}{pa+qb}$, then show that $\frac{2(x+y+z)}{a+b+c} = \frac{(b+c)x + (c+a)y + (a+b)z}{bc+ ca + ab}$
2. Hi!
You can easily get it by using the property
if a/b = c/d
then a/b = c/d = (a+c)/(b+d)
Try it. If you can't get it, tell me.
Keep Smiling
Malay
3. I tried ...
The sum of $\frac{y+z}{pb+qc} = \frac{z+x}{pc+qa} =\frac{x+y}{pa+qb} = \frac{2(x+y+z)}{(p+q)(a+b+c)}$
It is close to the proof except for $(p+q)$ in the denominator. What should I do next? It can disappear if I assume $(p+q) = 1$
$\frac{y+z}{pb+qc}=\frac{ay+az}{pab+qac}$
$\frac{z+x}{pc+qa}=\frac{bz+bx}{pbc+qab}$
$\frac{x+y}{pa+qb}=\frac{cx+cy}{pac+qbc}$
$\frac{y+z}{pb+qc}+\frac{z+x}{pc+qa}+\frac{x+y}{pa+qb}=\frac{ay+az}{pab+qac}+\frac{bz+bx}{pbc+qab}+\frac{cx+cy}{pac+qbc}$
http://mathhelpforum.com/trigonometry/207926-need-help-double-angle-question.html | # Math Help - Need help with a double angle question
1. ## Need help with a double angle question
If cos2x=12/13 and 2x is in the first quadrant, evaluate the six trigonometric functions for x, without your calculator. The answer is:
cos x = 5/√26; sec x = √26/5
sin x = 1/√26; csc x = √26
tan x = 1/5; cot x = 5
How do I get these answers? Any help would be greatly appreciated.
2. ## Re: Need help with a double angle question
Do you know either the "double angle formulas" or the "half angle formulas"?
3. ## Re: Need help with a double angle question
since $0 < 2x < \frac{\pi}{2}$ , then $0 < x < \frac{\pi}{4}$
$\cos{x} = \sqrt{\frac{1 + \cos(2x)}{2}}$
$\sin{x} = \sqrt{\frac{1 - \cos(2x)}{2}}$
sub in the value given for $\cos(2x)$ ...
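Carrying out that substitution, as a quick check of the answers quoted in the question:
$$\cos x = \sqrt{\frac{1 + \tfrac{12}{13}}{2}} = \sqrt{\frac{25}{26}} = \frac{5}{\sqrt{26}}, \qquad \sin x = \sqrt{\frac{1 - \tfrac{12}{13}}{2}} = \frac{1}{\sqrt{26}},$$
and the remaining four values follow as reciprocals together with $\tan x = \sin x/\cos x = 1/5$.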
4. ## Re: Need help with a double angle question
Thanks guys, I got it now.
https://3dprinting.stackexchange.com/questions/9994/no-extrusion-but-manual-extrusion-works | No extrusion, but manual extrusion works
I just bought and built my first 3D printer (HE3D K280 with Marlin) and I'm encountering some problems with Cura 4 and Repetier. When I load and slice a part, the printer does not extrude anything during printing. However, when I manually extrude like 100 mm (G1 F100 E100) it does work. Now I'm suspecting the problem lies with the G-code file which is generated with Cura, since it contains very small values for E:
;Layer height: 0.2
;Generated with Cura_SteamEngine 4.0.0
M140 S60
M105
M190 S60
M104 S200
M105
M109 S200
M82 ;absolute extrusion mode
G28 ;Home
G1 Z15.0 F6000 ;Move the platform down 15mm
;Prime the extruder
G92 E0
G1 F200 E3
G92 E0
G92 E0
G1 F1500 E-6.5
;LAYER_COUNT:250
;LAYER:0
M107
G0 F3600 X-7.753 Y4.378 Z0.3
;TYPE:SKIRT
G1 F1500 E0
G1 F1800 X-8.127 Y3.918 E0.01115
G1 X-8.35 Y3.57 E0.01893
G1 X-9.088 Y2.287 E0.04677
G1 X-9.348 Y1.754 E0.05792
G1 X-9.483 Y1.376 E0.06547
G1 X-11.413 Y-4.956 E0.18999
G1 X-11.547 Y-5.534 E0.20115
G1 X-11.602 Y-6.124 E0.2123
Etc...
Does anyone know how to fix this?
• How many heads is your printer capable of supporting concurrently? Have you tried entering a line with only T0 before your first G1 move? Do you know if you're slicing for linear or volumetric E values, and which your printer requires? – Davo May 21 '19 at 13:29
• It does support 2 heads, however I'm using just 1. I just added T0 but unfortunately this did not work. I'm slicing for linear but I tried both options and it did not extrude with either – Mikelo May 21 '19 at 13:38
• First thought volumetric flow, but on second thoughts: "What is the filament diameter in the slicer?" It looks as if the diameter is too large. – 0scar May 21 '19 at 13:53
I think that you have the incorrect filament diameter specified in your slicer (e.g. 2.85 mm instead of 1.75 mm); this also appears from a calculation, see below. Note that you can calculate either from the extruded volume entering the hotend or from the deposited volume. From the first you can calculate the line width of the deposited line and check it against the settings; from the second you can verify whether the deposited volume equals the filament volume going into the hotend for an assumed line width. Do note that (certainly for first layers!) modifiers may be in place. This is merely to get a ballpark feeling for the chosen filament diameter.
If you look at the first move from:
G0 F3600 X-7.753 Y4.378 Z0.3
to:
G1 F1800 X-8.127 Y3.918 E0.01115
You can calculate the travelled distance $$s = \sqrt{{\Delta X}^2+{\Delta Y}^2} = 0.59\ mm$$. Also, from these moves you can see that $$0.01115\ mm$$ of filament enters the extruder $$(E)$$.
The deposited volume ($$V_{extruded\ filament}$$) of the printed line equals the cross-sectional area $$\times$$ length of the deposited filament path. The area can be taken from e.g. the Slic3r reference manual as that of a rectangle with semicircular ends, $$A_{extruded\ filament} = (w - h)\,h + \pi\left(\tfrac{h}{2}\right)^2,$$ where $$w$$ is the line width and $$h$$ the layer height.
Basically (as we apply conservation of mass) the filament volume $$(V_{filament})$$ entering the hotend need to be the same as the extruded filament volume $$(V_{extruded_filament})$$ leaving the nozzle; so $$A_{filament}\times E = A_{extruded\ filament}\times s$$.
This latter equation can be solved for $$w$$ by filling out the known parameters. From this calculation follows that for $$1.75\ mm$$ filament you get a calculated line width of about $$0.22\ mm$$, and respectively for $$2.85\ mm$$ filament you get $$0.46\ mm$$ line widths.
As the nozzle diameter has not been specified in the question, but the most commonly used nozzle diameter is $$0.4\ mm$$, and modifiers for the first layer are at play to print thicker lines, you most probably have the wrong filament diameter set if you have a $$1.75\ mm$$ extruder setup. Basically it under-extrudes.
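The arithmetic above is easy to script. Below is a minimal sketch in plain Python; it assumes the rectangle-with-semicircular-ends area model mentioned above, the $$0.3\ mm$$ first-layer height from the posted G-code, and a helper name (`line_width`) of my own choosing:

```python
import math

def line_width(e_mm, travel_mm, layer_h=0.3, filament_d=1.75):
    """Solve A = (w - h)*h + pi*(h/2)**2 for the line width w, where A is the
    deposited cross-section implied by conservation of volume."""
    filament_area = math.pi * (filament_d / 2) ** 2       # mm^2 entering the hotend
    deposited_area = filament_area * e_mm / travel_mm     # mm^2 of the printed line
    return (deposited_area - math.pi * (layer_h / 2) ** 2) / layer_h + layer_h

# First printed segment from the question: E = 0.01115 mm over a ~0.59 mm move
travel = math.hypot(-8.127 - (-7.753), 3.918 - 4.378)
print(round(line_width(0.01115, travel, filament_d=1.75), 2))  # ~0.22 mm
print(round(line_width(0.01115, travel, filament_d=2.85), 2))  # ~0.46 mm
```

Running it reproduces the two line widths quoted above, which is what points to a filament-diameter mismatch.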
• I'd have thought that at least a little material would get pushed out in this case -- perhaps the retraction commands (for the wrong diameter again) keep material from ever getting to the nozzle output? – Carl Witthoft May 21 '19 at 15:59
• @CarlWitthoft That would be in case the g-code is in relative coordinates, this is absolute. Not always after retraction, the nozzle is fully primed with the same length extrusion. – 0scar May 21 '19 at 16:27
• This was indeed one of the problems! Another problem was that it didn't save the option that I wanted to slice volumetric (which was needed for my printer after all, the manual of the K280 sucks). Thanks for your help! :) – Mikelo May 22 '19 at 14:50
http://mathoverflow.net/questions/97857/partial-backups | # Partial backups
Suppose you have some storage medium of a given size M, and can make some kind of backup on another medium of size B with M > B. You can choose the scheme to determine the contents of the backup.
After you made that partial backup, an adversary (or a random process) will make a number of changes to your original medium. Given the changed medium and your partial backup, your task is to restore the original state of your medium. How many changes could you undo? What is the theoretical maximum? And how successful are the schemes you can come up with?
I have toyed with this question for a while. Obviously, in general you can not hope to undo more than B changes. Viewed more mathematically, I am looking for a systematic code that works with huge block sizes.
This question differs from the ones I've seen discussed in coding theory, because you're given that errors can be introduced only into the original medium, not into the backup. (Note that I wrote "I've seen" --- there's plenty of coding theory that I haven't seen, and questions like this may well have been treated there.) – Andreas Blass May 24 '12 at 18:31
I would start calculations based on Reed-Solomon code, but it has the following obvious drawbacks: 1) it will correct data only in chunks that are multiples of the size of the field element (=symbol), 2) the larger the total package M+B, the larger will the symbols have to be: with $r$ bit symbols the maximum size of M+B is $r\cdot 2^r$ bits, 3) the amount of corrupted data is counted in terms of the affected symbols, so if the adversary changes a single bit of a symbol, the entire symbol is corrupted, 4) it won't take advantage of the fact that the errors are all in the original. – Jyrki Lahtonen May 24 '12 at 19:25
...(continue, sorry). So I would like to know a little bit more about what kind of errors the adversary will be able to induce. Do we know anything about that? Will the adversary like make a pass with a magnet over your storage medium (in which case we might reasonably assume that contiguous blocks of data will be affected). A scheme based on an RS-code has the big plus side that with $R=B/r$ check symbols we can correct up to $R/2$ corrupted symbols. You can double this number, if (a big if, but again something I need to ask) we know the locations of the changes. – Jyrki Lahtonen May 24 '12 at 19:31
...(continue, sorry^2). How large can we expect M+B to be? Are we talking kilobytes, megabytes or gigabytes? At some point the granularity of RS-codes may become an issue. Another idea that comes to mind is to "waste" some of the storage space of the original copy by adding 32-bit CRCs to chunks of data (or some error-detection scheme like that). Then we can encode/decode on a chunk-by-chunk basis, and we shall automatically know which chunks are corrupted (in which case R extra chunks in B allow the recovery of R corrupted chunks in M). But again, a single flipped bit will ruin a chunk. – Jyrki Lahtonen May 24 '12 at 19:41
Without some restrictions on the backup, it seems to be a red herring. You want to be able to extract some number of bits out of $M$, a standard problem. If you can store $B$ bits reliably in the backup, this lowers the number of bits you need to store in the medium by $B$. – Douglas Zare May 24 '12 at 20:43
Your question is very similar to the extended idea of erasure-resilient codes discussed here:
Originally, erasure-resilient codes were introduced for RAID (redundant array of independent disks) and similar storage systems. They are systematic codes, and Chee, Colbourn, and Ling's version is good for the type of problem you described. As is often the case with studies on reliability of storage, the focus of erasure-resilient codes is on data corruptions, unreadable bits, disk failure, and the like (which are all "erasures" in math) rather than bit flips. But if we forget about more practical issues and focus on math, erasures and bit flips can both be treated the same way by the notion of minimum distance, so here's some little things that are known about such codes in the math literature.
The idea is basically the same as systematic linear codes. For the sake of simplicity, we only consider the binary case here. Assume that we have a linear $[n,k,d]$ code of length $n$, dimension $k$, and minimum distance $d$. Here, the dimension $k$ and the number $n-k$ will be your $M$ and $B$ respectively. Because it's systematic, we use a parity-check matrix $H$ in standard form:
\begin{align*} H &= \left[\begin{array}{cc}I & A\end{array}\right]\\ &= \left[\begin{array}{cccccccc} 1&0&\dots&0 & a_{0,0} & a_{0,1} & \dots &a_{0,k-1}\\ 0&1&\dots&0 & a_{1,0} & a_{1,1} & \dots &a_{1,k-1}\\ \vdots&\vdots&\ddots&\vdots&&&\vdots&\\ 0&0&\dots&1 & a_{n-k-1,0} & a_{n-k-1,1} & \dots &a_{n-k-1,k-1} \end{array}\right] \end{align*},
where $I$ is the $(n-k)\times(n-k)$ identity matrix and $A = (a_{i,j})$ is an $(n-k) \times k$ matrix with $a_{i,j} \in \mathbb{F}_2$. The rows of $H$ are indexed by the $n-k$ bits for "some kind of backup" in your question (or any kind of storage medium of size $B = n-k$ for that matter) and columns of $A$ are indexed by the $k$ data bits we want to protect (i.e., the original data of size $M = k$).
The backup scheme is that on the $i$th backup bit, we write the sum of the data bits according to whether $a_{i,j}$ is $0$ ("ignore") or $1$ ("add"), so that the $i$th backup bit $\beta_i$ is
$$\beta_i = \sum_{x \in \{j \ \mid\ a_{i,j} = 1\} } \delta_x \pmod{2},$$
where $\delta_j$ is the $j$th unreliable data bit we are going to protect.
It is straightforward to see that standard syndrome decoding will correct errors on the $\delta_i$ as long as the number of affected data bits is at most $\lfloor\frac{d-1}{2}\rfloor$; we just compare each $\beta_i$ with the sum of the corresponding data bits and see if they add up, which will give us the error syndrome.
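As an illustration of this backup-plus-syndrome scheme, here is a minimal Python sketch. The matrix $A$ below is my own choice (the parity part of the $[7,4]$ Hamming code), not one from the papers cited here; since the backups are assumed error-free, a single flipped data bit reproduces the corresponding column of $A$ as its syndrome:

```python
# A is (n-k) x k; here n = 7, k = 4 (parity part of the [7,4] Hamming code).
A = [[1, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 1, 1]]

def backups(data):
    """beta_i = sum of the data bits selected by row i of A, mod 2."""
    return [sum(a * d for a, d in zip(row, data)) % 2 for row in A]

def correct(received, beta):
    """Syndrome decoding, assuming the backups beta are error-free
    and at most one data bit was flipped."""
    syndrome = [(b + sum(a * d for a, d in zip(row, received))) % 2
                for row, b in zip(A, beta)]
    if any(syndrome):
        columns = [[row[c] for row in A] for c in range(len(A[0]))]
        j = columns.index(syndrome)        # the column of A equal to the syndrome
        received = received.copy()
        received[j] ^= 1                   # flip the faulty data bit back
    return received

data = [1, 0, 1, 1]
beta = backups(data)          # stored on the reliable medium
corrupted = [1, 1, 1, 1]      # adversary flips data bit 1
print(correct(corrupted, beta) == data)   # True
```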
Now, we have the assumption that the backups $\beta_i$ are more reliable than the original data $\delta_j$. (Chee, Colbourn, and Ling's view is a bit different. But in situations we consider, both views coincide.) In your case, all $\beta_i$ are assumed to be immune to errors.
The question is whether we can correct more than $\lfloor\frac{d-1}{2}\rfloor$ errors if $\beta_i$ are all reliable. In general, the answer should be yes. So, your question boils down to how much this assumption can increase the maximum number of tolerable errors and how we can construct systematic linear codes that take full advantage of the reliable backups.
Unfortunately, these questions appear to be open in general. Basically, we need to understand how the minimum distance changes when the $n-k$ check bits of systematic linear $[n,k,d]$ codes are chopped off to form new non-systematic linear codes of length $k$. But it can be proved that some nice combinatorial structures, when used as the $A$ part, roughly double the minimum distance compared to when the $\beta_i$ can be erroneous, while having a huge block size as you requested. For instance, the following paper proved that the incidence matrices of the Steiner $2$-designs forming the points and lines of affine geometry $\text{AG}(m,q)$ with $q$ odd are of this "almost doubling" kind:
M. Müller, M. Jimbo, Erasure-resilient codes from affine spaces, Discrete Appl. Math., 143 (2004) 292–297.
The code parameters in the case of affine geometry $\text{AG}(m,q)$ with $q$ odd and $m \geq 2$ is
\begin{align*} n &= q^{m-1}\frac{q^m-1}{q-1}+q^m,\\ k &= q^{m-1}\frac{q^m-1}{q-1},\\ d' &= 2q\ \quad \text{ if backups are reliable},\\ (d &= q+1\ \text{ if backups are as unreliable}). \end{align*}
Actually, their assumption is slightly more pessimistic in the sense that the $\beta_i$ are "less prone" to errors, not completely reliable as in your case. So they proved something stronger than the almost doubled minimum distance. In fact, their codes are asymptotically optimal under the pessimistic assumption as well as one more assumption that the column weights of $A$ are uniformly $w$ (which happens to be a reasonable thing to assume for the original, main intended purpose of erasure-resilient codes). Note that such $H = \left[\begin{array}{cc}I & A\end{array}\right]$ can be of minimum distance at most $d = w+1$ because a column of $A$ and a set of $w$ columns from $I$ can form a linearly dependent set. All else being equal, such error patterns are much less likely than those that involve fewer backup bits. (And in the case of perfectly reliable backups, they don't happen.)
As in your question, let $B$ be the number of backup bits. Assume that we require our erasure-resilient code to detect all errors on $d'-1$ bits or fewer except the very unlikely ones that involve one data bit and $w=d+1$ backups. Define $\text{ex}(B,d,d')$ to be the maximum number of data bits for such an erasure-resilient code. Chee, Colbourn, and Ling proved that
$$\text{ex}(B,d,d') \leq c\cdot B^{d-\lfloor\frac{d'-1}{2}\rfloor}$$
for some constant $c$. Because the codes from $\text{AG}(m,q)$ are of uniform column weight $q$, they asymptotically attain the upper bound $c\cdot B^2$ on the block size.
A friend of mine proved the same thing for projective geometry, although it's not published yet. (If you're curious about the exact statement, you can find it in the language of design theory as Theorem 3.16 in our preprint http://arxiv.org/pdf/1309.5587.pdf)
So, while the case of completely reliable backups doesn't seem to have directly been studied, there are similar studies that give infinitely many examples of codes that significantly improve error correction capabilities when backup data are free from errors. And they are designed to support extremely large numbers of data bits, which translates to huge block sizes as requested. But constructions for codes and general bounds on improved minimum distance $d'$ for when backup bits $\beta_i$ are perfectly reliable seem to be wide open.
https://mathoverflow.net/questions/309980/which-zero-diagonal-matrices-contain-the-all-one-vector-in-their-columns-conic | # Which zero-diagonal matrices contain the all-one vector in their columns' conic hull?
Let $A$ be a non-negative zero-diagonal invertible matrix. Which $A$ make the following assertions true, which are all equivalent:
1. The all-one vector $j$ is contained in the conic hull of $col(A)$.
2. The row sums of $A^{-1}$ are non-negative.
3. $ADj > 0$, where $D$ is any diagonal matrix with trace $1$.
4. The affine hull of $col(-A)$ does not intersect the non-negative orthant.
The equivalency of $(1)$ and $(2)$ follows from the equation $$Ax = j.$$ Assertion $(3)$ is deduced from Farkas' Lemma, as the existence of a positive solution to the above equation implies that there cannot exist a vector $y$ with $y'j = -1$ such that $Ay \geq 0$ (I normalized $y$ without loss of generality). The set of $y$ with sum-of-entries $-1$ is given by $\{y\;|\;y=-Dj: tr(D)=1 \;\text{and}\; D \; \text{diagonal}\}$, the affine combinations of the negative standard basis vectors. This leads to $-ADj <0$.
Finally, the matrix of images of the negative standard basis under $A$ is simply $-A$. Hence, requiring the affine hull of these images not to contain any non-negative vector should be equivalent to $(3)$.
Two sufficient conditions are that $A$ be positive monomial with zero diagonal (as at least one of the entries of $y$ must be negative), or the adjacency matrix of a regular graph. What can be said in general?
• What is a conic hull, please? – Gerry Myerson Sep 6 '18 at 11:49
• The (strict) conic hull of a set of real vectors $V=\{v_1, \dots, v_n\}$ is defined as $\left\{\sum_{i = 1}^n \alpha_i v_i \;|\; \alpha_i > 0 \right\}$. – bodhisat Sep 6 '18 at 11:57
• No. Consider the matrix $$\begin{pmatrix} 0&1&0 \\ 2&0&2 \\ 1&0&0 \end{pmatrix}$$ Vector $j$ lies in the span of the columns with weights $1,1,-1/2$. But it does not lie in their conic hull. – bodhisat Sep 6 '18 at 17:19
• The set of matrices $M$ with $j$ in their conic hull is a (closed?) convex cone. The set of non-negative matrices is a pointed, closed, convex cone, and the set of matrices with zero diagonals is a linear subspace. The set of matrices we are interested in is the intersection of all these, therefore it is also a (closed?) pointed convex cone. My intuition is that this whose set should be closed but I will have to double check this. Then we can apply theorem 2.55 from ams.jhu.edu/~abasu9/AMS_550-465/notes-without-frills.pdf to get a reduced version of the problem. – Pushpendre Sep 9 '18 at 19:22
• The cone is not convex. Consider the following counter-example: $$\begin{pmatrix} 0&9&1 \\ 3 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0&9&0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} + \begin{pmatrix} 0&0&1 \\ 3 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$ The matrix is a positive linear combination of two monomial matrices which are part of the cone. Yet their sum yields a matrix whose inverse has the row-sums $.75, .25, -1.25$. – bodhisat Sep 10 '18 at 17:30
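A quick numerical check of the counterexample in the preceding comment, sketched with NumPy (assumed available): condition (2) fails because the row sums of $A^{-1}$, i.e. the solution of $Ax = j$, are not all non-negative.

```python
import numpy as np

A = np.array([[0., 9., 1.],
              [3., 0., 1.],
              [1., 1., 0.]])
j = np.ones(3)

x = np.linalg.solve(A, j)      # row sums of A^{-1}
print(x)                       # [ 0.75  0.25 -1.25]
print(bool(np.all(x > 0)))     # False, so j is not in the conic hull of col(A)
```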
Some further thoughts approaching the solution.
I found a connection of the question to potential theory. Let $A$ be an invertible non-negative matrix, and $j$ the all-one vector. The (right) signed potential $\nu$ of $A$ is the solution to $$A\nu = j.$$ Additionally requiring $\nu > 0$ leads to a strict equilibrium potential. Hence, my question can be rephrased as follows:
Which are the necessary and sufficient conditions on $A$ for the existence of a strict equilibrium potential?
Of relevance is the definition of a strict potential matrix. $A$ is a potential iff its inverse is a strictly row-diagonally dominant $M$-matrix. This subsumes, for example, the strictly ultrametric matrices whose inverses are strictly row-diagonally dominant Stieltjes, and generalized strictly ultrametric matrices for $A$ asymmetric.
Yet none of these classes allows for zero diagonals, and all of this is just sufficent. For instance, positive monomial matrices obviously have strict equilibrium potentials as well. Strikingly, if $A$ has the potential $\nu > 0$, then $SA$, where $S$ is pseudo-stochastic (i.e. row-sums equal one) must have the exact same potential. This can be seen by noting that $SA\nu = Sj = j$.
Let $A = DS_A$, where $D_{ii} = \sum_j A_{ij}$. Interpreting $A$ as a graph, $S_A$ is the Markov chain on $A$. We now have $$DS_A\nu = j = S_A^{-1}DS_A\nu.$$
Hence, $A$ has the same potential as its degree matrix, expressed in the basis given by the columns of its Markov chain. Is this useful to think about the existence of equilibrium potentials in general?
Nabben, Reinhard, and Richard S. Varga. "Generalized ultrametric matrices—a class of inverse M-matrices." Linear Algebra and its Applications 220 (1995): 365-390.
• The zero-diagonal requirement can be dispensed with, by the way. I found a way to get rid of it for my purposes. – bodhisat Sep 18 '18 at 13:44
http://mathhelpforum.com/pre-calculus/218891-radicals-5-a.html | 1. ## Radicals #5
imgur: the simple image sharer
d)
The dividing 4 part is confusing me, I'm not sure what to do:
imgur: the simple image sharer
I was pretty close, I just got 7 instead of 13/4...
3. ## Re: Radicals #5
Are you sure you did question d) ? I do not follow.
4. ## Re: Radicals #5
You are right, I did part (d) of the question above. Q 9 (d) is attached herewith;
Thank you!
https://www.physicsforums.com/threads/calculating-molarity.428474/ | # Homework Help: Calculating Molarity
1. Sep 12, 2010
### Jim4592
1. The problem statement, all variables and given/known data
Eighty-six proof whiskey is 43 percent ethyl alcohol, CH3CH2OH, by volume. If the density of ethyl alcohol is 0.79 kg/L, what is the molarity of ethyl alcohol in whiskey?
2. Relevant equations
Molar mass of CH3CH2OH = 46.07 g/mol
3. The attempt at a solution
0.79 kg/L * 1000 g/kg * 1 mole / 46.07 g = 17.15 M
I was just looking for a check on this particular problem since I haven't taken a chemistry course since my freshman year ha!
2. Sep 13, 2010
### Staff: Mentor
Not bad - you are on the right track - but wrong. You have not used the 43% in your calculations, and this is important information.
Last edited by a moderator: Aug 13, 2013
3. Sep 13, 2010
### Jim4592
I thought you would have to use that 43% in there somewhere, but I'm not sure how to use it. It would be nice if I still owned my chem book.
UPDATE:
Ok I tried re-working the problem again, here's what I came up with:
0.79 kg/L * 0.43 L / 1 L * 1000 g / 1 kg * 1 mol / 46.07 g = 7.37 M
how does that look?
Last edited: Sep 13, 2010
4. Sep 14, 2010
Much better.
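For completeness, the unit chain from the thread as a tiny Python check (nothing here beyond the arithmetic already posted):

```python
# 43 % ethanol by volume, density 0.79 kg/L, molar mass 46.07 g/mol
ethanol_volume_per_litre = 0.43                    # L of ethanol per L of whiskey
mass_g = ethanol_volume_per_litre * 0.79 * 1000    # grams of ethanol per litre
molarity = mass_g / 46.07                          # mol of ethanol per litre of whiskey
print(round(molarity, 2))                          # ~7.37 M
```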
http://www.webdeveloper.com/forum/showthread.php?275311-Image-and-text-on-same-line&p=1257687&mode=threaded | ## Image and text on same line
I'm trying to have an image of fixed height/width on the left, and text on the right, on the same line of course. The overall container has a dynamic width of 90% of the viewport, meaning that the text on the right will also have a dynamic width (90% minus the image width) since the image on the left is fixed. The text needs to be aligned left, so "float:right" won't work. I've tried countless combinations of floats, aligns, table cells, etc., and nothing works... the closest I've got was having them on the same line, but the text was forced to align to the right.
Image of what I mean: http://i.imgur.com/QRDhLro.png
Code:
```#container {
overflow:hidden;
position:relative;
width:90%;
min-width:800px;
margin-bottom:20px;
margin-top:20px;
margin-left:auto;
margin-right:auto;
}
.leftimage {
width:600px;
height:100px;
}
.righttext {
float:right;
}```
Code:
```<div id="container">
<div class="righttext">lorem ipsum lorem ipsum <br> lorem ipsum lorem ipsum </div>
<div class="leftimage"><img src="../pictures/test.png"></div>
</div>```
Thanks.
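One possible fix, sketched below (untested against the original page; the class names are the ones from the question): float the fixed-size image left and give the text block `overflow:hidden`, which makes it fill the remaining width while staying left-aligned, with the `.leftimage` div placed before `.righttext` in the markup.
```
.leftimage {
    float: left;           /* image hugs the left edge */
    width: 600px;
    height: 100px;
}
.righttext {
    overflow: hidden;      /* new block formatting context: text takes the remaining width */
    text-align: left;
}
```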
http://tex.stackexchange.com/questions/25215/combining-tikz-foreach-and-let-operation?answertab=votes | # Combining TikZ foreach and let operation
I'm not having any luck using the TikZ let operation inside a foreach loop. Is there anything I'm missing?
Sample code (that doesn't work):
\begin{tikzpicture}
\foreach \y in {1,2,3}
{\draw (0,0) -- (3,\y);
\draw let
\p1 = (3,\y),
\n1 = {atan2(\x1,\y1)} in
(\y,0) arc [start angle = 0, end angle = \n1, radius=\y];
}
\end{tikzpicture}
The let syntax is perfectly valid inside a \foreach. You do however have a clash of variable names: The \y from the loop conflicts with the \y⟨n⟩ from let. Simply renaming the loop counter solves the problem:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc}
\begin{document}
\begin{tikzpicture}
\foreach \a in {1,2,3}
{\draw (0,0) -- (3,\a);
\draw let
\p1 = (3,\a),
\n1 = {atan2(\x1,\y1)} in
(\a,0) arc [start angle = 0, end angle = \n1, radius=\a];
}
\end{tikzpicture}
\end{document}
The underlying problem is that TeX macro names cannot contain numbers. So let has to define a macro called \y that reads the 1 (or other number) as a parameter and then redirects to the correct value. This of course overrides the \y coming from the loop. So you (presumably) get an error on (\y,0), because the new \y (inside the let) expects to be followed by a number, not a ,.
Brilliant, thanks! – Simon Byrne Aug 8 '11 at 17:05
This also made me suffer! This incompatibility should be mentioned in TikZ manual. – Gonzalo Jan 8 '13 at 2:36
https://demo7.dspace.org/items/5625edff-5ff1-46f2-a702-01c413748082 | ## On certain permutation representations of the braid group
##### Authors
Iliev, Valentin Vankov
##### Description
This paper is devoted to the proof of a structural theorem, concerning certain homomorphic images of Artin braid group on $n$ strands in finite symmetric groups. It is shown that any one of these permutation groups is an extension of the symmetric group on $n$ letters by an appropriate abelian group, and in "half" of the cases this extension splits.
Comment: 10 pages, modified theorem, corrected typos
##### Keywords
Mathematics - Group Theory, Mathematical Physics, 20F36, 20E22
http://www.thespectrumofriemannium.com/tag/classical-statistical-mechanics/ | ## LOG#156. Superstatistics (I).
This post is the first of three dedicated to some of my followers. Those readers from Mexico (a nice country, despite the issues and particularities it has, as the one I live in…), ;). Why? Well, …Firstly, they have proved …
## LOG#146. Path integral (I).
My next thematic thread will cover the Feynman path integral approach to Quantum Mechanics! The standard formulation of Quantum Mechanics is well known. It was built and created by Schrödinger, Heisenberg and Dirac, plus many others, between 1925-1931. Later, it …
https://newproxylists.com/tag/zeta/ | ## Less basic applications of Zeta regularization:
As we all know, zeta regularization is used in quantum field theory and computation regarding the Casimir effect.
Are there less fundamental applications of regularization? By less fundamental, I mean
It appears "naturally" in more than one artificially or purely mathematically idealized scenario.
Thank you!
## nt.number theory – Does this series, linked to the Hasse / Ser series for $\zeta(s)$, converge for all $s \in \mathbb{C}$?
I asked this question at Math Stack Exchange, but it didn't get traction. Still curious to know the answer.
Numerical evidence suggests that:
$$\lim_{N \to +\infty} \sum_{n=1}^{N} \frac{1}{n} \sum_{k=0}^{n} (-1)^k \binom{n}{k} \frac{1}{(k+1)^{s}} = s$$
or equivalently
$$\lim_{N \to +\infty} H(N) + \sum_{n=1}^{N} \left( \frac{1}{n} \sum_{k=1}^{n} (-1)^{k} \binom{n}{k} \frac{1}{(k+1)^{s}} \right) = s$$
with $H(N)$ the $N$-th harmonic number.
Convergence is quite slow, but it is clearly faster for negative $s$. In addition, calculations for non-integer values of $s$ require high-precision settings (I used Maple, Pari/GP and ARB).
However, according to Mathematica, the series diverges by the "harmonic series test", although for integer $s$ it agrees that there is convergence.
Does this series converge for all $s \in \mathbb{C}$?
Some numerical results below:
``````s=0.5
0.497702121, N = 100
0.499804053, N = 1000
0.499905919, N = 2000
s=-3.1415926535897932385
-3.14160222, N = 100
-3.14159284, N = 1000
-3.14159272, N = 2000
s=2.3-2.1i
2.45310498 - 1.94063637i, N = 100
2.33501943 - 2.09308517i, N = 1000
2.31996958 - 2.09923503i, N = 2000
``````
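For anyone who wants to reproduce the table above, here is a small sketch assuming the `mpmath` library; the working precision and the cutoff $N$ are my own choices, and high precision is needed because the inner alternating sums cancel heavily:

```python
from mpmath import mp, mpf, binomial, power

mp.dps = 120   # generous precision; the binomial sums lose many digits to cancellation

def partial_sum(s, N):
    total = mpf(0)
    for n in range(1, N + 1):
        inner = sum((-1)**k * binomial(n, k) / power(k + 1, s)
                    for k in range(0, n + 1))
        total += inner / n
    return total

print(partial_sum(mpf('0.5'), 200))             # should drift towards 0.5
print(partial_sum(mp.mpc('2.3', '-2.1'), 200))  # towards 2.3 - 2.1i
```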
## reference request – Double sum for $\zeta(3)$ and $\zeta(5)$
I found the following double-sum representations for $\zeta(3)$ and $\zeta(5)$:
$$\zeta(3) = \frac{1}{2} \sum_{i,j \geq 1} \frac{\beta(i,j)}{ij}$$
$$\zeta(5) = \frac{1}{4} \sum_{i,j \geq 1} \frac{H_{i} H_{j} \, \beta(i,j)}{ij}$$
where $\beta(\cdot,\cdot)$ denotes the beta function and $H_{i}$ denotes the $i$-th harmonic number.
Are these results known in the literature? If yes, please provide some references / proofs for the same.
## Asymptotic of the Hurwitz zeta function
Can anyone please help me with a reference for the asymptotics (or just upper bounds) of the Hurwitz zeta function $\zeta(s,z)$ as $|t| \rightarrow \infty$, with $\operatorname{Re}(z) > 0$, $s = \sigma + it$ and $\sigma < 0$? I have only found bounds for real $z$.
## nt.number theory – Doubt on the proof of the irrationality of $\zeta(3)$
I read this article by Henri Cohen on Apéry's proof of the irrationality of $\zeta(3)$, but I don't really get the details of "THEOREM 1".
My first doubt concerns the relation $a_n \sim A \alpha^n n^{-3/2}$.
I know that if $a_n$ satisfied the relation $a_n - 34a_{n-1} + a_{n-2} = 0$, then, since its characteristic polynomial is $x^2 - 34x + 1$ and $\alpha$ is one of its roots, denoting by $\bar{\alpha}$ the second root we would have $a_n = A_1 \alpha^n + A_2 \bar{\alpha}^n$.
Then, since $0 < \bar{\alpha} < 1$, we have that $a_n / \alpha^n \longrightarrow A_1$.
However, the relation for $a_n$ is
$$a_n - (34 - 51n^{-1} + 27n^{-2} - 5n^{-3})\,a_{n-1} + (n-1)^3 n^{-3}\, a_{n-2} = 0$$
and I don't know how to deal with the additional terms.
Also, how does one get the extra $n^{-3/2}$ factor?
Second, why does this relation imply that $\zeta(3) - a_n/b_n = O(\alpha^{-2n})$?
After that, it remains to show, using the prime number theorem, that $\log d_n \sim n$, where $d_n = \operatorname{lcm}(1,2,\cdots,n)$.
I managed to prove that
$$\frac{\log d_n}{n} \leq \pi(n)\, \frac{\log n}{n}$$
but I am unable to prove that $\log d_n / n$ converges to $1$.
Finally, I don't see how this last result implies that for every $\varepsilon > 0$ we have
$$\zeta(3) - \frac{p_n}{q_n} = O(q_n^{-r-\varepsilon})$$
I'm not really good at asymptotic behavior and big-O notation, so I would really appreciate it if someone could respond with rigorous and detailed explanations.
Thank you so much.
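For what it is worth, the recurrence itself is easy to run exactly with rational arithmetic, and one can watch $a_n/b_n$ approach $\zeta(3)$. A sketch follows; the starting values $a_0 = 0$, $a_1 = 6$, $b_0 = 1$, $b_1 = 5$ are the standard ones and are my assumption here, not taken from Cohen's note:

```python
from fractions import Fraction

def apery(N):
    """Iterate n^3 u_n = (34n^3 - 51n^2 + 27n - 5) u_{n-1} - (n-1)^3 u_{n-2}
    for both Apery sequences and return the convergents a_n / b_n."""
    a = [Fraction(0), Fraction(6)]
    b = [Fraction(1), Fraction(5)]
    for n in range(2, N + 1):
        c = 34*n**3 - 51*n**2 + 27*n - 5
        a.append((c*a[n-1] - (n-1)**3 * a[n-2]) / Fraction(n**3))
        b.append((c*b[n-1] - (n-1)**3 * b[n-2]) / Fraction(n**3))
    return [x / y for x, y in zip(a, b)]

print(float(apery(10)[-1]))   # ~1.2020569..., i.e. close to zeta(3)
```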
## limits and convergence – Is this sum $(1-\frac{1}{2^s})^{(1-\frac{1}{3^s})^{\cdots^{(1-\frac{1}{p^s})}}}$ also linked to the Riemann zeta function for $\operatorname{Re}(s) > 0$?
I asked this question a month ago on SE but it got no answer. I would like help from MO researchers.
I am interested in iterated exponentials of the form $(z_1)^{(z_2)^{\cdots^{(z_k)}}}$, where $z_1, z_2, \ldots, z_k$ are different real exponents; this type of tower has been studied by many authors, such as Barrow (Infinite exponentials, Amer. Math. Monthly 28 (1921), pp. 141-143). For the Euler product, which is linked to the Riemann zeta function, we have the well-known identity $$\prod_{p \in \mathbb{P}} (1-p^{-s}) = \frac{1}{\zeta(s)},$$ and for $s = 2$ this product gives $\frac{6}{\pi^2}$. Now I am thinking of turning this product into an exponential tower as follows: $$S_p = (1-\tfrac{1}{2^s})^{(1-\tfrac{1}{3^s})^{\cdots^{(1-\tfrac{1}{p^s})}}}.$$ I want to know whether this iterated exponential is linked to the Riemann zeta function, as the Euler product is, for a sufficiently large prime $p$.
Taking the logarithm of this tower, I arrive at the following identity:
$$S_p = \exp\Big((1-2^{-s}) \sum_{p \geq 3} (1-p^{-s})\Big).$$ From this identity I don't know the relationship between the sum over $p \geq 3$ in the exponent and the Riemann zeta function. One observation I have is that for $s = 1$ and $s = 2$ this sum is bounded by $\frac{1}{\zeta(s)}$ as $p \to \infty$: for $s = 1$, $S_p \leq \frac{1}{\zeta(2)}$, and for $s = 2$ we have $S_p \leq \frac{1}{\zeta^2(2)}$.
## nt.number theory – Can this quantity be expressed as $x \cdot \zeta(k) + y$, with $x, y \in \mathbb{Q}$?
For each natural number $a$ consider the sequence $l(a) := \left(\frac{\gcd(a,b)}{a+b}\right)_{b \in \mathbb{N}}$.
I then calculated that, for $k \ge 2$, $k \in \mathbb{R}$, and $p$ prime, we have:
$$|l(1)|_k^k = \zeta(k) - 1$$
$$|l(p)|_k^k = \frac{2p^k - 1}{p^k}\, \zeta(k) - \left(1 + \sum_{j=1}^{p-1} \frac{1}{j^k}\right)$$
I also calculated this for $n = 4$:
$$|l(4)|_k^k = \zeta(k)\left(3 - \frac{1}{4^k} - \frac{2}{2^k} + \frac{1}{2^{2k}}\right) - 3 - \frac{1}{3^k}$$
My question is whether $|l(a)|_k^k = x\,\zeta(k) + y$ with $x, y \in \mathbb{Q}$?
$$\langle l(1), l(2) \rangle = \sum_{k=1}^{\infty} \frac{3k+1}{2k(k+1)(2k+1)}$$
Is this last quantity equal to $\log(2)$?
## riemann zeta function – A counterexample to the conjecture below?
This question is a clarification of my recent closed question; an example satisfying this conjecture for $n = 180$ is mentioned there, and the smallest integer $k$ satisfying this conjecture occurs for $n = 3$, namely $k = 60480$, as in this example.
Conjecture: Let $k$ and $\alpha$ be coprime positive integers such that
$$4^{-n}\, \zeta(2n) = \frac{\pi^{2n} \alpha}{k}.$$ Then $k$ is always divisible by each of the integers from $1$ to $9$, for every $n > 2$.
Now my question here: is there a counterexample to this conjecture? And if it is true, how can I prove it?
## analytical number theory – Contour integration involving the Zeta function
I am trying to calculate the integral of the contour
$$\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \zeta^2(\omega)\, \frac{8^{\omega}}{\omega}\, d\omega$$
where $c > 1$ and $\zeta(s)$ is the Riemann zeta function.
Using Perron's formula and defining $D(x) = \sum_{n \leq x} \sigma_0(n)$, where $\sigma_0$ is the usual divisor-counting function, we can show that
$$D(x) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \zeta^2(\omega)\, \frac{x^{\omega}}{\omega}\, d\omega.$$
So for this purpose we can just calculate $D(8)$ and call it a day. However, for my own needs, I want to define $D(x)$ by the integral above instead. That is why I state the problem for a specific case, $x = 8$, for example.
Considering a modified Bromwich contour which avoids the branch cut and $z = 0$ (call it $\mathcal{B}$), we can apply the Cauchy residue theorem:
$$\oint_{\mathcal{B}} \zeta^2(\omega)\, \frac{8^{\omega}}{\omega}\, d\omega = 2\pi i \operatorname*{Res}\Big(\zeta^2(\omega)\, \frac{8^{\omega}}{\omega};\, 1\Big) = 8(-1 + 2\gamma + \ln 8)$$
where $\gamma$ is the Euler-Mascheroni constant. I got this by expanding $\zeta^2(\omega)\frac{8^{\omega}}{\omega}$ in its Laurent series. To obtain the desired integral, one would then need to take the parts of the contour which are not the vertical line from $c - iR$ to $c + iR$, subtract them from the residue value obtained, and then take the limit as $R \to \infty$ and $r \to 0$, where $C_r$ is the circle of radius $r$ along which $\mathcal{B}$ dodges the origin.
Feel free to modify this contour in any shape or form, or to consider a positive integer value of $x$ other than $8$.
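As a numerical anchor (plain Python, just a sketch): the residue at $\omega = 1$ gives the main term $x(\ln x + 2\gamma - 1)$, and comparing it with the raw divisor sum shows the size of what the remaining pieces of the contour must contribute.

```python
from math import log

def D(x):
    """Divisor summatory function: sum of sigma_0(n) for n <= x."""
    return sum(sum(1 for d in range(1, n + 1) if n % d == 0)
               for n in range(1, x + 1))

gamma = 0.5772156649015329                   # Euler-Mascheroni constant
residue_term = 8 * (-1 + 2 * gamma + log(8))

print(D(8))                   # 20
print(round(residue_term, 3)) # ~17.871
```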
When I tried to find bounds for $\zeta(0.5 + it)$ using certain transformations of the Gamma function, based on the function $f(x) = \exp(-nx)$ over the whole range $(0, +\infty)$, for $\operatorname{Re}(s) = \frac12$ and $t > 0$ I arrived at the following bound for $\zeta(0.5 + it)$: for $t \geq 1.22$,
$$|\zeta(0.5 + it)| \leq 0.5\, \frac{|\Gamma(0.5 + it)|}{|\Gamma(-0.5 + it)|} \tag{1}$$
As for bounds on $\Gamma(s)$, we find that $|\Gamma(s)|$ is monotonically increasing with respect to the real part of $s$ for $|t| \geq 5/4$, and that this fails for $|t| \leq 1$; see the article "On the horizontal monotonicity of $|\Gamma(s)|$" by Gopala Krishna Srinivasan and P. Zvengrowski. In the introduction of that paper, $|\Gamma(s)|$ is given for $s = \sigma + it$ by the formula
$$|\Gamma(\sigma + it)| = \lambda\, \frac{\Gamma(1+\sigma)}{\sqrt{\sigma^2 + t^2}} \sqrt{\frac{2\pi t}{\exp(\pi t) - \exp(-\pi t)}} \tag{2}$$
The right-hand side of this formula seems to involve a hyperbolic function. Now, when I tried to plug this formula into the RHS of my bound, it gave me a complicated expression with no obvious simplification. My question here: how can I simplify the RHS of $(1)$, if it is correct?
https://math.iitm.ac.in/event/view/226 | Department of Mathematics
Indian Institute Of Technology Madras , Chennai
OPEN BOOKS FOR CLOSED NON-ORIENTABLE 3-MANIFOLDS
Abstract :
In this talk, I am going to give a proof of the existence of an open book decomposition for a closed non-orientable 3-manifold. This open book decomposition is analogous to a planar open book decomposition of a closed orientable 3-manifold. More precisely, we will give an open book decomposition of a given closed non-orientable 3-manifold whose pages are punctured Möbius bands. If time permits, I will also give an algorithm to determine the monodromy of this open book. This is joint work with Suhas Pandit and Abhijeet Ghanwat.
Key Speaker Mr. Selvam A.
Place NAC 522
Start Time 3:00 PM
Finish Time 4:00 PM
http://www.zora.uzh.ch/34365/ | The orbital evolution induced by baryonic condensation in triaxial haloes
Valluri, M; Debattista, V P; Quinn, T; Moore, B (2010). The orbital evolution induced by baryonic condensation in triaxial haloes. Monthly Notices of the Royal Astronomical Society, 403(1):525-544.
Abstract
Using spectral methods, we analyse the orbital structure of prolate/triaxial dark matter (DM) haloes in N-body simulations in an effort to understand the physical processes that drive the evolution of shapes of DM haloes and elliptical galaxies in which central masses are grown. A longstanding issue is whether the change in the shapes of DM haloes is the result of chaotic scattering of the major family of box orbits that serves as the backbone of a triaxial system, or whether they change shape adiabatically in response to the evolving galactic potential. We use the characteristic orbital frequencies to classify orbits into major orbital families, to quantify orbital shapes and to identify resonant orbits and chaotic orbits. The use of a frequency-based method for distinguishing between regular and chaotic N-body orbits overcomes the limitations of Lyapunov exponents which are sensitive to numerical discreteness effects. We show that regardless of the distribution of the baryonic component, the shape of a DM halo changes primarily due to changes in the shapes of individual orbits within a given family. Orbits with small pericentric radii are more likely to change both their orbital type and shape than orbits with large pericentric radii. Whether the evolution is regular (and reversible) or chaotic (and irreversible), it depends primarily on the radial distribution of the baryonic component. The growth of an extended baryonic component of any shape results in a regular and reversible change in orbital populations and shapes, features that are not expected for chaotic evolution. In contrast, the growth of a massive and compact central component results in chaotic scattering of a significant fraction of both box and long-axis tube orbits, even those with pericentre distances much larger than the size of the central component. Frequency maps show that the growth of a disc causes a significant fraction of halo particles to become trapped by major global orbital resonances. We find that despite the fact that shape of a DM halo is always quite oblate following the growth of a central baryonic component, a significant fraction of its orbit population has the characteristics of its triaxial or prolate progenitor.
Item Type: Journal Article, refereed, original work. Faculty of Science > Institute for Computational Science. Physics (530), English, March 2010. Publisher: Wiley-Blackwell, ISSN 0035-8711. DOI: 10.1111/j.1365-2966.2009.16192.x. Preprint: http://arxiv.org/abs/0906.4784
Permanent URL: http://doi.org/10.5167/uzh-34365
http://math.stackexchange.com/questions/187/euclidean-tilings-that-are-uniform-but-not-vertex-transitive | # Euclidean Tilings that are Uniform but not Vertex-Transitive
Basic definitions: a tiling of d-dimensional Euclidean space is a decomposition of that space into polyhedra such that there is no overlap between their interiors, and every point in the space is contained in some one of the polyhedra.
A vertex-uniform tiling is a tiling such that each vertex figure is the same: each vertex is contained in the same number of k-faces, etc: the view of the tiling is the same from every vertex.
A vertex-transitive tiling is one such that for every two vertices in the tiling, there exists an element of the symmetry group taking one to the other.
Clearly all vertex-transitive tilings are vertex-uniform. For n=2, these notions coincide. However, Grunbaum, in his book on tilings, mentions but does not explain that for n >= 3, there exist vertex uniform tilings that are not vertex transitive. Can someone provide an example of such a tiling, or a reference that explains this?
Could you clarify vertix-transitive a bit more? – Casebash Jul 21 '10 at 6:15
I'm not sure I understand your definition of vertex-uniform. Could you clarify? – Qiaochu Yuan Jul 28 '10 at 7:45
ok, sorry guys; closed this question because I was being stupid and, as asked, it's not a real question. – Jamie Banks Aug 5 '10 at 23:08
Transitive action on the vertices is usually the definition of "looking the same from each vertex". So maybe you have two different groups in mind:
1. The automorphism group of the combinatorics of the tiling; the abstract structure of vertices, faces, edges, etc.
2. The group of Euclidean motions that leave the tiling invariant. Motions meaning isometries of space, where one should also specify whether orientation-reversal is allowed or not. Any instance of this is also an automorphism of the tiling combinatorics.
If tilings that are vertex-transitive under group #1 are "vertex-uniform" and those that are vertex-transitive in the stricter sense of group #2 are "vertex transitive", then it is easy to give examples where some of the polyhedra are deformations of others (so not isometrically transitive in sense #2). For example, a one-dimensional periodic tiling with several different sizes of interval will have all vertices equivalent combinatorially but a finite number of distinct vertex types under geometric equivalence. This is maybe too simple to be what Grunbaum had in mind, can you quote the book?
Let me discuss an analogous situation with regard to convex polyhedra.
Archimedes, generalizing what are often called the Platonic Solids or the convex regular polyhedra (there are five of them), seems to have discovered 13 polyhedra with the property that the pattern of faces around each vertex was the same for every vertex of the polyhedron, and all of the faces of the polyhedron were regular polygons. I say seems to have done this because the manuscript that describes what he did is lost. Pappus, who lived many years after Archimedes, describes these polyhedra explicitly and mentions 13 convex solids. Years later many artists and mathematicians talked about these polyhedra. Kepler explicitly mentions two infinite families of polyhedra which have this property: the prisms (consisting of two regular n-gons and n squares) and the anti-prisms (consisting of two regular n-gons and 2n equilateral triangles). Kepler purports to give a proof that there are 13 such solids (with at least two types of faces). I say purports because there are in fact 14 convex polyhedra which meet the local symmetry condition described above. The one Archimedes, Pappus, Kepler and others missed is often called the pseudo-rhombicuboctahedron. (Kepler created confusion in referring to 14 solids in a place other than where he gave his "proof.")
http://en.wikipedia.org/wiki/Elongated_square_gyrobicupola
Many books to this day continue to talk about 13 Archimedean convex solids (other than the prisms and antiprisms). With the definition that Archimedes almost certainly had in mind this is wrong - there are 14 such solids. However, if one considers convex polyhedra with at least two regular polygons as faces, and where the symmetry group of the solid is transitive on the vertices (that is, the symmetry group can take any vertex to any other vertex) then there are only 13 such solids. However, almost certainly Archimedes had no knowledge of the idea of a symmetry group in the modern sense. Depending on whether one uses a local symmetry notion or a global symmetry notion there are either 14 or 13 such convex solids (aside from the prisms and antiprisms).
https://thethong.wordpress.com/category/write-up/ | ## Archive for the ‘Write up’ Category
### Information Criterion
This post will be my summary of the Akaike Information Criterion (AIC) and the Takeuchi Information Criterion (TIC). In particular, a derivation of AIC and TIC is shown. And if I can understand more about the Generalized Information Criterion, I will cover it too.
### Formulating the PCA
Today I have thought about how one can formulate the Principal Component Analysis (PCA) method. In particular I want to reformulate PCA as the solution of a regression problem. The idea of reformulating PCA as the solution of a regression problem is useful in Sparse PCA, in which an $L_1$ regularization term is inserted into a ridge regression formulation to enforce sparseness of the coefficients (i.e. the elastic net). There are at least two equivalent ways to motivate PCA. In this post I will first give a formulation of PCA based on orthogonal projection, and then discuss a regression-type reformulation of PCA.
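As a concrete illustration of the two views, here is a small NumPy sketch (my own, not from the original post; the variable names are mine). It computes the top-$k$ principal directions by SVD and checks that the orthogonal projection onto them agrees with a least-squares, regression-type fit of the data onto its own scores:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # toy data: 200 samples, 5 features
Xc = X - X.mean(axis=0)                # centre the data

# PCA via SVD: rows of Vt are the principal directions
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
V_k = Vt[:k].T                         # top-k principal directions (5 x k)

# Orthogonal-projection view: best rank-k reconstruction of the centred data
X_hat = Xc @ V_k @ V_k.T

# Regression-type view: regress the data on its own principal scores
scores = Xc @ V_k
beta, *_ = np.linalg.lstsq(scores, Xc, rcond=None)
print(np.allclose(X_hat, scores @ beta))   # True: the two views coincide
```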
### Nonparametric Bayesian Seminar 1 : Notes
(The main purpose is to write this down so that I do not forget it, so it will be messy. There are no figures.)
Paper: Introduction to Nonparametric Bayesian Models (Naonori Ueda, Takeshi Yamada)
### Q
(These are some thoughts I got while reading the inspiring book by James D. Watson “Avoid boring people and other lessons from a life in science.” Maybe I will give a full summary on the lessons from the book later.)
### Mathematics and the Unexpected
(Just some thoughts while I read Mathematics and the Unexpected and Innumeracy: Mathematical Illiteracy and Its Consequences. Maybe some more detailed post later.)
http://math.stackexchange.com/questions/36483/monotonicity-of-fn-frac1n-sum-i-1n-1-frac1i/36489 | # Monotonicity of $f(n)= \frac{1}{n} \sum_{i=1}^{n-1} \frac{1}{i}$
Define $f: \mathbb{N} \rightarrow \mathbb{R}$ as $f(n)= \frac{1}{n} \sum_{i=1}^{n-1} \frac{1}{i}$.
I was wondering how to tell if $f$ is an increasing or decreasing function?
Thanks and regards!
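A quick numerical check (a small Python sketch, not part of the original question) already suggests the answer: the values tie at $n=2,3$ and then strictly decrease.

```python
from fractions import Fraction

def f(n):
    return sum(Fraction(1, i) for i in range(1, n)) / n

vals = [f(n) for n in range(2, 9)]
print(vals)                                           # 1/2, 1/2, 11/24, 5/12, ...
print(all(a >= b for a, b in zip(vals, vals[1:])))    # True: the sequence never increases
```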
You are taking the average of things that keep getting smaller so... – Jonas Teuwen May 2 '11 at 19:06
Maybe you take the difference $f(n)-f(n-1)$ and try to figure out whether it is positive or negative... – Fabian May 2 '11 at 19:10
@Tim:Do you really want the upper limit on the sum to be $n-1$? – Chris Leary May 2 '11 at 19:10
@Chris: yes, I do. – Tim May 2 '11 at 19:14
@Tim: Then you take the last term $0$, that doesn't change much about the argument. – Jonas Teuwen May 2 '11 at 19:24
A formal proof would be \begin{align} f(n+1) - f(n) &= \frac{1}{n+1} \sum_{i=1}^{n} \frac{1}{i} - \frac{1}{n} \sum_{i=1}^{n-1} \frac{1}{i} \\ &= \frac{n}{n+1}\frac{1}{n} \left(\sum_{i=1}^{n-1} \frac{1}{i} + \frac{1}{n}\right) - \frac{1}{n} \sum_{i=1}^{n-1} \frac{1}{i} \\ &= (\frac{n}{n+1} - 1)\frac{1}{n} \sum_{i=1}^{n-1} \frac{1}{i} + \frac{1}{n(n+1)} \\ & = \frac{1}{n(n+1)} \left(1 - \sum_{i=1}^{n-1} \frac{1}{i}\right) < 0 \end{align} for $\forall n \geq 3$
Actually for $n=2$ the expression is equal to 0. – Fabian May 2 '11 at 19:24
Thanks Fabian, corrected the mistake – Shuhao Cao May 2 '11 at 19:26
your welcome. +1 nice answer – Fabian May 2 '11 at 19:31
A simpler proof would be to notice that $\displaystyle f(n) \gt \frac{1}{n}$ (for $n \gt 2$)
Thus $\displaystyle (n+1)f(n+1) - nf(n) = \frac{1}{n} \lt f(n)$ and so $f(n) \gt f(n+1)$.
Nice, simple argument. – Jonas Teuwen May 2 '11 at 22:25
Thanks! Nice indeed. I wish I were able to accept multiple answers. Why isn't it possible on SE sites? – Tim May 3 '11 at 1:13
@Tim/Jonas: Thanks! @Tim: I guess the intent is to have one answer which will make it easier for people who come across this later. We can always edit the accepted answer to have multiple proofs, I guess. – Aryabhata May 3 '11 at 2:13
The following 3 conditions are equivalent:
$$f(n)>f(n+1)$$
$$\frac{1+\frac12+\dots+\frac1{n-1}}n>\frac{1+\frac12+\dots+\frac1{n}}{n+1}$$
$$n+\frac{n+1}2+\dots+\frac{n+1}{n-1}+1>n+\frac n2+\dots+\frac n{n-1}+1$$
In the last inequality, the corresponding terms on the LHS are greater than (or equal to) the corresponding terms on the RHS. (They both have the same number of terms.) At least one of these inequalities is strict.
EDIT: (From the comments I see that this was not clear enough.)
There is the same number of terms, since I divided $n+1$ (obtained by multiplying the first term in the second inequality) between $n$ and $1$ (the first and the last term in the LHS).
Consider: \begin{align*} n(n+1)(f(n)-f(n+1))&=n(n+1)\left(\frac{1}{n}\sum_{i=1}^{n-1}\frac{1}{i}-\frac{1}{n+1}\sum_{i=1}^{n}\frac{1}{i}\right)\\ &=\sum_{i=1}^{n-1}\left(\frac{n+1}{i}-\frac{n}{i}\right) - 1\\ &=\sum_{i=1}^{n-1}\frac{1}{i} - 1\\ &\geq 0\end{align*} for $n\geq2$.
In particular, $f(n)-f(n+1)\geq 0$ for all $n\geq 2$, so $f$ is non-increasing.
Edit: Little writing error.
Your calculation doesn't seem to be correct... – Fabian May 2 '11 at 19:25
There is a term missing in the second line, since the last sum includes the $n$th term but the first one does not. That is, the second line should be $$\left(\sum_{i=1}^{n-1}\left(\frac{n+1}{i}-\frac{n}{i}\right)\right) - 1.$$ – Arturo Magidin May 2 '11 at 19:27
What you do with the term $i=n$ in the second sum? – Fabian May 2 '11 at 19:28
I do apologize, I forgot the term "-1". Now I edited the answer and it should be correct. – Giovanni De Gaetano May 3 '11 at 9:03
http://math.stackexchange.com/questions/131043/is-there-a-shortcut-for-calculating-summations-such-as-this | # Is there a shortcut for calculating summations such as this? [duplicate]
Possible Duplicate:
Computing $\sum_{i=1}^{n}i^{k}(n+1-i)$
I'm curious in knowing if there's an easier way for calculating a summation such as
$\sum_{r=1}^nr(n+1-r)$
I know the summation $\sum_{r=1}^xr(n+1-r)$ is going to be a cubic equation, which I should be able to calculate by taking the first few values to calculate all the coefficients. Then I can plug in $n$ for $x$ to get the value I'm looking for. But it seems like there must be an easier way.
In the meantime, I'll be calculating this the hard way.
## marked as duplicate by Ross Millikan, Pedro Tamaroff♦, Kannappan Sampath, Guess who it is., Zev Chonoles Apr 24 '12 at 13:19
If you already know the summations for consecutive integers and consecutive squares, you can do it like this:
\begin{align*} \sum_{r=1}^n r(n+1-r)&=\sum_{r=1}^nr(n+1)-\sum_{r=1}^nr^2\\ &=(n+1)\sum_{r=1}^nr-\sum_{r=1}^nr^2\\ &=(n+1)\frac{n(n+1)}2-\frac{n(n+1)(2n+1)}6\\ &=\frac16n(n+1)\Big(3(n+1)-(2n+1)\Big)\\ &=\frac16n(n+1)(n+2)\;. \end{align*}
Added: Which is $\dbinom{n+2}3$, an observation that suggests another way of arriving at the result. First, $r$ is the number of ways to pick one number from the set $\{0,\dots,r-1\}$, and $n+1-r$ is the number of ways to pick one number from the set $\{r+1,r+2,\dots,n+1\}$. Suppose that I pick three numbers from the set $\{0,\dots,n+1\}$; the middle number of the three cannot be $0$ or $n+1$, so it must be one of the numbers $1,\dots,n$. Call it $r$. The smallest number must be from the set $\{0,\dots,r-1\}$, and the largest must be from the set $\{r+1,r+2,\dots,n+1\}$, so there are $r(n+1-r)$ three-element subsets of $\{0,\dots,n+1\}$ having $r$ as middle number. Thus, the total number of three-element subsets of $\{0,\dots,n+1\}$ is $$\sum_{r=1}^nr(n+1-r)\;.$$ But $\{0,\dots,n+1\}$ has $n+2$ elements, so it has $\dbinom{n+2}3$ three-element subsets, and it follows that
$$\sum_{r=1}^nr(n+1-r)=\binom{n+2}3=\frac{n(n+1)(n+2)}6\;.$$
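The closed form is easy to check by brute force for small $n$ (a short Python sketch, not part of the original answer):

```python
from math import comb

def lhs(n):
    return sum(r * (n + 1 - r) for r in range(1, n + 1))

print(all(lhs(n) == comb(n + 2, 3) == n * (n + 1) * (n + 2) // 6
          for n in range(1, 50)))          # True
```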
@anon: Yep, I just caught that. Thanks. – Brian M. Scott Apr 12 '12 at 21:02
I know the formulas for the summation of consecutive integers and consecutive cubes, just not consecutive squares. And it seems like the long way got me the wrong answer. Guess I have a mistake to find... – Mike Apr 12 '12 at 21:18
Ugh... Just saw the added part. That I should have figured out sooner and avoided the summation in the first place. – Mike Apr 12 '12 at 21:31
$\binom{n+2}3$ is also the number of ways to choose 3 not necessarily unique elements from the set $\{1,...,n\}$, which turned out to be more or less what I was doing. – Mike Apr 12 '12 at 22:36
Yes, linearity and a few memorized formulas:
$$\begin{array}{c l} \sum_{r=1}^n r(n+1-r) &=(n+1)\left(\sum_{r=1}^n r\right)-\sum_{r=1}^n r^2 \\ & = (n+1)\frac{n(n+1)}{2}-\frac{n(n+1)(2n+1)}{6} \\ & = \frac{n(n+1)(n+2)}{6}.\end{array}$$
Note that $$r(n+1-r)=n\binom{r}{1}-2\binom{r}{2}\tag{1}$$ Using the identity $$\sum_r\binom{n-r}{a}\binom{r}{b}=\binom{n+1}{a+b+1}\tag{2}$$ with $a=0$, we can sum $(1)$ for $r$ from $1$ to $n$ and get $$n\binom{n+1}{2}-2\binom{n+1}{3}\tag{3}$$ Formula $(3)$ can be manipulated into more convenient forms, e.g. $$\left((n+2)\binom{n+1}{2}-2\binom{n+1}{2}\right)-2\binom{n+1}{3}\\[18pt] 3\binom{n+2}{3}-\left(2\binom{n+1}{2}+2\binom{n+1}{3}\right)\\[18pt] 3\binom{n+2}{3}-2\binom{n+2}{3}$$ $$\binom{n+2}{3}\tag{4}$$
Guessing that k in (2) should be an r? – Mike Apr 13 '12 at 0:48
@Mike: indeed. thanks – robjohn Apr 13 '12 at 2:11
https://pacific.com.vn/archive/square-root-of-numbers-186e77 | ## square root of numbers
The square root of a number is the value which, when multiplied by itself, gives the original number; for example, the principal square root of 9 is 3. When talking of the square root of a positive integer, it is usually the positive square root that is meant, and the square roots of the perfect squares (e.g., 0, 1, 4, 9, 16) are integers. No negative number has a real square root, since squaring a negative number gives a positive result; square roots of negative numbers are handled by introducing the imaginary unit i, defined such that i^2 = -1. It was known to the ancient Greeks that square roots of positive integers that are not perfect squares are always irrational numbers: numbers not expressible as a ratio m/n of two integers. The particular case of the square root of 2 is assumed to date back to the Pythagoreans and is traditionally attributed to Hippasus. The Yale Babylonian Collection YBC 7289 clay tablet, created between 1800 BC and 1600 BC, records the sexagesimal value (1;24,51,10), which corresponds to 1.41421296 and is correct to 5 decimal places (1.41421356...).
What if there is no calculator or a smartphone handy? One pencil-and-paper method separates the number's digits into pairs moving from right to left and builds the root digit by digit, choosing at each step the largest possible integer that keeps the product less than or equal to the number on the left. Another approach is iterative: averaging an overestimate of the square root with the number divided by that estimate always gives a new, smaller overestimate, and repeating the process converges to the square root. This uses the same iterative scheme that the Newton-Raphson method yields when applied to the function y = f(x) = x^2 - a, whose slope at any point is dy/dx = f'(x) = 2x, but predates it by many centuries. The shifting nth root algorithm, applied for n = 2, is another useful method, and when computing square roots with logarithm tables or slide rules one can exploit logarithmic identities.
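The repeated-averaging scheme described above is short to write down; a minimal sketch (my own illustration, not taken from the original page):

```python
def square_root(n, tolerance=1e-12):
    """Heron's method: repeatedly average an overestimate with n divided by it."""
    estimate = float(n) if n >= 1 else 1.0     # any overestimate works as a start
    while abs(estimate * estimate - n) > tolerance * max(n, 1.0):
        estimate = 0.5 * (estimate + n / estimate)
    return estimate

print(square_root(2))          # ~1.4142135623730951
print(square_root(152.2756))   # ~12.34
```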
http://mathhelpforum.com/algebra/65350-write-th-folowing-logarithmic-form.html | Thread: Write th folowing in Logarithmic Form
1. Write the following in Logarithmic Form
I am really stuck with this problem, any help would be greatly appreciated
d^5/f^2=h^0.25(c/e-g)^3
Thanks,
JP
2. Hello, JP!
The instructions are rather vague.
I assume we are to take logs ... then "expand" the expression.
$\frac{d^5}{f^2} \:=\:h^{\frac{1}{4}}\left(\frac{c}{e-g}\right)^3$
We have: . $\log\left(\frac{d^5}{f^2}\right) \;=\;\log\left[h^{\frac{1}{4}}\left(\frac{c}{e-g}\right)^3\right]$
. . . $\log(d^5) - \log(f^2) \;=\;\log\left(h^{\frac{1}{4}}\right) + \log\left(\frac{c}{e-g}\right)^3$
. . . $5\log(d) - 2\log(f) \;=\;\frac{1}{4}\log(h) + 3\log\left(\frac{c}{e-g}\right)$
. . . $5\log(d) - 2\log(f) \;=\;\tfrac{1}{4}\log(h) + 3\bigg[\log(c) - \log(e-g)\bigg]$
. . . $5\log(d) - 2\log(f) \;=\;\tfrac{1}{4}\log(h) + 3\log(c) - 3\log(e-g)$
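For anyone who wants to double-check the expansion by machine, here is a small SymPy sketch (my own addition; `force=True` tells SymPy to split the logarithms without worrying about sign assumptions):

```python
from sympy import symbols, log, expand_log, Rational

c, d, e, f, g, h = symbols('c d e f g h', positive=True)

lhs = log(d**5 / f**2)
rhs = log(h**Rational(1, 4) * (c / (e - g))**3)

print(expand_log(lhs, force=True))   # 5*log(d) - 2*log(f)
print(expand_log(rhs, force=True))   # expect log(h)/4 + 3*log(c) - 3*log(e - g)
```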
3. thanks very much on that one
https://www.semanticscholar.org/paper/Multiplicative-Number-Theory-Davenport/85307f488d85a730edacb2f6e27b7778ceb8890e | # Multiplicative Number Theory
@inproceedings{Davenport1967MultiplicativeNT,
title={Multiplicative Number Theory},
author={Harold Davenport},
year={1967}
}
From the contents: Primes in Arithmetic Progression.- Gauss' Sum.- Cyclotomy.- Primes in Arithmetic Progression: The General Modulus.- Primitive Characters.- Dirichlet's Class Number Formula.- The Distribution of the Primes.- Riemann's Memoir.- The Functional Equation of the L Function.- Properties of the Gamma Function.- Integral Functions of Order 1.- The Infinite Products for xi(s) and xi(s, chi).- Zero-Free Region for zeta(s).- Zero-Free Regions for L(s, chi).- The Number N(T).- The Number N(T, chi…
2,272 Citations
### The simple zeros of the Riemann zeta-function
The Simple Zeros of the Riemann Zeta-Function by Melissa Miller There have been many tables of primes produced since antiquity. In 348 BC Plato studied the divisors of the number 5040. In 1202
### LIMITING DISTRIBUTIONS AND ZEROS OF ARTIN L-FUNCTIONS
This thesis is concerned with behaviour of some famous arithmetic functions. The first part of the thesis deals with prime number races. Rubinstein-Sarnak [62] developed a technique to study primes
### Small zeros of Dirichlet L-functions of quadratic characters of prime modulus
• Mathematics
• 2018
In this paper, we investigate the distribution of the imaginary parts of zeros near the real axis of Dirichlet $L$-functions associated to the quadratic characters $\chi_{p}(\cdot)=(\cdot |p)$ with
### PRIME POLYNOMIALS IN SHORT INTERVALS AND IN ARITHMETIC PROGRESSIONS
• Mathematics
• 2015
In this paper we establish function field versions of two classical conjectures on prime numbers. The first says that the number of primes in intervals (x, x+x^ε] is about x^ε…
### Generalized divisor functions in arithmetic progressions: I The k-fold divisor function in arithmetic progressions to large moduli
We prove some distribution results for the k-fold divisor function in arithmetic progressions to moduli that exceed the square-root of length X of the sum, with appropriate constrains and averaging
### On Elementary Proofs of the Prime Number Theorem for Arithmetic Progressions, without Characters
We consider what one can prove about the distribution of prime numbers in arithmetic progressions, using only Selberg's formula. In particular, for any given positive integer q, we prove that either
### AN EXTENSION OF THE PAIR-CORRELATION CONJECTURE AND APPLICATIONS
• Mathematics
• 2016
Abstract. We study an extension of Montgomery’s pair-correlation conjecture and its relevance in some problems on the distribution of prime numbers. Keywords. Riemann zeta function, pair correlation
### ON THE IDENTITIES BETWEEN THE ARITHMETIC FUNCTIONS
Abstract. Dirichlet series is a Riemann zeta function attached with an arithmetic function. Here, we studied the properties of Dirichlet series and found some identities between arithmetic functions.
### Discrete Mean Values of Dirichlet L-functions
• Mathematics
• 2015
In 1911 Landau proved an asymptotic formula for sums of the form $\sum_{\gamma\le T} x^{\rho}$ over the imaginary parts of the nontrivial zeros of the Riemann zeta function. The formula provided yet another deep
### A Bombieri-Vinogradov theorem for all number fields
• Mathematics
• 2012
The classical theorem of Bombieri and Vinogradov is generalized to a non-abelian, non-Galois setting. This leads to a prime number theorem of “mixed-type” for arithmetic progressions “twisted” by
https://www.physicsforums.com/threads/nomenclature-question.841434/ | # Homework Help: Nomenclature question
1. Nov 4, 2015
Hey guys, quick question.
I got confused in some nomenclature between some books and wanted to clarify somehow. A book I have has I=Mk^2 as defining a radius of gyration. Is this the same as I-mr^2?
Terminology was just confusing me.
Thanks guys.
2. Nov 4, 2015
Oops, I meant I=mr^2.
apologies
3. Nov 4, 2015
### haruspex
It depends what r is supposed to be. Certainly
"I=Mk2, where M is mass, k is the radius of gyration and I is the moment of inertia about the mass centre"
is exactly the same as
"I=mr2, where m is mass, r is the radius of gyration and I is the moment of inertia about the mass centre".
But most writers reserve r to be a directly observable radius, such as the radius of a hoop or of a point mass in a circular orbit.
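As a concrete illustration of that point (my own numerical sketch, with made-up values): for a thin hoop spun about its axis, I = mr^2, so the radius of gyration k coincides with the directly observable hoop radius r; for most other bodies it does not.

```python
import math

m, r = 2.0, 0.5            # hypothetical hoop: mass in kg, radius in m
I_hoop = m * r**2          # moment of inertia of a thin hoop about its symmetry axis
k = math.sqrt(I_hoop / m)  # radius of gyration, from I = m*k**2
print(k, k == r)           # 0.5 True -- for a hoop the two radii are the same
```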
https://space.stackexchange.com/questions/12660/delta-v-to-low-mars-orbit | # Delta-V to Low Mars Orbit
I'm making a Mars probe in Kerbal Space Program Real Solar System, and I have 13 km/s of Delta-V in the lifter itself. I have a cryogenic second stage with a KVD-1 engine, it has 7.36 km/s of Delta-V, and about 4 km/s is used for the orbital injection. Does my lifter have enough Delta-V? I can upload a picture with the delta-v overall, if that helps.
• "Does my lifter have enough Delta-V?" I'd have thought that KSP was designed to tell you the answer to that. Launch the combo. to find out! Nov 11, 2015 at 4:45
• I've now revised my answer using patch conics and get a very different answer. I will leave my previous analysis up for a bit in case anyone can comment on it. Nov 12, 2015 at 3:52
## 2 Answers
Let's break down your problem into a very simplified launch phase from the ground to low Earth orbit at 250 km altitude. Then we'll use patch conics to get an estimate of the $\Delta v$ necessary to reach a low Mars orbit of 80 km altitude.
One method that I like to use to get ball-park estimates of required launch $\Delta v$ is to first consider an instantaneous impulse on the ground that will give you enough velocity to reach the target orbit altitude (like throwing up a ball with enough speed that it just reaches the orbit altitude at it's peak), then consider another instantaneous impulse at that point that gives you enough velocity to reach orbit.
Potential energy on the ground and in orbit (where $\mu_{Earth}$ is the gravitational parameter for Earth -- $\mu_{Earth} = 398601.2$ $\frac{\text{km}^3}{s^2}$):
$V_{ground} = -\frac{\mu_{Earth}}{r_{ground}}$
$V_{orbit} = -\frac{\mu_{Earth}}{r_{orbit}}$
We'll assume we start at the mean Earth radius of 6378 km (the actual radius at the Kennedy Space Centre could be substituted but probably won't make a significant difference considering our crude analysis). That gives us a difference in potential energy of 2.357 MJ/kg, which translates to a required initial speed of 2.171 km/s.
Once we reach our peak height of 250 km, we'll be at zero velocity and need to accelerate to an orbit velocity of 7.755 km/s based on the following equation.
$v_{orbit} = \sqrt{\frac{\mu_{Earth}}{r_{orbit}}}$
So that gives us a total launch $\Delta v$ of about 9.926 km/s. If you launch from the equator to the East then you will have an initial speed of 0.465 km/s from the Earth's rotation, so that could reduce your launch $\Delta v$ to 9.461 km/s. This is probably about 5-10% higher than the true value, but a good conservative approximation.
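The launch numbers above are easy to reproduce; a small Python sketch using the same constants as this answer (the code itself is my own):

```python
import math

mu_earth = 398601.2        # km^3/s^2
r_ground = 6378.0          # km, mean Earth radius
r_orbit  = 6378.0 + 250.0  # km, 250 km target orbit

# specific energy needed to coast up to orbit altitude (a vertical "throw")
dE = mu_earth * (1.0 / r_ground - 1.0 / r_orbit)   # ~2.357 km^2/s^2 (MJ/kg)
v_up = math.sqrt(2.0 * dE)                         # ~2.171 km/s

v_orbit = math.sqrt(mu_earth / r_orbit)            # ~7.755 km/s circular speed

v_rotation = 0.465                                 # km/s, eastward equatorial launch
print(v_up + v_orbit - v_rotation)                 # ~9.46 km/s
```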
Next we use patch conics to analyze the interplanetary transfer. The figure below shows the interplanetary trajectory using a Hohmann transfer from Earth to Mars in the centre, where we can just assume our heliocentric radii are those of the planets. The spheres of influence for Earth and Mars are expanded on the left and right, respectively, to show the hyperbolic trajectories in each planet's reference frame as well as the planetary orbits.
First we find the semi-major axis of the Mars transfer orbit or MTO, $a_{MTO}$, based on the radii of the orbits of Earth and Mars (which we assume are circular for this analysis).
$a_{MTO} = \frac{1}{2} \left( r_{Earth} + r_{Mars} \right)$
Then we can find the velocities of the spacecraft as it departs Earth and when it reaches Mars.
$v_{MTO,E} = \sqrt{\mu_{Sun} \left( \frac{2}{r_{Earth}} - \frac{1}{a_{MTO}}\right)}$
$v_{MTO,M} = \sqrt{\mu_{Sun} \left( \frac{2}{r_{Mars}} - \frac{1}{a_{MTO}}\right)}$
In the Earth reference frame, we consider the spacecraft to "depart" when it leaves the sphere of influence or SOI (at a radius of about 924000 km). Assume we can set up our hyperbolic escape trajectory so that our velocity in the heliocentric frame is parallel to Earth's. So that means that in the Earth frame, our velocity at the edge of the SOI will be:
$v_{SOI,E} = v_{MTO,E} - v_{Earth}$
Where $v_{Earth} = \sqrt{\frac{\mu_{Sun}}{r_{Earth}}}$
Given the radius and velocity at the SOI we can find the semi-major axis of the escape trajectory as well as the required MTO insertion velocity to leave low Earth orbit:
$a_{MTO,E} = \left( \frac{2}{r_{SOI,E}} - \frac{v^2_{SOI,E}}{\mu_{Earth}}\right)^{-1}$
$v_{Insertion} = \sqrt{\mu_{Earth} \left( \frac{2}{r_{LEO}} - \frac{1}{a_{MTO,E}}\right)}$
Note that our velocity in LEO is:
$v_{LEO} = \sqrt{\frac{\mu_{Earth}}{r_{LEO}}}$
Plugging in our numbers, we get $v_{Insertion} = 11.318$ km/s and $v_{LEO} = 7.755$ km/s, resulting in a required $\Delta v$ of 3.563 km/s to leave LEO and enter MTO.
Next we use the same analysis at Mars, where the right-hand side of the above figure shows the spacecraft hyperbolic rendezvous trajectory as well as the target low Mars orbit.
First we determine the relative velocity that the spacecraft will have at the edge of the sphere of influence (again assuming that the velocity is initially parallel to Mars'):
$v_{SOI,M} = v_{Mars} - v_{MTO,M}$
Next we determine the hyperbolic orbit semi-major axis and rendezvous velocity at the periapsis (which will be at low Mars orbit radius):
$a_{MTO,M} = \left( \frac{2}{r_{SOI,M}} - \frac{v^2_{SOI,M}}{\mu_{Mars}}\right)^{-1}$
$v_{Rendezvous} = \sqrt{\mu_{Mars} \left( \frac{2}{r_{LMO}} - \frac{1}{a_{MTO,M}}\right)}$
Note that our velocity in LMO is:
$v_{LMO} = \sqrt{\frac{\mu_{Mars}}{r_{LMO}}}$
Plugging in our numbers, we get $v_{Rendezvous} = 5.618$ km/s and $v_{LMO} = 3.514$ km/s, resulting in a required $\Delta v$ of 2.105 km/s to enter LMO from the Mars transfer orbit.
With those two manoeuvres, the total $\Delta v$ for interplanetary transfer is 5.668 km/s. Adding the approximated launch $\Delta v$ results in a grand total of 15.129 km/s, which should be within reach of your design.
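For reference, the whole patched-conic estimate fits in a few lines of Python (my own sketch; the gravitational parameters, orbit radii and sphere-of-influence radii below are values I assume, and they reproduce this answer's figures to within a few m/s):

```python
import math

mu_sun, mu_earth, mu_mars = 1.32712e11, 398601.2, 42828.0   # km^3/s^2
r_earth_orbit, r_mars_orbit = 149.6e6, 227.9e6               # km, heliocentric
r_soi_earth, r_soi_mars = 924000.0, 577000.0                 # km
r_leo, r_lmo = 6378.0 + 250.0, 3389.5 + 80.0                 # km

def vis_viva(mu, r, a):
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

a_mto = 0.5 * (r_earth_orbit + r_mars_orbit)
v_mto_e = vis_viva(mu_sun, r_earth_orbit, a_mto)   # heliocentric speed at departure
v_mto_m = vis_viva(mu_sun, r_mars_orbit, a_mto)    # heliocentric speed at arrival
v_earth = math.sqrt(mu_sun / r_earth_orbit)
v_mars  = math.sqrt(mu_sun / r_mars_orbit)

# Earth escape: hyperbola that reaches the SOI with the required excess speed
v_soi_e = v_mto_e - v_earth
a_hyp_e = 1.0 / (2.0 / r_soi_earth - v_soi_e**2 / mu_earth)  # negative (hyperbolic)
dv_injection = vis_viva(mu_earth, r_leo, a_hyp_e) - math.sqrt(mu_earth / r_leo)

# Mars capture: hyperbolic approach, burn at periapsis down to circular LMO
v_soi_m = v_mars - v_mto_m
a_hyp_m = 1.0 / (2.0 / r_soi_mars - v_soi_m**2 / mu_mars)
dv_capture = vis_viva(mu_mars, r_lmo, a_hyp_m) - math.sqrt(mu_mars / r_lmo)

print(dv_injection, dv_capture)   # roughly 3.56 and 2.11 km/s
```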
• The wording of the question is ambiguous, but knowing a bit about the game, i think they meant from Earth to Mars orbit. Nov 11, 2015 at 15:53
• Oh of course, oops! Nov 11, 2015 at 16:41
• I am launching from the Kennedy Space Center, if that helps. Should I dogleg to equatorial or should I do a plane change? Nov 11, 2015 at 18:28
• You should definitely curve your LV over to the correct inclination. You could really just start with an appropriate heading (with the correct timing) so that you can maintain a constant heading and save some fuel. However, real launches have other constraints that might prevent that, like flying over populated areas. But launching to an Equatorial orbit and then using a plane change will be a lot less efficient. Nov 11, 2015 at 18:50
• You might be interested in this MathJax cheat sheet for formatting the formulas properly: meta.math.stackexchange.com/questions/5020/… . Or a bit later i'll format them once i've finished a couple of things. Nov 11, 2015 at 22:23
I've made a spreadsheet for stuff like this. It assumes circular, coplanar orbits so the numbers are ball park.
Here is a screen capture:
I've set ellipse's apoaerion altitude at the Sphere of Influence. This is a capture orbit, the sun's influence won't tear it away any time soon.
I've set ellipse's periaerion altitude at 100 km. The ship passes through Mars upper atmosphere each periaerion. Atmospheric friction will shed velocity each pass serving to lower the apoaerion. When the apoaerion reaches 400 km or so, a tiny burn will serve to raise the periapsis above the atmosphere.
So at Mars arrival it will take a .7 km/s periaerion burn to achieve capture and eventually a low circular orbit.
• I lost the spreadsheet during a disk formatting, i'm glad i have it again now :) Nov 11, 2015 at 15:55
https://open.kattis.com/contests/xac94q/problems/easiest | UAPSC Weekly Random Problem Assortment
#### Start
2019-02-15 22:48 UTC
#### End
2019-02-22 22:48 UTC
# Problem A: The Easiest Problem Is This One
Some people think this is the easiest problem in today’s problem set. Some people think otherwise, since it involves sums of digits of numbers and that’s difficult to grasp.
If we multiply a number $N$ with another number $m$, the sum of digits typically changes. For example, if $m = 26$ and $N=3029$, then $N\cdot m = 78754$ and the sum of the digits is $31$, while the sum of digits of $N$ is 14.
However, there are some numbers that if multiplied by $N$ will result in the same sum of digits as the original number $N$. For example, consider $m = 37, N=3029$, then $N\cdot m = 112073$, which has sum of digits 14, same as the sum of digits of $N$.
Your task is to find the smallest positive integer $p$ among those that will result in the same sum of the digits when multiplied by $N$. To make the task a little bit more challenging, the number must also be higher than ten.
## Input
The input consists of several test cases. Each case is described by a single line containing one positive integer number $N, 1\leq N\leq 100\, 000$. The last test case is followed by a line containing zero.
## Output
For each test case, print one line with a single integer number $p$ which is the minimal number such that $N\cdot p$ has the same sum of digits as $N$ and $p$ is bigger than 10.
Sample Input 1:
3029
4
5
42
0

Sample Output 1:
37
28
28
25
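A brute-force sketch in Python (my own, not an official solution). Note that $p=100$ always preserves the digit sum, so the search below terminates after at most 90 candidates per test case:

```python
import sys

def digit_sum(x):
    return sum(int(c) for c in str(x))

def smallest_multiplier(n):
    target = digit_sum(n)
    p = 11                     # must be strictly greater than ten
    while digit_sum(n * p) != target:
        p += 1                 # p = 100 works in the worst case, so this stops
    return p

for line in sys.stdin:
    n = int(line)
    if n == 0:
        break
    print(smallest_multiplier(n))
```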
https://onepetro.org/OTCONF/proceedings-abstract/18OTC/2-18OTC/D021S021R007/179256 | Predictive equations of normalized shear modulus (G/Gmax) and material damping ratio (D) are presented for calcareous sand, siliceous carbonate sand and carbonate sand of the Bay of Campeche and Tabasco Coastline. This was achieved using a database of 84 resonant column tests and 252 strain-controlled cyclic direct simple shear tests that provide data to define the normalized shear modulus, G/Gmax, and material damping ratio, D, versus cyclic shear strain. The range of cyclic shear strains of the database is from 0.0001% to 1%, and the range of carbonate content (CaCO3) from 10% to 100%. The curves of normalized modulus reduction and damping ratio were organized in three groups according to the percentage of carbonate content: 1) calcareous sands (10% to 50%), 2) siliceous carbonate sand (50% to 90%) and 3) carbonate sands (90% to 100%). Two independent modified hyperbolic relations for normalized modulus reduction and material damping ratio versus cyclic shear strain were developed for each group. The normalized shear modulus was modeled using two parameters: 1) a reference strain defined as the strain at which G/Gmax is equal to 0.5, and 2) a parameter that controls the curvature of the normalized modulus reduction curve. The material damping ratio was modeled using four parameters: 1) a reference strain γrD defined as the strain at which D/Dmax = 0.5, 2) a curvature parameter αD that controls the curvature of the material damping ratio curve, 3) a maximum material damping ratio Dmax, and 4) a minimum material damping ratio Dmin. The new empirical relationships to predict the normalized modulus reduction and material damping ratio curves as a function of effective confining pressure are easy to apply in practice and can be used when site-specific dynamic laboratory testing is not available. The curves of G/Gmax-γ and D-γ are similar between silica sand and calcareous sand. The curves of siliceous carbonate sand and carbonate sand are very similar, but show a different shape and width than the curves of silica sand and calcareous sand. This indicates that when the carbonate content is smaller than 50% there is a small effect on the curves of G/Gmax-γ and D-γ, and a considerable effect when the carbonate content is greater than 50%.
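The abstract describes the normalized modulus reduction model only qualitatively; a common modified hyperbolic form consistent with that two-parameter description (my own assumption, with made-up parameter values rather than the paper's fitted coefficients) looks like this:

```python
import numpy as np

def g_over_gmax(gamma, gamma_ref, alpha):
    """Modified hyperbolic reduction curve: equals 0.5 at gamma = gamma_ref."""
    return 1.0 / (1.0 + (gamma / gamma_ref) ** alpha)

gamma = np.logspace(-4, 0, 9)                 # cyclic shear strain, 0.0001% to 1%
print(g_over_gmax(gamma, gamma_ref=0.05, alpha=0.9))
```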
https://brilliant.org/problems/first-isomorphism-theorem/ | # First isomorphism theorem
Algebra Level 5
Let $${\mathbb C}^*$$ be the group of nonzero complex numbers under multiplication. Consider the homomorphism $$f \colon {\mathbb C}^* \to {\mathbb C}^*$$ given by $$f(z) = z^2.$$ Then the First Isomorphism Theorem says that $${\mathbb C}^*/\text{ker}(f) \simeq \text{im}(f).$$
Now $$\text{ker}(f) = \{ \pm 1\},$$ and $$\text{im}(f) = {\mathbb C}^*$$ (that is, $$f$$ is onto), so the conclusion is that ${\mathbb C}^*/\{\pm 1\} \simeq {\mathbb C}^*,$ i.e. $${\mathbb C}^*$$ is isomorphic to a nontrivial quotient of itself.
What is wrong with this argument?
https://en.wikisource.org/wiki/A_Treatise_on_Electricity_and_Magnetism/Part_II/Chapter_V | # A Treatise on Electricity and Magnetism/Part II/Chapter V
## CHAPTER V. ELECTROLYTIC POLARIZATION.
264.] When an electric current is passed through an electrolyte bounded by metal electrodes, the accumulation of the ions at the electrodes produces the phenomenon called Polarization, which consists in an electromotive force acting in the opposite direction to the current, and producing an apparent increase of the resistance.
When a continuous current is employed, the resistance appears to increase rapidly from the commencement of the current, and at last reaches a value nearly constant. If the form of the vessel in which the electrolyte is contained is changed, the resistance is altered in the same way as a similar change of form of a metallic conductor would alter its resistance, but an additional apparent resistance, depending on the nature of the electrodes, has always to be added to the true resistance of the electrolyte.
265.] These phenomena have led some to suppose that there is a finite electromotive force required for a current to pass through an electrolyte. It has been shewn, however, by the researches of Lenz, Neumann, Beetz, Wiedemann[1], Paalzow[2], and recently by those of MM. F. Kohlrausch and W. A. Nippoldt[3], that the conduction in the electrolyte itself obeys Ohm's Law with the same precision as in metallic conductors, and that the apparent resistance at the bounding surface of the electrolyte and the electrodes is entirely due to polarization.
266.] The phenomenon called polarization manifests itself in the case of a continuous current by a diminution in the current, indicating a force opposed to the current. Resistance is also perceived as a force opposed to the current, but we can distinguish between the two phenomena by instantaneously removing or reversing the electromotive force.
The resisting force is always opposite in direction to the current, and the external electromotive force required to overcome it is proportional to the strength of the current, and changes its direction when the direction of the current is changed. If the external electromotive force becomes zero the current simply stops.
The electromotive force due to polarization, on the other hand, is in a fixed direction, opposed to the current which produced it. If the electromotive force which produced the current is removed, the polarization produces a current in the opposite direction.
The difference between the two phenomena may be compared with the difference between forcing a current of water through a long capillary tube, and forcing water through a tube of moderate length up into a cistern. In the first case if we remove the pressure which produces the flow the current will simply stop. In the second case, if we remove the pressure the water will begin to flow down again from the cistern.
To make the mechanical illustration more complete, we have only to suppose that the cistern is of moderate depth, so that when a certain amount of water is raised into it, it begins to overflow. This will represent the fact that the total electromotive force due to polarization has a maximum limit.
267.] The cause of polarization appears to be the existence at the electrodes of the products of the electrolytic decomposition of the fluid between them. The surfaces of the electrodes are thus rendered electrically different, and an electromotive force between them is called into action, the direction of which is opposite to that of the current which caused the polarization.
The ions, which by their presence at the electrodes produce the phenomena of polarization, are not in a perfectly free state, but are in a condition in which they adhere to the surface of the electrodes with considerable force.
The electromotive force due to polarization depends upon the density with which the electrode is covered with the ion, but it is not proportional to this density, for the electromotive force does not increase so rapidly as this density.
This deposit of the ion is constantly tending to become free, and either to diffuse into the liquid, to escape as a gas, or to be precipitated as a solid.
The rate of this dissipation of the polarization is exceedingly small for slight degrees of polarization, and exceedingly rapid near the limiting value of polarization.
268.] We have seen, Art. 262, that the electromotive force acting in any electrolytic process is numerically equal to the mechanical equivalent of the result of that process on one electrochemical equivalent of the substance. If the process involves a diminution of the intrinsic energy of the substances which take part in it, as in the voltaic cell, then the electromotive force is in the direction of the current. If the process involves an increase of the intrinsic energy of the substances, as in the case of the electrolytic cell, the electromotive force is in the direction opposite to that of the current, and this electromotive force is called polarization.
In the case of a steady current in which electrolysis goes on continuously, and the ions are separated in a free state at the electrodes, we have only by a suitable process to measure the intrinsic energy of the separated ions, and compare it with that of the electrolyte in order to calculate the electromotive force required for the electrolysis. This will give the maximum polarization.
But during the first instants of the process of electrolysis the ions when deposited at the electrodes are not in a free state, and their intrinsic energy is less than their energy in a free state, though greater than their energy when combined in the electrolyte. In fact, the ion in contact with the electrode is in a state which when the deposit is very thin may be compared with that of chemical combination with the electrode, but as the deposit increases in density, the succeeding portions are no longer so intimately combined with the electrode, but simply adhere to it, and at last the deposit, if gaseous, escapes in bubbles, if liquid, diffuses through the electrolyte, and if solid, forms a precipitate.
In studying polarization we have therefore to consider
(1) The superficial density of the deposit, which we may call σ. This quantity σ represents the number of electrochemical equivalents of the ion deposited on unit of area. Since each electrochemical equivalent deposited corresponds to one unit of electricity transmitted by the current, we may consider σ as representing either a surface-density of matter or a surface-density of electricity.
(2) The electromotive force of polarization, which we may call p. This quantity p is the difference between the electric potentials of the two electrodes when the current through the electrolyte is so feeble that the proper resistance of the electrolyte makes no sensible difference between these potentials.
The electromotive force p at any instant is numerically equal to the mechanical equivalent of the electrolytic process going on at that instant which corresponds to one electrochemical equivalent of the electrolyte. This electrolytic process, it must be remembered, consists in the deposit of the ions on the electrodes, and the state in which they are deposited depends on the actual state of the surface of the electrodes, which may be modified by previous deposits.
Hence the electromotive force at any instant depends on the previous history of the electrode. It is, speaking very roughly, a function of σ, the density of the deposit, such that p = 0 when σ = 0, but p approaches a limiting value much sooner than σ does. The statement, however, that p is a function of σ cannot be considered accurate. It would be more correct to say that p is a function of the chemical state of the superficial layer of the deposit, and that this state depends on the density of the deposit according to some law involving the time.
269.] (3) The third thing we must take into account is the dissipation of the polarization. The polarization when left to itself diminishes at a rate depending partly on the intensity of the polarization or the density of the deposit, and partly on the nature of the surrounding medium, and the chemical, mechanical, or thermal action to which the surface of the electrode is exposed.
If we determine a time T such that at the rate at which the deposit is dissipated, the whole deposit would be removed in a time T, we may call T the modulus of the time of dissipation. When the density of the deposit is very small, T is very large, and may be reckoned by days or months. When the density of the deposit approaches its limiting value T diminishes very rapidly, and is probably a minute fraction of a second. In fact, the rate of dissipation increases so rapidly that when the strength of the current is maintained constant, the separated gas, instead of contributing to increase the density of the deposit, escapes in bubbles as fast as it is formed.
270.] There is therefore a great difference between the state of polarization of the electrodes of an electrolytic cell when the polarization is feeble, and when it is at its maximum value. For instance, if a number of electrolytic cells of dilute sulphuric acid with platinum electrodes are arranged in series, and if a small electromotive force, such as that of one Daniell's cell, be made to act on the circuit, the electromotive force will produce a current of exceedingly short duration, for after a very short time the electromotive force arising from the polarization of the cell will balance that of the Daniell's cell.
The dissipation will be very small in the case of so feeble a state of polarization, and it will take place by a very slow absorption of the gases and diffusion through the liquid. The rate of this dissipation is indicated by the exceedingly feeble current which still continues to flow without any visible separation of gases.
If we neglect this dissipation for the short time during which the state of polarization is set up, and if we call $Q$ the total quantity of electricity which is transmitted by the current during this time, then if $A$ is the area of one of the electrodes, and $\sigma$ the density of the deposit, supposed uniform,
$Q = A\sigma$.
If we now disconnect the electrodes of the electrolytic apparatus from the Daniell's cell, and connect them with a galvanometer capable of measuring the whole discharge through it, a quantity of electricity nearly equal to $Q$ will be discharged as the polarization disappears.
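As a purely illustrative figure (the numbers are assumed; only the relation $Q = A\sigma$ is taken from the text): if a current of one milliampere flows for ten seconds, $Q = 10^{-2}$ coulomb, and on an electrode of area $A = 100$ square centimetres this corresponds to a surface-density $\sigma = Q/A = 10^{-4}$ coulomb per square centimetre.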
271.] Hence we may compare the action of this apparatus, which is a form of Ritter's Secondary Pile, with that of a Leyden jar.
Both the secondary pile and the Leyden jar are capable of being charged with a certain amount of electricity, and of being afterwards discharged. During the discharge a quantity of electricity nearly equal to the charge passes in the opposite direction. The difference between the charge and the discharge arises partly from dissipation, a process which in the case of small charges is very slow, but which, when the charge exceeds a certain limit, becomes exceedingly rapid. Another part of the difference between the charge and the discharge arises from the fact that after the electrodes have been connected for a time sufficient to produce an apparently complete discharge, so that the current has completely disappeared, if we separate the electrodes for a time, and afterwards connect them, we obtain a second discharge in the same direction as the original discharge. This is called the residual discharge, and is a phenomenon of the Leyden jar as well as of the secondary pile.
The secondary pile may therefore be compared in several respects to a Leyden jar. There are, however,[3] certain important differences. The charge of a Leyden jar is very exactly proportional to the electromotive force of the charge, that is, to the difference of potentials of the two surfaces, and the charge corresponding to unit of electromotive force is called the capacity of the jar, a constant quantity. The corresponding quantity, which may be called the capacity of the secondary pile, increases when the electromotive force increases.
The capacity of the jar depends on the area of the opposed surfaces, on the distance between them, and on the nature of the substance between them, but not on the nature of the metallic surfaces themselves. The capacity of the secondary pile depends on the area of the surfaces of the electrodes, but not on the distance between them, and it depends on the nature of the surface of the electrodes, as well as on that of the fluid between them. The maximum difference of the potentials of the electrodes in each element of a secondary pile is very small compared with the maximum difference of the potentials of those of a charged Leyden jar, so that in order to obtain much electromotive force a pile of many elements must be used.
On the other hand, the superficial density of the charge in the secondary pile is immensely greater than the utmost superficial density of the charge which can be accumulated on the surfaces of a Leyden jar, insomuch that Mr. C. F. Varley[4], in describing the construction of a condenser of great capacity, recommends a series of gold or platinum plates immersed in dilute acid as preferable in point of cheapness to induction plates of tinfoil separated by insulating material.
The form in which the energy of a Leyden jar is stored up is the state of constraint of the dielectric between the conducting surfaces, a state which I have already described under the name of electric polarization, pointing out those phenomena attending this state which are at present known, and indicating the imperfect state of our knowledge of what really takes place. See Arts. 62, 111.
The form in which the energy of the secondary pile is stored up is the chemical condition of the material stratum at the surface of the electrodes, consisting of the ions of the electrolyte and the substance of the electrodes in a relation varying from chemical combination to superficial condensation, mechanical adherence, or simple juxtaposition.
The seat of this energy is close to the surfaces of the electrodes, and not throughout the substance of the electrolyte, and the form in which it exists may be called electrolytic polarization.
After studying the secondary pile in connexion with the Leyden jar, the student should again compare the voltaic battery with some form of the electrical machine, such as that described in Art. 211.
Mr. Varley has lately[5] found that the capacity of one square inch is from 175 to 542 microfarads and upwards for platinum plates in dilute sulphuric acid, and that the capacity increases with the electromotive force, being about 175 for 0.02 of a Daniell's cell, and 542 for 1.6 Daniell's cells.
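To put Varley's figures in perspective (an illustrative calculation; the electromotive force of a Daniell's cell is here assumed to be roughly $1.1$ volt): at $1.6$ Daniell's cells the charge held by one square inch is about
$$q = CV \approx 542\times10^{-6}\ \mathrm{farad}\times(1.6\times1.1)\ \mathrm{volt}\approx 9.5\times10^{-4}\ \mathrm{coulomb},$$
a superficial density of electricity enormously greater than the glass of a Leyden jar of equal area can sustain, which is the comparison made in Art. 271.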
But the comparison between the Leyden jar and the secondary pile may be carried still farther, as in the following experiment, due to Buff[6]. It is only when the glass of the jar is cold that it is capable of retaining a charge. At a temperature below 100°C the glass becomes a conductor. If a test-tube containing mercury is placed in a vessel of mercury, and if a pair of electrodes are connected, one with the inner and the other with the outer portion of mercury, the arrangement constitutes a Leyden jar which will hold a charge at ordinary temperatures. If the electrodes are connected with those of a voltaic battery, no current will pass as long as the glass is cold, but if the apparatus is gradually heated a current will begin to pass, and will increase rapidly in intensity as the temperature rises, though the glass remains apparently as hard as ever.
This current is manifestly electrolytic, for if the electrodes are disconnected from the battery, and connected with a galvanometer, a considerable reverse current passes, due to polarization of the surfaces of the glass.
If, while the battery is in action the apparatus is cooled, the current is stopped by the cold glass as before, but the polarization of the surfaces remains. The mercury may be removed, the surfaces may be washed with nitric acid and with water, and fresh mercury introduced. If the apparatus is then heated, the current of polarization appears as soon as the glass is sufficiently warm to conduct it.
We may therefore regard glass at 100°C, though apparently a solid body, as an electrolyte, and there is considerable reason to believe that in most instances in which a dielectric has a slight degree of conductivity the conduction is electrolytic. The existence of polarization may be regarded as conclusive evidence of electrolysis, and if the conductivity of a substance increases as the temperature rises, we have good grounds for suspecting that it is electrolytic.
### On Constant Voltaic Elements.
272.] When a series of experiments is made with a voltaic battery in which polarization occurs, the polarization diminishes during the time that the current is not flowing, so that when it begins to flow again the current is stronger than after it has flowed for some time. If, on the other hand, the resistance of the circuit is diminished by allowing the current to flow through a short shunt, then, when the current is again made to flow through the ordinary circuit, it is at first weaker than its normal strength on account of the great polarization produced by the use of the short circuit.
To get rid of these irregularities in the current, which are exceedingly troublesome in experiments involving exact measurements, it is necessary to get rid of the polarization, or at least to reduce it as much as possible.
It does not appear that there is much polarization at the surface of the zinc plate when immersed in a solution of sulphate of zinc or in dilute sulphuric acid. The principal seat of polarization is at the surface of the negative metal. When the fluid in which the negative metal is immersed is dilute sulphuric acid, it is seen to become covered with bubbles of hydrogen gas, arising from the electrolytic decomposition of the fluid. Of course these bubbles, by preventing the fluid from touching the metal, diminish the surface of contact and increase the resistance of the circuit. But besides the visible bubbles it is certain that there is a thin coating of hydrogen, probably not in a free state, adhering to the metal, and as we have seen that this coating is able to produce an electromotive force in the reverse direction, it must necessarily diminish the electromotive force of the battery.
Various plans have been adopted to get rid of this coating of hydrogen. It may be diminished to some extent by mechanical means, such as stirring the liquid, or rubbing the surface of the negative plate. In Smee's battery the negative plates are vertical, and covered with finely divided platinum from which the bubbles of hydrogen easily escape, and in their ascent produce a current of liquid which helps to brush off other bubbles as they are formed.
A far more efficacious method, however, is to employ chemical means. These are of two kinds. In the batteries of Grove and Bunsen the negative plate is immersed in a fluid rich in oxygen, and the hydrogen, instead of forming a coating on the plate, combines with this substance. In Grove's battery the plate is of platinum immersed in strong nitric acid. In Bunsen's first battery it is of carbon in the same acid. Chromic acid is also used for the same purpose, and has the advantage of being free from the acid fumes produced by the reduction of nitric acid.
A different mode of getting rid of the hydrogen is by using copper as the negative metal, and covering the surface with a coat of oxide. This, however, rapidly disappears when it is used as the negative electrode. To renew it Joule has proposed to make the copper plates in the form of disks, half immersed in the liquid, and to rotate them slowly, so that the air may act on the parts exposed to it in turn.
The other method is by using as the liquid an electrolyte, the cation of which is a metal highly negative to zinc.
In Daniell's battery a copper plate is immersed in a saturated solution of sulphate of copper. When the current flows through the solution from the zinc to the copper no hydrogen appears on the copper plate, but copper is deposited on it. When the solution is saturated, and the current is not too strong, the copper appears to act as a true cation, the anion SO4 travelling towards the zinc.
When these conditions are not fulfilled hydrogen is evolved at the cathode, but immediately acts on the solution, throwing down copper, and uniting with SO4 to form oil of vitriol. When this is the case, the sulphate of copper next the copper plate is replaced by oil of vitriol, the liquid becomes colourless, and polarization by hydrogen gas again takes place. The copper deposited in this way is of a looser and more friable structure than that deposited by true electrolysis.
To ensure that the liquid in contact with the copper shall be saturated with sulphate of copper, crystals of this substance must be placed in the liquid close to the copper, so that when the solution is made weak by the deposition of the copper, more of the crystals may be dissolved.
We have seen that it is necessary that the liquid next the copper should be saturated with sulphate of copper. It is still more necessary that the liquid in which the zinc is immersed should be free from sulphate of copper. If any of this salt makes its way to the surface of the zinc it is reduced, and copper is deposited on the zinc. The zinc, copper, and fluid then form a little circuit in which rapid electrolytic action goes on, and the zinc is eaten away by an action which contributes nothing to the useful effect of the battery.
To prevent this, the zinc is immersed either in dilute sulphuric acid or in a solution of sulphate of zinc, and to prevent the solution of sulphate of copper from mixing with this liquid, the two liquids are separated by a division consisting of bladder or porous earthenware, which allows electrolysis to take place through it, but effectually prevents mixture of the fluids by visible currents.
In some batteries sawdust is used to prevent currents. The experiments of Graham, however, shew that the process of diffusion goes on nearly as rapidly when two liquids are separated by a division of this kind as when they are in direct contact, provided there are no visible currents, and it is probable that if a septum is employed which diminishes the diffusion, it will increase in exactly the same ratio the resistance of the element, because electrolytic conduction is a process the mathematical laws of which have the same form as those of diffusion, and whatever interferes with one must interfere equally with the other. The only difference is that diffusion is always going on, while the current flows only when the battery is in action.
In all forms of Daniell's battery the final result is that the sulphate of copper finds its way to the zinc and spoils the battery. To retard this result indefinitely, Sir W. Thomson[7] has constructed Daniell's battery in the following form.
Fig. 21.
In each cell the copper plate is placed horizontally at the bottom and a saturated solution of sulphate of zinc is poured over it. The zinc is in the form of a grating and is placed horizontally near the surface of the solution. A glass tube is placed vertically in the solution with its lower end just above the surface of the copper plate. Crystals of sulphate of copper are dropped down this tube, and, dissolving in the liquid, form a solution of greater density than that of sulphate of zinc alone, so that it cannot get to the zinc except by diffusion. To retard this process of diffusion, a siphon, consisting of a glass tube stuffed with cotton wick, is placed with one extremity midway between the zinc and copper, and the other in a vessel outside the cell, so that the liquid is very slowly drawn off near the middle of its depth. To supply its place, water, or a weak solution of sulphate of zinc, is added above when required. In this way the greater part of the sulphate of copper rising through the liquid by diffusion is drawn off by the siphon before it reaches the zinc, and the zinc is surrounded by liquid nearly free from sulphate of copper, and having a very slow downward motion in the cell, which still further retards the upward motion of the sulphate of copper. During the action of the battery copper is deposited on the copper plate, and SO4 travels slowly through the liquid to the zinc with which it combines, forming sulphate of zinc. Thus the liquid at the bottom becomes less dense by the deposition of the copper, and the liquid at the top becomes more dense by the addition of the zinc. To prevent this action from changing the order of density of the strata, and so producing instability and visible currents in the vessel, care must be taken to keep the tube well supplied with crystals of sulphate of copper, and to feed the cell above with a solution of sulphate of zinc sufficiently dilute to be lighter than any other stratum of the liquid in the cell.
Daniell's battery is by no means the most powerful in common use. The electromotive force of Grove's cell is 192,000,000, of Daniell's 107,900,000 and that of Bunsen's 188,000,000.
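If, as the magnitude of the numbers suggests, these electromotive forces are given in C.G.S. electromagnetic units (of which $10^{8}$ make one volt), they correspond to about $1.92$ volts for Grove's cell, $1.08$ volts for Daniell's, and $1.88$ volts for Bunsen's.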
The resistance of Daniell's cell is in general greater than that of Grove's or Bunsen's of the same size.
These defects, however, are more than counterbalanced in all cases where exact measurements are required, by the fact that Daniell's cell exceeds every other known arrangement in constancy of electromotive force. It has also the advantage of continuing in working order for a long time, and of emitting no gas.
1. Galvanismus, bd. i.
2. Berlin Monatsbericht, July, 1868.
3. Pogg. Ann. bd. cxxxviii. s. 286 (October, 1869).
4. Specification of C. F. Varley, 'Electric Telegraphs, &c.,' Jan. 1860.
5. Proc. R. S., Jan. 12, 1871.
6. Annalen der Chemie und Pharmacie, bd. xc. 257 (1854).
7. Proc. R. S., Jan. 19, 1871.
https://math.stackexchange.com/questions/1255074/creating-a-sequence-convergent-to-zero-with-special-characteristic/1256489 | # Creating a sequence convergent to zero with special characteristic
Let $\{a_k\}$ and $\{b_k\}$ be positive sequences in $\mathbb{R}$ that both converge to zero. Can we choose $\{c_k\}$ such that it converges to zero and
$$0<\lim_{k \to \infty} \frac{a_k}{c_k} = \lim_{k \to \infty} \frac{b_k}{c_k} < +\infty$$
• $a_k$ and $b_k$ are given. We can only decide on $c_k$ – Mehdi Jafarnia Jahromi Apr 27 '15 at 23:14
• I think that would be very difficult for $a_k=e^{-k}$ and $b_k=1/k$. I am pretty certain you would need $\lim_{k\to\infty}a_k/b_k=1$ – Arthur Apr 27 '15 at 23:14
• @arthur: Actually, not that hard. Just take c_k that goes slower to 0 than both a_k and b_k. Like 1/ln(k). Both limits will then be +oo (But yes, I'm not sure that's what the OP had in head) – Tryss Apr 27 '15 at 23:19
• @Tryss As long as you think $\infty=\infty$ is a correct statement, then you're right (except that you would want $c_k$ to go to zero faster, not slower). I know many would disagree. – Arthur Apr 27 '15 at 23:22
• I edited the question, we do not need $+\infty$ – Mehdi Jafarnia Jahromi Apr 27 '15 at 23:24
This is not possible in general. For example, consider the sequences $(a_n) = (\frac{1}{n})$ and $(b_n)=(\frac{1}{n^2})$, and suppose, if possible, that $(c_n)$ is a sequence such that $$0<\lim_{n\to \infty}\frac{a_n}{c_n} =\lim_{n\to \infty}\frac{b_n}{c_n}=l<\infty.$$ Then, by the algebra of limits, $$\lim_{n\to \infty}\frac{a_n}{b_n}=\frac{\lim_{n\to \infty}\frac{a_n}{c_n}}{\lim_{n\to \infty}\frac{b_n}{c_n}} =1, \qquad\text{whereas}\qquad \lim_{n\to \infty}\frac{a_n}{b_n}=\lim_{n\to \infty}n=\infty,$$ which is obviously a contradiction.
$\textbf{Note:}$ The above observation shows that the hypothesis $\lim_{n\to \infty}\frac{a_n}{b_n}=1$ is in fact necessary, and if it holds then it is easily seen that taking the sequence $(c_n) = (b_n)$ will do the job.
$\textbf{Observation}:$ Suppose the hypothesis $\lim_{n\to \infty}\frac{a_n}{b_n}=1$ is added, and ask whether there is a sequence $(c_n)$ converging to $0$ with $$0<\lim_{n\to \infty}\frac{a_n}{c_n} =\lim_{n\to \infty}\frac{b_n}{c_n}=l<\infty$$ for a prescribed limit $l$ different from $1$. Even in this case the answer is affirmative, provided $l>0$. Indeed, for any real $l>0$ take the sequence $(c_n) =(\frac{a_n^{2}}{l\,b_n})$. Then $c_n\longrightarrow 0$ and $$0<\lim_{n\to \infty}\frac{a_n}{c_n} =\lim_{n\to \infty}\frac{b_n}{c_n}=l<\infty.$$
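For completeness, here is the verification of this last claim (nothing beyond the limit laws is used): since $c_n=\frac{a_n}{l}\cdot\frac{a_n}{b_n}$ with $\frac{a_n}{b_n}\to 1$ and $a_n\to 0$, we get $c_n\to 0$; moreover $$\frac{a_n}{c_n}=l\,\frac{b_n}{a_n}\longrightarrow l\cdot 1=l \qquad\text{and}\qquad \frac{b_n}{c_n}=l\left(\frac{b_n}{a_n}\right)^{2}\longrightarrow l\cdot 1^{2}=l.$$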
https://math.stackexchange.com/questions/805655/show-that-mathbbq-zeta-contains-one-of-the-two-numbers-sqrt-pm5-and | # Show that $\mathbb{Q}(\zeta)$ contains one of the two numbers $\sqrt{\pm5}$ and decide which one is contained in $\mathbb{Q}(\zeta)$.
Let $\zeta$ be the 15th primitive root of unity in $\mathbb{C}$, show that $\mathbb{Q}(\zeta)$ contains one of the two numbers $\sqrt{\pm5}$ and decide which one is contained in $\mathbb{Q}(\zeta)$.
First consider the element $\zeta^{3}\in\mathbb{Q}(\zeta)$; it is a primitive 5th root of unity, since $(\zeta^{3})^{5}=\zeta^{15}=1$ while $\zeta^{3}\neq 1$, and $5$ is prime, so the order of $\zeta^{3}$ is exactly $5$.
We are allowed to use the following theorem: let $p$ be an odd prime and let $\zeta$ be a primitive $p$th root of unity in $\mathbb{C}$; then $S^{2}=(\frac{-1}{p})p$, where $S=\sum_{a=1}^{p-1}(\frac{a}{p})\zeta^{a}$ is the quadratic Gauss sum. In particular, the cyclotomic field $\mathbb{Q}(\zeta)$ contains at least one of the quadratic fields $\mathbb{Q}(\sqrt{p})$ or $\mathbb{Q}(\sqrt{-p})$.
The Legendre symbol $(\frac{-1}{5})=1$ since $5\equiv 1\pmod{4}$, so we get the following: $$S^{2}=(\frac{-1}{5})5=5.$$ Therefore $\mathbb{Q}(\zeta)$ contains $\mathbb{Q}(\sqrt{5})$ by the theorem above.
So I can prove that $\mathbb{Q}(\zeta^{3})$ contains $\mathbb{Q}(\sqrt{5})$, and therefore $\mathbb{Q}(\zeta)$ contains $\mathbb{Q}(\sqrt{5})$, but I don't know how to show that $\sqrt{-5}$ is not contained in $\mathbb{Q}(\zeta)$.
I was thinking that it might involve showing that $i\notin\mathbb{Q}(\zeta)$, but I'm not sure how to go about this either.
I also looked into showing that if $\mathbb{Q}(\sqrt{-5})\subseteq\mathbb{Q}(\zeta)$ then $[\mathbb{Q}(\zeta)\colon\mathbb{Q}]$ is divisible by $$[\mathbb{Q}(\sqrt{5},\sqrt{-5})\colon\mathbb{Q}]=4.$$ But I cannot use this to disprove anything, because the minimal polynomial of $\zeta$ is $x^8-x^7+x^5-x^4+x^3-x+1$ and clearly 4 does divide 8.
Any help would be appreciated, thank you.
If $\sqrt 5$ and $\sqrt{-5}$ were both contained in $\mathbb Q(\zeta)$, then $\mathbb Q(\zeta)$ would also contain their quotient $\sqrt{-5}/\sqrt 5 = i$. But the only roots of unity in $\mathbb Q(\zeta)$ are of the form $\pm \zeta^k$ (the $30$th roots of unity). Since 4 does not divide 30, $i \not\in \mathbb Q(\zeta)$.
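An alternative way to see that $i \notin \mathbb Q(\zeta)$, via a degree count (added as a supplementary check): if $i$ were in $\mathbb Q(\zeta)$ then $\mathbb Q(\zeta)$ would contain $\mathbb Q(i,\zeta)=\mathbb Q(\zeta_{60})$, since $i$ and $\zeta$ are primitive $4$th and $15$th roots of unity and $\operatorname{lcm}(4,15)=60$. But $$[\mathbb Q(\zeta_{60}):\mathbb Q]=\varphi(60)=16>8=\varphi(15)=[\mathbb Q(\zeta):\mathbb Q],$$ which is impossible. Hence $i\notin\mathbb Q(\zeta)$, and since $\sqrt{5}\in\mathbb Q(\zeta)$, it follows that $\sqrt{-5}=i\sqrt{5}\notin\mathbb Q(\zeta)$: the field $\mathbb Q(\zeta)$ contains $\sqrt{5}$ but not $\sqrt{-5}$.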
https://buboflash.eu/bubo5/show-dao2?d=149630105 | Tags
#odersky-programming-in-scala-2ed #scala
Question
When multiple operators of the same precedence appear side by side in an expression, the [...] of the operators determines the way operators are grouped.
associativity
When multiple operators of the same precedence appear side by side in an expression, the associativity of the operators determines the way operators are grouped.
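Associativity is not specific to Scala; as a quick illustration (sketched here in Python; in Scala itself the notable special case is that operators whose names end in a colon are right-associative):

```python
# Left-associative: most binary operators group from the left.
assert 10 - 5 - 2 == (10 - 5) - 2        # 3, not 10 - (5 - 2) == 7

# Right-associative: exponentiation groups from the right.
assert 2 ** 3 ** 2 == 2 ** (3 ** 2)      # 512, not (2 ** 3) ** 2 == 64
```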
http://mathhelpforum.com/calculus/167185-integral.html | # Math Help - integral
1. ## integral
Find the value of $b$ so that the line $y=b$ divides the region bounded by the graphs of the two functions $f(x)= 9-x^2$ and $g(x)=0$ into regions of equal area.
What have you tried to solve this?
3. Sorry, I have no idea how to solve it.
start by calculating the area of the region between $y = 9-x^2$ and the x-axis ... you'll need that piece of information to find the "half" area.
what do you get?
5. The area of the region between $f(x)=9-x^2$ and the x-axis equals 36.
good, then let $y = b$ be the horizontal line that cuts the region's area in half.
using symmetry, note that the area of the region in quadrant I is $\displaystyle \int_0^3 9 - x^2 \, dx = 18$
since $y = 9-x^2$ , $x = \sqrt{9-y}$ , and ...
$\displaystyle \int_0^b \sqrt{9-y} \, dy = 9$
evaluate the integral and solve for $b$.
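Carrying out this last step (added for completeness): $$\int_0^b \sqrt{9-y}\, dy=\Bigl[-\tfrac{2}{3}(9-y)^{3/2}\Bigr]_0^b=\tfrac{2}{3}\Bigl(27-(9-b)^{3/2}\Bigr)=9,$$ so $(9-b)^{3/2}=\tfrac{27}{2}$, hence $9-b=\bigl(\tfrac{27}{2}\bigr)^{2/3}=\dfrac{9}{\sqrt[3]{4}}$ and $$b=9-\frac{9}{\sqrt[3]{4}}\approx 3.33.$$

A quick numerical sanity check of this value (illustrative code, not from the thread):

```python
from scipy.integrate import quad

b = 9 - 9 / 4 ** (1 / 3)                          # candidate answer, ~3.3304
area_below, _ = quad(lambda y: (9 - y) ** 0.5, 0, b)
print(round(b, 4), round(area_below, 4))          # 3.3304 9.0 -> half of the quadrant-I area (18)
```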
http://math.stackexchange.com/questions/311593/total-variation-of-a-fourier-series | # Total variation of a Fourier series
Let $f(x) = f(x+2\pi)$ be a bounded real function given by the Fourier series of the form
$$f(x) = \sum_{k=1}^N a_k \sin(kx + \phi_k).$$ What is the total variation $V(f)$ of this function over one period? In this case, one should be able to use that $V(f) = \int |f'(x)|dx$ and that $$f'(x) = \sum_{k=1}^{N} k a_k \cos(kx + \phi_k),$$ but how?
If instead the function is given by an infinite Fourier series, then what are the conditions on the $a_k$ terms for the total variation to be finite?
A periodic function is of bounded variation if and only if it is the antiderivative of a finite signed measure on $[0,2\pi)$ (or, better, on the circle $\mathbb T$) with total mass $0$. Therefore, $\sum_{n\in \mathbb Z} c_n e^{inx}$ is the Fourier series of a function of bounded variation if and only if $\sum_{n\in\mathbb Z} in c_n e^{inx}$ is the Fourier series of a finite signed measure. Let $b_n=i n c_n$ to simplify notation. The following result can be found, for example, in An Introduction to Harmonic Analysis by Katznelson.
Theorem (Herglotz). $\sum_{n\in\mathbb Z} b_n e^{inx}$ is the Fourier series of a positive measure if and only if the sequence $(b_n)$ is positive definite. The latter means that $$\sum_{n,m}b_{n-m}z_n\overline{z_m}\ge 0\quad \text{ for all sequences }\ z_n\in \mathbb C \tag1$$ where only finitely many $z_n$ are nonzero.
Hence, $\sum_{n\in \mathbb Z} c_n e^{inx}$ is the Fourier series of a function of bounded variation if and only if the sequence $(nc_n)$ is the difference of two positive definite sequences. I don't think this is a practical condition, but then, neither is (1).
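As an elementary complement to the finite case in the question (a quick check, independent of the theorem above): for $N=1$ one has $f'(x)=a_1\cos(x+\phi_1)$, so $$V(f)=\int_0^{2\pi}|a_1\cos(x+\phi_1)|\,dx=|a_1|\int_0^{2\pi}|\cos u|\,du=4|a_1|.$$ For the infinite series, if $\sum_k k\,|a_k|<\infty$ then the differentiated series converges uniformly, $f$ is $C^1$, and $V(f)=\int_0^{2\pi}|f'(x)|\,dx\le 2\pi\sum_k k\,|a_k|<\infty$; this gives a simple sufficient condition for bounded variation, though (as the answer above shows) far from a necessary one.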
https://www.mediatebc.com/torani-move-drjr/c96c66-statistically-significant-psychology | The test has been running for two months. What this means is that there is less than a 5% probability that the results happened just by random chance, and therefore a 95% probability that the results reflect a meaningful pattern in human psychology. During researches, results can be statistically significant but not meaningful. The point of doing research and running statistical analyses on data is to find truth. You should ____ asked Apr 11, 2017 in Psychology by Likal. 1: The P value fallacy. Advertising, Cancer, Drug industry. This is closely related to Janet Shibley Hyde’s argument about sex differences (Hyde 2007). If this was just a chance event, this would only happen roughly one in 150 times but the fact that this happened in your experiment, it makes you feel pretty confident that your experiment is significant. Or embodied cognition. by Tabitha M. Powledge, Public Library of Science : Broadly speaking, statistical significance is assigned to a result when an event is found to be unlikely to have occurred by chance. If a test of significance gives a p-value lower than the α-level, the null hypothesis is rejected. Statistical significance comes from the bell curve. For any given statistical experiment – including A/B testing – statistical significance is based on several parameters: The confidence level (i.e how sure you can be that the results are statistically relevant, e.g 95%); Your sample size (little effects in small samples tend to be unreliable); Your minimum detectable effect (i.e the minimum effect that you want to observe with that experiment) (93 in psychology, and 16 in experimental economics, after excluding initial studies with P > 0.05), these numbers are suggestive of the potential gains in reproducibility that would accrue from the new threshold of P < 0.005 in these fields. You then run statistical tests on your observations.You use the standard in psychology for statistical testing that allows a 5 percent chance of getting a false positive result. A psychologist runs a study with three conditions and displays the resulting condition means in a line graph.3 The readers of the psychologist's article will want to know which condition means are statistically significantly different from one another. Smaller α-levels give greater confidence in the determination of significance, but run greater risks of failing to reject a false null hypothesis (a Type II error, or "false negative determination"), and so have less statistical power. If the CI for the odds ratio excludes 1, then your results are statistically significant. It is important to understand that statistical significance reflects the chance probability, not the magnitude or effect size of a difference or result. Yet another common pitfall often happens when a researcher writes the ambiguous statement "we found no statistically significant difference," which is then misquoted by others as "they found that there was no difference." Statistically significant results are those that are understood as not likely to have occurred purely by chance and thereby have other underlying causes for their occurrence - hopefully, the underlying causes you are trying to investigate! These statistical results indicate that an effect exists. Congruent validity 25. Popular levels of significance are 5%, 1% and 0.1%. A result is statistically significant if it satisfies certain statistical criteria. 
Statistically Significant Definition: A result in a study can be viewed as statistically significant if the probability of achieving the result or a result more extreme by chance alone is less than . We call that degree of confidence our confidence level, which demonstrates how sure we are that our data was not skewed by random chance. Or power pose. 0 votes. Armstrong suggests authors should avoid tests of statistical significance; instead, they should report on effect sizes, confidence intervals, replications/extensions, and meta-analyses. More precisely, a study's defined significance level, denoted by α {\displaystyle \alpha }, is the probability of the study rejecting the null hypothesis, given that the null hypothesis was assumed to be true; and the p-value of a result, p {\displaystyle p}, is the probability of … Sign In Sign Up. The confidence of a result (and its associated confidence interval) is not dependent on effect size alone. Statistical significance is a determination that a relationship between two or more variables is caused by something other than chance. Thus, it is safe to assume that the difference is due to the experimental manipulation or treatment. And that 5% threshold is set at 5% to ensure that there is a high probability that we make a correct decision and that our determination of statistical significance is an accurate reflection of reality. How to get statistically significant effects in any ERP experiment (and why you shouldn't) Steven J. The situations occurs at the end of a study when the statistical figures relating to certain topics of study are calculated in absence of qualitative aspect and other details that can be … We can call a result statistically significant when P < alpha. Categories. Technical note: In general, the more predictor variables you have in the model, the higher the likelihood that the The F-statistic and corresponding p-value will be statistically significant. Address correspondence to: Steven J. A common misconception is that a statistically significant result is always of practical significance, or demonstrates a large effect in the population. Talk about how your findings contrast with existing theories and previous research and emphasize that more research may be needed to reconcile these differences. Technically, statistical significance is the probability of some result from a statistical test occurring by chance. Or we may only have a “snapshot” of observations from a more long-term process or only a small subset of individuals from the populationof interest. Most researchers work with samples, defined as … statistically significant and insignificant results. A statistically significant result would be one where, after rigorous testing, you reach a certain degree of confidence in the results. A statistical significance of "" can be converted into a value of α via use of the error function: The use of σ is motivated by the ubiquitous emergence of the Gaussian distribution in measurement uncertainties. Word count: 3,250 Reading time: 10 minutes Published: 2011. an excerpt from xkcd, geeky web comic. Plain language should be used to describe effects based on the size of the effect and the quality of the evidence. The significance level is usually represented by the Greek symbol, α (alpha). The significance level is usually represented by the Greek symbol, α (alpha). Significance comes down to the relationship between two crucial quantities, the p-value and the significance level (alpha). 
Most often, psychologists look for a probability of 5% or less that the results are do to chance, which means a 95% chance the results are "not" due to chance. :>), Get the word of the day delivered to your inbox, © 1998-, AlleyDog.com. answered Apr 11, 2017 by Holly . And hopefully when we conclude that an effect is not statistically significant there really is no effect and if we tested the entire population we would find no effect. Rick, not statistically significant (relationship, difference in means, or difference in proportion) is one of the two possible outcomes of any study. A Priori Sample Size Estimation: Researchers should do a power analysis before they conduct their study to determine how many subjects to enroll. If the p value is being less than 5% (p<0.05), we will identify it being Statistically Significant. A Significant Difference between two groups or two points in time means that there is a measurable difference between the groups and that, statistically, the probability of obtaining that difference by chance is very small (usually less than 5%). In one study, 60% of a sample of professional researchers thought that a p value of .01—for an independent-samples t- test with 20 participants in each sample—meant there was a 99% chance of replicating the statistically significant result (Oakes, 1986) [4] . Or power pose. And, importantly, it should be quoted whether or not the p-value is judged to be significant. They point out that "insignificance" does not mean unimportant, and propose that the scientific community should abandon usage of the test altogether, as it can cause false hypotheses to be accepted and true hypotheses to be rejected.[6][1]. psychological-assessment; 0 Answers. However, modern statistical advice is that, where the outcome of a test is essentially the final outcome of an experiment or other study, the p-value should be quoted explicitly. It indicates strong evidence against the null hypothesis, as there is less than a 5% probability the null is correct (and the results are random). Therefore, we reject the null hypothesis, and accept the alternative hypothesis. Summary Beware of placing too much weight on traditional values of α, such as α= 0.05. The most commonly agreed border in Significance Testing is at the P value 0.05. None were significant, but after including tree age as independent variable, suddenly elevation and slope become statistically significant. As a marketer, you want to be certain about the results you get… However, both t-values are equally unlikely under H0. Luck. The first two, .03 and .001, would be statistically significant. In psychology nonparametric test are more usual than parametric tests. Toward evidence-based medical statistics. In more complicated, but practically important cases, the significance level of a test is a probability such that the probablility of making a decision to reject the null hypothesis when the null hypothesis is actually true is no more than the stated probability. Similarly, if the P value is more than 5% (p>0.05), we will identify it being Statistically Insignificant. 2-tailed statistical significance is the probability of finding a given absolute deviation from the null hypothesis -or a larger one- in a sample.For a t test, very small as well as very large t-values are unlikely under H0. Tags. When you hear that the results of an experiment were stastically significant, it means that you can be 95% sure the results are not due to chance...this is a good thing. 
"A statistically significant difference" simply means there is statistical evidence that there is a difference; it does not mean the difference is necessarily large, important, or significant in the common meaning of the word. Fixed significance levels such as those mentioned above may be regarded as useful in exploratory data analyses. This allows for those applications where the probability of deciding to reject may be much smaller than the significance level for some sets of assumptions encompassed within the null hypothesis. Statistical significance means that a result from testing or experimenting is not likely to occur randomly or by chance, but is instead likely to be attributable to a specific cause. Psychological science—the good, the bad, and the statistically significant. Even a very weak result can be statistically significant if it is based on a large enough sample. If we continue the test, and if we assume that the data keeps coming in the same proportions… It’s 50 shades of gray all over again. However, you’ll need to use subject area expertise to determine whether this effect is important in the real world to determine practical significance. The difference is statistically significant 23. Yet it’s one of the most common phrases heard when dealing with quantitative methods. All material within this site is the property of AlleyDog.com. Introduction. ), The Concept of Statistical Significance Testing, Pearson product-moment correlation coefficient, https://psychology.wikia.org/wiki/Statistical_significance?oldid=175032. It’s possible that each predictor variable is not significant and yet the F-test says that all of the predictor variables combined are jointly significant. Significance Testing is fundamental in identifying whether there is a relationship exists between two or more variables in a Psychology Research. The selection of an α-level inevitably involves a compromise between significance and power, and consequently between the Type I error and the Type II error. In order to do this, you have to take lots of steps to make sure you set up good experiments, use good measures, measure the correct variables, etc...and you have to determine if the findings you get occurred because you ran a good study or by some fluke. In terms of α, this statement is equivalent to saying that "assuming the theory is true, the likelihood of obtaining the experimental result by coincidence is 0.27%" (since 1 − erf(3/√2) = 0.0027). It’s hard to say and harder to understand. Critical Regions. For clarity, the above formula is presented in tabular form below. Failing to find evidence that there is a difference does not constitute evidence that there is no difference. The decision is often made using the p-value: if the p-value is less than the significance level, then the null hypothesis is rejected. For example, there may be potential for measurement errors (even your own body temperature can fluctuate by almost 1°F over the course of the day). In such cases, how can we determine whether patterns we see in our small set of data is convincing evidence of a systema… Popular levels of significance are 5%, 1% and 0.1%. In statistics, a result is called statistically significant if it is unlikely to have occurred by chance. You will also want to discuss the implications of your non-significant findings to your area of research. Technically, statistical significance is the probability of some result from a statistical test occurring by chance. 
A number of attempts failed to find empirical evidence supporting the use of significance tests. In these cases p-values are adjusted in order to control either the false discovery rate or the familywise error rate. It is achieved by comparing the probability of which the data has demonstrated its effect due to chance, or due to real connection. If the sample size is large and the noise is low a small effect size can be measured with great confidence. Such results are informally referred to as 'statistically significant'. by Tabitha M. Powledge, Public Library of Science In psychology nonparametric test are more usual than parametric tests. Toward evidence-based medical statistics. In other words, the confidence one has in a given result being non-random (i.e. In some fields, for example nuclear and particle physics, it is common to express statistical significance in units of "σ" (sigma), the standard deviation of a Gaussian distribution. The smaller the p-value, the more significant the result is said to be. Online marketers seek more accurate, proven methods of running online experiments. Statistical significance can be considered to be the confidence one has in a given result. In medicine, small effect sizes (reflected by small increases of risk) are often considered clinically relevant and are frequently used to guide treatment decisions (if there is great confidence in them). A p-value less than 0.05 (typically ≤ 0.05) is statistically significant. Therefore, it doesn't make sense to treat α= 0.05 as a universal rule for what is significant. This is to allow maximum information to be transferred from a summary of the study into meta-analyses. The 'p' value in Significance Testing indicates the probability of which the effect is cause by chance.… In a comparison study, it is dependent on the relative difference between the groups compared, the amount of measurement and the noise associated with the measurement. [2][3] See Bayes factor for details. It’s 50 shades of gray all over again. In this section, you’ll learn about some of the tools that psychologists use in statistical analysis. Unfortunately, this problem is commonly encountered in scientific writing. Clinical Significance Statistical Significance; Definition. Such results are informally referred to as 'statistically significant'. In biomedical research, 96% of a sample of recent papers claim statistically significant results with In order to do this, you have to take lots of steps to make sure you set up good experiments, use good measures, … Probability refers to the likelihood of an event occurring. That they think about is the probability of some result from a statistical test occurring by.! To assist them in analyzing data, and accept the alternative hypothesis suggests we. [ an effect ] ‟, „ borderline significant‟ ) should not used! Formula is presented in tabular form ) where, after rigorous testing, you ’ ll learn about of! „ borderline significant‟ ) should not be used in EPOC reviews statistical test occurring by chance size can considered... The probability of replicating a statistically significant result? oldid=175032 is commonly encountered in scientific writing both though is... Encountered in scientific writing into meta-analyses not just random “ noise. determine how subjects... It has a way of evoking as much emotion with existing theories previous. Random “ noise. to real connection this principle is sometimes described by the maxim Absence of is. 
Statistical significance is one of the most common concepts in quantitative methods and one of the main tools psychologists use in statistical hypothesis testing. A result is called statistically significant when it is judged very unlikely to have occurred by chance alone, given that the null hypothesis is true. In practice, a test statistic is computed and its p-value (the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis holds) is compared with a pre-chosen significance level, usually denoted by the Greek letter α (alpha). The most commonly agreed threshold is α = 0.05; other conventional levels are 0.01 and 0.001. If the p-value is smaller than α, the null hypothesis is rejected in favor of the alternative. The p-value, or an effect estimate with its confidence interval, should be reported whether or not the result is judged significant, and vague labels such as "borderline significant" are best avoided. For a two-sided test it also does not matter which tail the statistic falls in: a test statistic of t = -2.2 leads to the same conclusion as t = 2.2, because both tails of the distribution are counted.

Statistical significance must not be confused with practical importance. Whether an effect is statistically significant depends on the signal-to-noise ratio and on the sample size: with a very large sample, a tiny and practically unimportant difference can come out significant, while with a small sample a real and meaningful effect may fail to reach significance. Judging whether a significant result matters therefore requires looking at the effect size and its confidence interval, not only at the p-value. Two related misconceptions are worth flagging: 1 − p is not the probability that the alternative hypothesis is true, and a non-significant result does not show that there is exactly zero difference between the groups compared ("absence of evidence is not evidence of absence").

A few further points are useful in practice. When many comparisons are carried out, p-values are often adjusted to control either the false discovery rate or the familywise error rate. When the assumptions of parametric tests are doubtful, nonparametric tests are commonly used instead, and marginal or exploratory p-values may still be useful for generating hypotheses provided they are labelled as such. Finally, significance testing is no longer confined to psychology and medicine: digital marketers rely on it in A/B testing, where it informs decisions about conversions, average order value, cart abandonment and other key performance indicators, and the same caution about effect size versus statistical significance applies there.
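As a concrete illustration of the mechanics described above, here is a minimal two-proportion z-test of the kind used to decide whether an A/B test result is statistically significant. All numbers are invented for the example, and note that the resulting p-value says nothing about whether the small lift in conversion rate is practically meaningful.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (A/B test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))               # standard normal CDF
    return z, 2 * (1 - phi)                               # two-sided p-value

# Invented example: 480/10000 conversions in variant A vs 560/10000 in variant B
z, p = two_proportion_z_test(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at alpha = 0.05: {p < 0.05}")
```

The test comes out significant even though the lift is under one percentage point, which is exactly the distinction between statistical and practical significance discussed above.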
https://nyuscholars.nyu.edu/en/publications/sticky-brownian-motion-and-its-numerical-solution | # Sticky Brownian Motion and Its Numerical Solution
Nawaf Bou-Rabee, Miranda C. Holmes-Cerfon
Research output: Contribution to journalArticlepeer-review
## Abstract
Sticky Brownian motion is the simplest example of a diffusion process that can spend finite time both in the interior of a domain and on its boundary. It arises in various applications in fields such as biology, materials science, and finance. This article spotlights the unusual behavior of sticky Brownian motions from the perspective of applied mathematics, and provides tools to efficiently simulate them. We show that a sticky Brownian motion arises naturally for a particle diffusing on R+ with a strong, short-ranged potential energy near the origin. This is a limit that accurately models mesoscale particles, those with diameters ≈ 100 nm–10 μm, which form the building blocks for many common materials. We introduce a simple and intuitive sticky random walk to simulate sticky Brownian motion, which also gives insight into its unusual properties. In parameter regimes of practical interest, we show that this sticky random walk is two to five orders of magnitude faster than alternative methods to simulate a sticky Brownian motion. We outline possible steps to extend this method toward simulating multidimensional sticky diffusions.
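The sticky random walk mentioned in the abstract can be illustrated with a short simulation. The sketch below is only a toy version: the boundary rule and the stay probability `p_stay` are illustrative assumptions, not the calibrated scheme of the paper. It does, however, show the defining feature of a sticky process, namely that the walker spends a finite fraction of its time exactly on the boundary.

```python
import numpy as np

def sticky_random_walk(n_steps, h=0.01, p_stay=0.3, rng=None):
    """Toy random walk on {0, h, 2h, ...} with a 'sticky' origin.

    At interior points the walker moves +/- h with equal probability;
    at the origin it remains stuck with probability p_stay, otherwise it
    steps to h.  (Illustrative only: the calibrated scheme in the paper
    ties this probability to a physical stickiness parameter.)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    x[0] = 0.1                                  # start a little way into the interior
    for k in range(n_steps):
        if x[k] > 0.0:
            step = h * rng.choice((-1.0, 1.0))
            x[k + 1] = max(x[k] + step, 0.0)    # clamp steps below zero onto the boundary
        else:
            x[k + 1] = 0.0 if rng.random() < p_stay else h
    return x

walk = sticky_random_walk(100_000)
print("fraction of time spent exactly on the boundary:",
      round(float(np.mean(walk == 0.0)), 3))
```

Unlike a reflected random walk, the printed boundary fraction stays strictly positive as the step size shrinks, which is the hallmark of stickiness.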
Original language: English (US)
Pages (from-to): 164-195
Number of pages: 32
Journal: SIAM Review
Volume: 62
Issue number: 1
DOI: https://doi.org/10.1137/19M1268446
State: Published - 2020
## Keywords
• Feller boundary condition
• Finite difference methods
• Fokker–Planck equation
• Generalized Wentzell boundary condition
• Kolmogorov equation
• Markov chain approximation method
• Markov jump process
• Sticky Brownian motion
• Sticky random walk
## ASJC Scopus subject areas
• Theoretical Computer Science
• Computational Mathematics
• Applied Mathematics | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.869292140007019, "perplexity": 1730.4203480763936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039491784.79/warc/CC-MAIN-20210420214346-20210421004346-00017.warc.gz"} |
https://labs.tib.eu/arxiv/?author=V.%20Cirigliano | • ### Constraining the top-Higgs sector of the Standard Model Effective Field Theory(1605.04311)
Dec. 5, 2018 hep-ph
Working in the framework of the Standard Model Effective Field Theory, we study chirality-flipping couplings of the top quark to Higgs and gauge bosons. We discuss in detail the renormalization group evolution to lower energies and investigate direct and indirect contributions to high- and low-energy CP-conserving and CP-violating observables. Our analysis includes constraints from collider observables, precision electroweak tests, flavor physics, and electric dipole moments. We find that indirect probes are competitive or dominant for both CP-even and CP-odd observables, even after accounting for uncertainties associated with hadronic and nuclear matrix elements, illustrating the importance of including operator mixing in constraining the Standard Model Effective Field Theory. We also study scenarios where multiple anomalous top couplings are generated at the high scale, showing that while the bounds on individual couplings relax, strong correlations among couplings survive. Finally, we find that enforcing minimal flavor violation does not significantly affect the bounds on the top couplings.
• ### A new leading contribution to neutrinoless double-beta decay(1802.10097)
Feb. 27, 2018 hep-ph, nucl-th, hep-lat
Within the framework of chiral effective field theory we discuss the leading contributions to the neutrinoless double-beta decay transition operator induced by light Majorana neutrinos. Based on renormalization arguments in both dimensional regularization with minimal subtraction and a coordinate-space cutoff scheme, we show the need to introduce a leading-order short-range operator, missing in all current calculations. We discuss strategies to determine the finite part of the short-range coupling by matching to lattice QCD or by relating it via chiral symmetry to isospin-breaking observables in the two-nucleon sector. Finally, we speculate on the impact of this new contribution on nuclear matrix elements of relevance to experiment.
• ### Interpreting top-quark LHC measurements in the standard-model effective field theory(1802.07237)
Feb. 20, 2018 hep-ph, hep-ex
This note proposes common standards and prescriptions for the effective-field-theory interpretation of top-quark measurements at the LHC.
• ### Neutrinoless double beta decay in chiral effective field theory: lepton number violation at dimension seven(1708.09390)
Dec. 27, 2017 hep-ph, nucl-th
We analyze neutrinoless double beta decay ($0\nu\beta\beta$) within the framework of the Standard Model Effective Field Theory. Apart from the dimension-five Weinberg operator, the first contributions appear at dimension seven. We classify the operators and evolve them to the electroweak scale, where we match them to effective dimension-six, -seven, and -nine operators. In the next step, after renormalization group evolution to the QCD scale, we construct the chiral Lagrangian arising from these operators. We develop a power-counting scheme and derive the two-nucleon $0\nu\beta\beta$ currents up to leading order in the power counting for each lepton-number-violating operator. We argue that the leading-order contribution to the decay rate depends on a relatively small number of nuclear matrix elements. We test our power counting by comparing nuclear matrix elements obtained by various methods and by different groups. We find that the power counting works well for nuclear matrix elements calculated from a specific method, while, as in the case of light Majorana neutrino exchange, the overall magnitude of the matrix elements can differ by factors of two to three between methods. We calculate the constraints that can be set on dimension-seven lepton-number-violating operators from $0\nu\beta\beta$ experiments and study the interplay between dimension-five and -seven operators, discussing how dimension-seven contributions affect the interpretation of $0\nu\beta\beta$ in terms of the effective Majorana mass $m_{\beta \beta}$.
• ### Neutrinoless double beta decay matrix elements in light nuclei(1710.05026)
Oct. 13, 2017 hep-ph, nucl-th, hep-lat
We present the first ab initio calculations of neutrinoless double beta decay matrix elements in $A=6$-$12$ nuclei using Variational Monte Carlo wave functions obtained from the Argonne $v_{18}$ two-nucleon potential and Illinois-7 three-nucleon interaction. We study both light Majorana neutrino exchange and potentials arising from a large class of multi-TeV mechanisms of lepton number violation. Our results provide benchmarks to be used in testing many-body methods that can be extended to the heavy nuclei of experimental interest. In light nuclei we have also studied the impact of two-body short range correlations and the use of different forms for the transition operators, such as those corresponding to different orders in chiral effective theory.
• ### Neutrinoless double beta decay in effective field theory: the light Majorana neutrino exchange mechanism(1710.01729)
May 21, 2019 hep-ph, nucl-th, hep-lat
We present the first chiral effective theory derivation of the neutrinoless double beta-decay $nn\rightarrow pp$ potential induced by light Majorana neutrino exchange. The effective-field-theory framework has allowed us to identify and parameterize short- and long-range contributions previously missed in the literature. These contributions can not be absorbed into parameterizations of the single nucleon form factors. Starting from the quark and gluon level, we perform the matching onto chiral effective field theory and subsequently onto the nuclear potential. To derive the nuclear potential mediating neutrinoless double beta-decay, the hard, soft and potential neutrino modes must be integrated out. This is performed through next-to-next-to-leading order in the chiral power counting, in both the Weinberg and pionless schemes. At next-to-next-to-leading order, the amplitude receives additional contributions from the exchange of ultrasoft neutrinos, which can be expressed in terms of nuclear matrix elements of the weak current and excitation energies of the intermediate nucleus. These quantities also control the two-neutrino double beta-decay amplitude. Finally, we outline strategies to determine the low-energy constants that appear in the potentials, by relating them to electromagnetic couplings and/or by matching to lattice QCD calculations.
• ### Right-handed charged currents in the era of the Large Hadron Collider(1703.04751)
May 22, 2017 hep-ph
We discuss the phenomenology of right-handed charged currents in the framework of the Standard Model Effective Field Theory, in which they arise due to a single gauge-invariant dimension-six operator. We study the manifestations of the nine complex couplings of the $W$ to right-handed quarks in collider physics, flavor physics, and low-energy precision measurements. We first obtain constraints on the couplings under the assumption that the right-handed operator is the dominant correction to the Standard Model at observable energies. We subsequently study the impact of degeneracies with other Beyond-the-Standard-Model effective interactions and identify observables, both at colliders and low-energy experiments, that would uniquely point to right-handed charged currents.
• ### Neutrinoless double beta decay and chiral $SU(3)$(1701.01443)
Jan. 5, 2017 hep-ph, nucl-th, hep-lat
TeV-scale lepton number violation can affect neutrinoless double beta decay through dimension-9 $\Delta L= \Delta I = 2$ operators involving two electrons and four quarks. Since the dominant effects within a nucleus are expected to arise from pion exchange, the $\pi^- \to \pi^+ e e$ matrix elements of the dimension-9 operators are a key hadronic input. In this letter we provide estimates for the $\pi^- \to \pi^+$ matrix elements of all Lorentz scalar $\Delta I = 2$ four-quark operators relevant to the study of TeV-scale lepton number violation. The analysis is based on chiral $SU(3)$ symmetry, which relates the $\pi^- \to \pi^+$ matrix elements of the $\Delta I = 2$ operators to the $K^0 \to \bar{K}^0$ and $K \to \pi \pi$ matrix elements of their $\Delta S = 2$ and $\Delta S = 1$ chiral partners, for which lattice QCD input is available. The inclusion of next-to-leading order chiral loop corrections to all symmetry relations used in the analysis makes our results robust at the $30\%$ level or better, depending on the operator.
• ### An $\epsilon'$ improvement from right-handed currents(1612.03914)
Dec. 12, 2016 hep-ph, nucl-th, hep-lat
Recent lattice QCD calculations of direct CP violation in $K_L \to \pi \pi$ decays indicate tension with the experimental results. Assuming this tension to be real, we investigate a possible beyond-the-Standard Model explanation via right-handed charged currents. By using chiral perturbation theory in combination with lattice QCD results, we accurately calculate the modification of $\epsilon'/\epsilon$ induced by right-handed charged currents and extract values of the couplings that are necessary to explain the discrepancy, pointing to a scale around $10^2$ TeV. We find that couplings of this size are not in conflict with constraints from other precision experiments, but next-generation hadronic electric dipole moment searches (such as neutron and ${}^{225}$Ra) can falsify this scenario. We work out in detail a direct link, based on chiral perturbation theory, between CP violation in the kaon sector and electric dipole moments induced by right-handed currents which can be used in future analyses of left-right symmetric models.
• ### Is there room for CP violation in the top-Higgs sector?(1603.03049)
Aug. 2, 2016 hep-ph
We discuss direct and indirect probes of chirality-flipping couplings of the top quark to Higgs and gauge bosons, considering both CP-conserving and CP-violating observables, in the framework of the Standard Model effective field theory. In our analysis we include current and prospective constraints from collider physics, precision electroweak tests, flavor physics, and electric dipole moments (EDMs). We find that low-energy indirect probes are very competitive, even after accounting for long-distance uncertainties. In particular, EDMs put constraints on the electroweak CP-violating dipole moments of the top that are two to three orders of magnitude stronger than existing limits. The new indirect constraint on the top EDM is given by $|d_t| < 5 \cdot 10^{-20}$ e cm at $90\%$ C.L.
• ### Dimension-5 CP-odd operators: QCD mixing and renormalization(1502.07325)
Aug. 1, 2016 hep-ph, nucl-th, hep-lat
We study the off-shell mixing and renormalization of flavor-diagonal dimension-5 T- and P-odd operators involving quarks, gluons, and photons, including quark electric dipole and chromo-electric dipole operators. We present the renormalization matrix to one-loop in the $\bar{\rm MS}$ scheme. We also provide a definition of the quark chromo-electric dipole operator in a regularization-independent momentum-subtraction scheme suitable for non-perturbative lattice calculations and present the matching coefficients with the $\bar{\rm MS}$ scheme to one-loop in perturbation theory, using both the naive dimensional regularization and 't Hooft-Veltman prescriptions for $\gamma_5$.
• ### Direct and indirect constraints on CP-violating Higgs-quark and Higgs-gluon interactions(1510.00725)
Jan. 4, 2016 hep-ph, nucl-ex, nucl-th, hep-lat
We investigate direct and indirect constraints on the complete set of anomalous CP-violating Higgs couplings to quarks and gluons originating from dimension-6 operators, by studying their signatures at the LHC and in electric dipole moments (EDMs). We show that existing uncertainties in hadronic and nuclear matrix elements have a significant impact on the interpretation of EDM experiments, and we quantify the improvements needed to fully exploit the power of EDM searches. Currently, the best bounds on the anomalous CP-violating Higgs interactions come from a combination of EDM measurements and the data from LHC Run 1. We argue that Higgs production cross section and branching ratios measurements at the LHC Run 2 will not improve the constraints significantly. On the other hand, the bounds on the couplings scale roughly linearly with EDM limits, so that future theoretical and experimental EDM developments can have a major impact in pinning down interactions of the Higgs.
• ### Report of the Quark Flavor Physics Working Group(1311.1076)
Dec. 9, 2013 hep-ph, hep-ex, hep-lat
This report represents the response of the Intensity Frontier Quark Flavor Physics Working Group to the Snowmass charge. We summarize the current status of quark flavor physics and identify many exciting future opportunities for studying the properties of strange, charm, and bottom quarks. The ability of these studies to reveal the effects of new physics at high mass scales make them an essential ingredient in a well-balanced experimental particle physics program.
• ### Charged Leptons(1311.5278)
Nov. 24, 2013 hep-ph, hep-ex
This is the report of the Intensity Frontier Charged Lepton Working Group of the 2013 Community Summer Study "Snowmass on the Mississippi", summarizing the current status and future experimental opportunities in muon and tau lepton studies and their sensitivity to new physics. These include searches for charged lepton flavor violation, measurements of magnetic and electric dipole moments, and precision measurements of the decay spectrum and parity-violating asymmetries.
• ### Discovering the New Standard Model: Fundamental Symmetries and Neutrinos(1212.5190)
Dec. 20, 2012 hep-ph, hep-ex, nucl-ex, nucl-th
This White Paper describes recent progress and future opportunities in the area of fundamental symmetries and neutrinos.
• ### Kaon Decays in the Standard Model(1107.6001)
April 14, 2012 hep-ph, hep-ex
A comprehensive overview of kaon decays is presented. The Standard Model predictions are discussed in detail, covering both the underlying short-distance electroweak dynamics and the important interplay of QCD at long distances. Chiral perturbation theory provides a universal framework for treating leptonic, semileptonic and nonleptonic decays including rare and radiative modes. All allowed decay modes with branching ratios of at least 10^(-11) are analyzed. Some decays with even smaller rates are also included. Decays that are strictly forbidden in the Standard Model are not considered in this review. The present experimental status and the prospects for future improvements are reviewed.
• ### An evaluation of |Vus| and precise tests of the Standard Model from world data on leptonic and semileptonic kaon decays(1005.2323)
July 18, 2010 hep-ph, hep-ex
We present a global analysis of leptonic and semileptonic kaon decay data, including all recent results published by the BNL-E865, KLOE, KTeV, ISTRA+ and NA48 experiments. This analysis, in conjunction with precise lattice calculations of the hadronic matrix elements now available, leads to a very precise determination of |Vus| and allows us to perform several stringent tests of the Standard Model.
• One of the major challenges of particle physics has been to gain an in-depth understanding of the role of quark flavor, and measurements and theoretical interpretations of their results have advanced tremendously: apart from masses and quantum numbers of flavor particles, there now exist detailed measurements of the characteristics of their interactions allowing stringent tests of Standard Model predictions. Among the most interesting phenomena of flavor physics is the violation of the CP symmetry that has been subtle and difficult to explore. Until the early 1990s observations of CP violation were confined to neutral $K$ mesons, but since then a large number of CP-violating processes have been studied in detail in neutral $B$ mesons. In parallel, measurements of the couplings of the heavy quarks and the dynamics for their decays in large samples of $K, D$, and $B$ mesons have been greatly improved in accuracy and the results are being used as probes in the search for deviations from the Standard Model. In the near future, there will be a transition from the current to a new generation of experiments, thus a review of the status of quark flavor physics is timely. This report summarizes the results of the current generation of experiments that is about to be completed, and it confronts these results with the theoretical understanding of the field.
• ### Reanalysis of pion pion phase shifts from K -> pi pi decays(0907.1451)
July 9, 2009 hep-ph
We re-investigate the impact of isospin violation for extracting the s-wave pion pion scattering phase shift difference delta_0(M_K) - delta_2(M_K) from K -> pi pi decays. Compared to our previous analysis in 2003, more precise experimental data and improved knowledge of low-energy constants are used. In addition, we employ a more robust data-driven method to obtain the phase shift difference delta_0(M_K) - delta_2(M_K) = (52.5 \pm 0.8_{exp} \pm 2.8_{theor}) degrees.
• ### pi pi Phase shifts from K to 2 pi(0807.5128)
July 31, 2008 hep-ph
We update the numerical results for the s-wave pi pi scattering phase-shift difference delta_0^0 - delta_0^2 at s = m_K^2 from a previous study of isospin breaking in K to 2 pi amplitudes in chiral perturbation theory. We include recent data for the K_S to pi pi and K^+ to pi^+ pi^0 decay widths and include experimental correlations.
• ### Precision tests of the Standard Model with leptonic and semileptonic kaon decays(0801.1817)
Jan. 11, 2008 hep-ph
We present a global analysis of leptonic and semileptonic kaon decays data, including all recent results by BNL-E865, KLOE, KTeV, ISTRA+, and NA48. Experimental results are critically reviewed and combined, taking into account theoretical (both analytical and numerical) constraints on the semileptonic kaon form factors. This analysis leads to a very accurate determination of Vus and allows us to perform several stringent tests of the Standard Model.
• ### Towards a consistent estimate of the chiral low-energy constants(hep-ph/0603205)
July 17, 2006 hep-ph
Guided by the large-Nc limit of QCD, we construct the most general chiral resonance Lagrangian that can generate chiral low-energy constants up to O(p^6). By integrating out the resonance fields, the low-energy constants are parametrized in terms of resonance masses and couplings. Information on those couplings and on the low-energy constants can be extracted by analysing QCD Green functions of currents both for large and small momenta. The chiral resonance theory generates Green functions that interpolate between QCD and chiral perturbation theory. As specific examples we consider the VAP and SPP Green functions.
• ### Magnetic Moments of Dirac Neutrinos(hep-ph/0601005)
Jan. 1, 2006 hep-ph
The existence of a neutrino magnetic moment implies contributions to the neutrino mass via radiative corrections. We derive model-independent "naturalness" upper bounds on the magnetic moments of Dirac neutrinos, generated by physics above the electroweak scale. The neutrino mass receives a contribution from higher order operators, which are renormalized by operators responsible for the neutrino magnetic moment. This contribution can be calculated in a model independent way. In the absence of fine-tuning, we find that current neutrino mass limits imply that $|\mu_\nu| < 10^{-14}$ Bohr magnetons. This bound is several orders of magnitude stronger than those obtained from solar and reactor neutrino data and astrophysical observations.
• ### Status of the Cabibbo Angle(hep-ph/0512039)
Dec. 2, 2005 hep-ph
We review the recent experimental and theoretical progress in the determination of |V_{ud}| and |V_{us}|, and the status of the most stringent test of CKM unitarity. Future prospects on |V_{cd}| and |V_{cs}| are also briefly discussed.
• ### The < S P P > Green function and SU(3) breaking in K_{l3} decays(hep-ph/0503108)
April 28, 2005 hep-ph
Using the 1/N_C expansion scheme and truncating the hadronic spectrum to the lowest-lying resonances, we match a meromorphic approximation to the < S P P > Green function onto QCD by imposing the correct large-momentum falloff, both off-shell and on the relevant hadron mass shells. In this way we determine a number of chiral low-energy constants of O(p^6), in particular the ones governing SU(3) breaking in the K_{l3} vector form factor at zero momentum transfer. The main result of our matching procedure is that the known loop contributions largely dominate the corrections of O(p^6) to f_{+}(0). We discuss the implications of our final value f_{+}^{K^0 \pi^-}(0)=0.984 \pm 0.012 for the extraction of V_{us} from K_{l3} decays. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.930182158946991, "perplexity": 1417.7805341744117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141205147.57/warc/CC-MAIN-20201130035203-20201130065203-00210.warc.gz"} |
http://autocad2012.mufasu.com/2011/11/used-commands-in-autocad-2012.html | ## Used Commands in AutoCAD 2012
By now you should have looked over the basics of the interface and, ideally, have a good understanding of what you see on the AutoCAD screen. We are ready to try out the basic commands and, finally, to do some drawing. So how do we communicate with AutoCAD and tell it what we want? There are four methods, listed below more or less in the order in which they appeared over the years. There are also two older methods, called the tablet and the screen menu, which are not covered here.
1. Enter the commands in the command line.
2. Select the commands from the drop-down menus.
3. Use the icons on the toolbars to activate the commands.
4. Use the ribbon, menus and icons.
Most commands can be issued in all four ways, and you can experiment with each method to see which you prefer. Over time you will settle into a particular way of working with AutoCAD, or a combination of several.
Enter the commands in the command line
This was the original method of interacting with AutoCAD and it remains a reliable way to enter a command: typing, the old-school way. AutoCAD has kept this method fully functional even as most interaction has moved to graphical icons, toolbars, and the ribbon. If you do not like to type, though, this is probably not your preferred option.
To use this method, simply type the desired command (watch the spelling!) at the command line and press Enter. The command sequence starts and away you go. This method is still the preference of many "legacy" users (a nice way of saying they have been using AutoCAD forever).
Select the commands from the drop-down menus
This method has also been present from the beginning. The drop-down menus provide access to almost all AutoCAD commands, and in fact many students begin by going through each menu as a crash course on what is available, a fun and very effective way to learn AutoCAD.
Use the icons on the toolbars to activate the commands
This method appeared when AutoCAD moved from DOS to Windows in the 1990s and is a favorite of that generation of users; toolbars were a familiar sight in almost all software at the time. Toolbars contain sets of icons organized into categories (the Draw toolbar, the Modify toolbar, and so on). Click on the desired icon and the command is issued. A disadvantage of toolbars is that they take up a lot of screen space and are not the most efficient way to organize commands on the screen.
Use the ribbon, menus and icons. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8857132792472839, "perplexity": 1155.7634172333449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802777438.76/warc/CC-MAIN-20141217075257-00061-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://tailieu.vn/doc/adaptive-wcdma-p8--256828.html | Chia sẻ: Khinh Kha Kha | Ngày: | Loại File: PDF | Số trang:0
0
43
lượt xem
6
Description
CDMA network. In this chapter, we initiate discussion on CDMA network capacity. The issue will be revisited again later in Chapter 13 to include additional parameters in a more comprehensive way. 8.1 CDMA NETWORK CAPACITY: For an initial estimation of CDMA network capacity, we start with a simple example of a single-cell network with n users and signal parameters defined as in the list above. If αi is the power ratio of user i and the reference user with index 0, and Ni is the interference power density produced by user i, defined as αi = Pi/P0, i = 1, ...
https://science.sciencemag.org/content/334/6061/1385?ijkey=e8aaedd3b29e88d4bd4555d252c72db29bec70cb&keytype2=tf_ipsecsha | Report
# Climate Sensitivity Estimated from Temperature Reconstructions of the Last Glacial Maximum
Science 09 Dec 2011:
Vol. 334, Issue 6061, pp. 1385-1388
DOI: 10.1126/science.1203513
## Abstract
Assessing the impact of future anthropogenic carbon emissions is currently impeded by uncertainties in our knowledge of equilibrium climate sensitivity to atmospheric carbon dioxide doubling. Previous studies suggest 3 kelvin (K) as the best estimate, 2 to 4.5 K as the 66% probability range, and nonzero probabilities for much higher values, the latter implying a small chance of high-impact climate changes that would be difficult to avoid. Here, combining extensive sea and land surface temperature reconstructions from the Last Glacial Maximum with climate model simulations, we estimate a lower median (2.3 K) and reduced uncertainty (1.7 to 2.6 K as the 66% probability range, which can be widened using alternate assumptions or data subsets). Assuming that paleoclimatic constraints apply to the future, as predicted by our model, these results imply a lower probability of imminent extreme climatic change than previously thought.
Climate sensitivity is the change in global mean near-surface air temperature ΔSAT caused by an arbitrary perturbation ΔF (radiative forcing) of Earth’s radiative balance at the top of the atmosphere with respect to a given reference state. The equilibrium climate sensitivity for a doubling of atmospheric carbon dioxide (CO2) concentrations (ECS2xC) from preindustrial times has been established as a well-defined standard measure (1). Moreover, because transient (disequilibrium) climate change and impacts on ecological and social systems typically scale with ECS2xC, it is a useful and important diagnostic in climate modeling (1). Initial estimates of ECS2xC = 3 ± 1.5 K suggested a large uncertainty (2), which has not been reduced in the past 32 years despite considerable efforts (1–10). On the contrary, many recent studies suggest a small possibility of very high (up to 10 K and higher) values for ECS2xC (3–10), implying extreme climate changes in the near future, which would be difficult to avoid. Efforts to use observations from the past 150 years to constrain the upper end of ECS2xC have met with limited success, largely because of uncertainties associated with aerosol forcing and ocean heat uptake (8, 9). Data from the Last Glacial Maximum (LGM), 19,000 to 23,000 years ago, are particularly useful to estimate ECS2xC because large differences from preindustrial climate and much lower atmospheric CO2 concentrations [185 parts per million (ppm) versus 280 ppm preindustrial] provide a favorable signal-to-noise ratio, both radiative forcings and surface temperatures are relatively well constrained from extensive paleoclimate reconstructions and modeling (11–13), and climate during the LGM was close to equilibrium, avoiding uncertainties associated with transient ocean heat uptake.
Here, we combine a climate model of intermediate complexity with syntheses of temperature reconstructions from the LGM to estimate ECS2xC using a Bayesian statistical approach. LGM, CO2 doubling, and preindustrial control simulations are integrated for 2000 years using an ensemble of 47 versions of the University of Victoria (UVic) climate model (14) with different climate sensitivities ranging from ECS2xC = 0.3 to 8.3 K considering uncertainties in water vapor, lapse rate and cloud feedback on the outgoing infrared radiation (fig. S1), as well as uncertainties in dust forcing and wind-stress response. The LGM simulations include larger continental ice sheets, lower greenhouse gas concentrations, higher atmospheric dust levels (fig. S2), and changes in the seasonal distribution of solar radiation [see supporting online material (SOM)]. We combine recent syntheses of global sea surface temperatures (SSTs) from the Multiproxy Approach for the Reconstruction of the Glacial Ocean (MARGO) project (12) and SATs over land, based on pollen evidence (13) with additional data from ice sheets and land and ocean temperatures (figs. S3 and S4; all reconstructions include error estimates). The combined data set covers over 26% of Earth’s surface (Fig. 1A).
Figure 2 compares reconstructed zonally averaged surface temperatures with model results. Models with ECS2xC < 1.3 K underestimate the cooling at the LGM almost everywhere, particularly at mid-latitudes and over Antarctica, whereas models with ECS2xC > 4.5 K overestimate the cooling almost everywhere, particularly at low latitudes. High-sensitivity models (ECS2xC > 6.3 K) show a runaway effect resulting in a completely ice-covered planet. Once snow and ice cover reach a critical latitude, the positive ice-albedo feedback is larger than the negative feedback because of reduced longwave radiation (Planck feedback), triggering an irreversible transition (fig. S5) (15). During the LGM, Earth was covered by more ice and snow than it is today, but continental ice sheets did not extend equatorward of ~40°N/S, and the tropics and subtropics were ice free except at high altitudes. Our model thus suggests that large climate sensitivities (ECS2xC > 6 K) cannot be reconciled with paleoclimatic and geologic evidence and hence should be assigned near-zero probability.
Posterior probability density functions (PDFs) of the climate sensitivity are calculated by Bayesian inference, using the likelihood of the observations ΔTobs given the model ΔTmod(ECS2xC) at the locations of the observations. The two are assumed to be related by an error term ε; ΔTobs = ΔTmod(ECS2xC) + ε, which represents errors in both the model (endogenously estimated separately for land and ocean; fig. S13) and the observations (fig. S3), including spatial autocorrelation. A Gaussian likelihood function and an autocorrelation length scale of λ = 2000 km are assumed. The choice of the autocorrelation length scale is motivated by the model resolution and by residual analysis. (See sections 5 and 6 in the SOM for a full description of the statistical method, assumptions, and sensitivity tests.)
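As a rough illustration of this kind of Bayesian update, the following sketch computes a posterior over a grid of ECS2xC values from a Gaussian likelihood with spatially correlated errors. All numbers and the forward model are invented stand-ins for the UVic ensemble and the temperature reconstructions; only the structure (exponential spatial correlation with length scale λ = 2000 km, a flat prior on the grid, and a normalized posterior) follows the description above.

```python
import numpy as np

# Invented stand-ins: locations along a transect (km), reconstructed LGM
# cooling (K) with 1-sigma errors, and a toy forward model whose predicted
# cooling scales linearly with climate sensitivity.
x_km   = np.array([0.0, 1500.0, 3000.0, 6000.0, 9000.0])
dT_obs = np.array([-2.0, -2.5, -1.8, -3.0, -2.2])
sigma  = np.array([0.8, 0.7, 0.9, 1.0, 0.8])

def dT_model(ecs):
    """Toy forward model (illustrative only): cooling proportional to ECS."""
    return -0.9 * ecs * np.ones_like(dT_obs)

# Error covariance: pointwise errors combined with an exponential spatial
# correlation of length scale lambda = 2000 km, as assumed in the text.
lam = 2000.0
corr = np.exp(-np.abs(x_km[:, None] - x_km[None, :]) / lam)
cov_inv = np.linalg.inv(np.outer(sigma, sigma) * corr)

ecs_grid = np.linspace(0.3, 8.3, 200)        # same range as the model ensemble
log_like = np.array([
    -0.5 * (dT_obs - dT_model(e)) @ cov_inv @ (dT_obs - dT_model(e))
    for e in ecs_grid
])
posterior = np.exp(log_like - log_like.max())      # flat prior on the grid
d_ecs = ecs_grid[1] - ecs_grid[0]
posterior /= posterior.sum() * d_ecs               # normalize to a PDF

median = ecs_grid[np.searchsorted(np.cumsum(posterior) * d_ecs, 0.5)]
print(f"posterior median ECS2xC ~ {median:.2f} K")
```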
The resulting PDF (Fig. 3), considering both land and ocean reconstructions, is multimodal and displays a broad maximum with a double peak between 2 and 2.6 K, smaller local maxima around 2.8 and 1.3 K, and vanishing probabilities below 1 K and above 3.2 K. The distribution has its mean and median at 2.2 and 2.3 K, respectively, and its 66 and 90% cumulative probability intervals are 1.7 to 2.6 K and 1.4 to 2.8 K, respectively. Using only ocean data, the PDF changes little, shifting toward slightly lower values (mean 2.1 K, median 2.2 K, 66% 1.5 to 2.5 K, and 90% 1.3 to 2.7 K), whereas using only land data leads to a much larger shift toward higher values (mean and median 3.4 K, 66% 2.8 to 4.1 K, and 90% 2.2 to 4.6 K).
The best-fitting model (ECS2xC = 2.4 K) reproduces well the reconstructed global mean cooling of 2.2 K (within two significant digits), as well as much of the meridional pattern of the zonally averaged temperature anomalies (correlation coefficient r = 0.8) (Fig. 2). Substantial discrepancies occur over Antarctica, where the model underestimates the observed cooling by almost 4 K, and between 45° to 50° in both hemispheres, where the model is also too warm. Simulated temperature changes over Antarctica show considerable spatial variations (Fig. 1B), with larger cooling of more than 7 K over the West Antarctic Ice Sheet. The observations are located along a strong meridional gradient (fig. S7). Zonally averaged cooling of air temperatures is about 7 K at 80°S, more consistent with the reconstructions than the simulated temperature change at the locations of the observations. Underestimated ice sheet height at these locations could be a reason for the bias (16), as could deficiencies of the simple energy balance atmospheric model component. Underestimated cooling at mid-latitudes for the best fitting model also points to systematic model problems, such as the neglect of wind or cloud changes.
Two-dimensional features in the reconstructions are less well reproduced by the model (r = 0.5) (Fig. 1). Large-scale patterns that are qualitatively captured (Fig. 1) are stronger cooling over land than over the oceans, and more cooling at mid- to high latitudes (except for sea ice covered oceans), which is contrasted by less cooling in the central Pacific and over the Southern Hemisphere subtropical oceans. Continental cooling north of 40°N of 7.7 K predicted by the best-fitting model is consistent with the independent estimate of 8.3 ± 1 K from inverse ice-sheet modeling (17).
Generally, the model solution is much smoother than the reconstructions, partly because of the simple diffusive energy balance atmospheric model component. The model does not simulate warmer surface temperatures anywhere, whereas the reconstructions show warming in the centers of the subtropical gyres in parts of the northwest Pacific, Atlantic, and Alaska. It systematically underestimates cooling over land and overestimates cooling of the ocean (fig. S8). The land-sea contrast, which is governed by less availability of water for evaporative cooling over land compared with the ocean (18), is therefore underestimated, consistent with the tension between the ECS2xC inferred from ocean-only versus land-only data (Fig. 3). A possible reason for this bias could be overestimated sea-to-land water vapor transport in the LGM model simulations, perhaps due to too high moisture diffusivities. Other model simplifications, such as neglecting changes in wind velocities and clouds or errors in surface albedo changes in the dynamic vegetation model component, could also contribute to the discrepancies. The ratio between low latitude (40° S to 40° N) land and sea temperature change in the best-fitting model is 1.2, which is lower than the modern ratio of 1.5 found in observations and modeling studies (19).
Despite these shortcomings, the best-fitting model is within the 1σ-error interval of the reconstructed temperature change in three quarters (75%, mostly over the oceans) of the global surface area covered by reconstructions (fig. S8). The model provides data-constrained estimates of global mean (including grid points not covered by data) cooling of near-surface air temperatures ΔSATLGM = –3.0 K [66% probability range (–2.1, –3.3), 90% (–1.7, –3.7)] and sea surface temperatures ΔSSTLGM = –1.7 K [66% (–1.1, –1.8), 90% (–0.9, –2.1)] during the LGM (including an increase of marine sea and air temperatures of 0.3 and 0.47 K, respectively, due to 120-m sea-level lowering; otherwise, ΔSATLGM = –3.3 K and ΔSSTLGM = –2.0 K).
The ranges of 66 and 90% cumulative probability intervals, as well as the mean and median ECS2xC values, from our study are considerably lower than previous estimates. The most recent assessment report from the Intergovernmental Panel on Climate Change (6), for example, used a most likely value of 3.0 K and a likely range (66% probability) of 2 to 4.5 K, which was supported by other recent studies (1, 20–23).
We propose three possible reasons that our study yields lower estimates of ECS2xC than previous work that also used LGM data. First, the new reconstructions of LGM surface temperatures show less cooling than previous studies. Our best estimates for global mean (including grid points not covered by data) SAT and SST changes reported above are 30 to 40% smaller than previous estimates (21, 23). This is consistent with less cooling of tropical SSTs (–1.5 K, 30°S to 30°N) in the new reconstruction (12) compared with previous data sets (–2.7 K) (24). Tropical Atlantic SSTs between 20°S and 20°N are estimated to be only 2.4 K colder during the LGM in the new reconstruction compared with 3 K used in (23), explaining part of the difference between their higher estimates of ECS2xC and ΔSATLGM (–5.8 K).
The second reason is limited spatial data coverage. A sensitivity test excluding data from the North Atlantic leads to more than 0.5 K lower ECS2xC estimates (SOM section 7 and fig. S14 and S15). This shows that systematic biases can result from ignoring data outside selected regions, as done in previous studies (22, 23), and implies that global data coverage is important for estimating ECS2xC. Averaging over all grid points in our model leads to a higher global mean temperature (SST over ocean, SAT over land) change (–2.6 K) than using only grid points where paleoclimate data are available (–2.2 K), suggesting that the existing data set is still spatially biased toward low latitudes and/or oceans. Increased spatial coverage of climate reconstructions is therefore necessary to improve ECS2xC estimates.
A third reason may be the neglect of dust radiative forcing in some previous LGM studies (21) despite ample evidence from the paleoenvironmental record that dust levels were much higher (25, 26). Sensitivity tests (Fig. 3) (SOM section 7) show that dust forcing decreases the median ECS2xC by about 0.3 K.
Our estimated ECS2xC uncertainty interval is rather narrow, <1.5 K for the 90% probability range, with most (~75%) of the probability mass between 2 and 3 K, which arises mostly from the SST constraint. This sharpness may imply that LGM SSTs are a strong physical constraint on ECS2xC. However, it could also be attributable to overconfidence arising from physical uncertainties not considered here, or from misspecification of the statistical model.
To explore this, we conduct sensitivity experiments that perturb various physical and statistical assumptions (Fig. 3 and figs. S14 and S15). The experiments collectively favor sensitivities between 1 and 3 K. However, we cannot exclude the possibility that the analysis is sensitive to uncertainties or statistical assumptions not considered here, and the underestimated land/sea contrast in the model, which leads to the difference between land- and ocean-based estimates of ECS2xC, remains an important caveat.
Our uncertainty analysis is not complete and does not explicitly consider uncertainties in radiative forcing due to ice-sheet extent or different vegetation distributions. Our limited model ensemble does not scan the full parameter range, neglecting, for example, possible variations in shortwave radiation due to clouds. Nonlinear cloud feedback in different complex models make the relation between LGM and CO2 doubling–derived climate sensitivity more ambiguous than apparent in our simplified model ensemble (27). More work, in which these and other uncertainties are considered, will be required for a more complete assessment.
In summary, using a spatially extensive network of paleoclimate observations in combination with a climate model, we find that climate sensitivities larger than 6 K are implausible, and that both the most likely value and the uncertainty range are smaller than previously thought. This demonstrates that paleoclimate data provide efficient constraints to reduce the uncertainty of future climate projections.
## Supporting Online Material
www.sciencemag.org/cgi/content/full/science.1203513/DC1
Materials and Methods
SOM Text
Figs. S1 to S16
Table S1
References (28–103)
## References and Notes
1. Acknowledgments: This work was supported by the Paleoclimate Program of the National Science Foundation through project PALEOVAR (06023950-ATM). Thanks to S. Harrison and two anonymous reviewers for thoughtful and constructive comments that led to substantial improvements in the paper.
https://www.physicsforums.com/threads/extended-or-point-source.331664/ | # Extended or point source?
1. Aug 20, 2009
### mordeth
If you were using a single radio telescope (not an interferometer) how could you tell whether a radio source was a point source or extended source?
I have searched the internet far and wide for many hours trying to answer this - I know what the difference between a point source and extended source is, but I'm not sure whether the question is asking for a simple or technical explanation. Is it something to do with the radio telescope data or how the radiation from the source is spread out on the dish when it's received?
Last edited: Aug 20, 2009
2. Aug 20, 2009
### jimgraber
I assume you mean the usual in-effect single-pixel radio telescope at a single point in time. Then it is pretty much impossible to tell the difference, I think. The usual method then is to look at different points at different times, i.e. scan out the image over time. You can then detect its size, assuming it's pretty much a constant source and really bigger than one pixel.
Best,
Jim Graber
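A small sketch (with invented numbers) makes the scanning argument concrete: convolving a source profile with a Gaussian beam shows that a point source simply reproduces the beam width in a drift scan, while an extended source comes out measurably broader.

```python
import numpy as np

# Toy 1-D drift scan: sweep a Gaussian telescope beam across the sky and
# compare the measured width with the beam width.  An unresolved (point)
# source reproduces the beam; an extended source is broader, roughly in
# quadrature.  All numbers below are illustrative only.
theta = np.linspace(-60, 60, 2001)      # scan coordinate, arcmin
beam_fwhm = 10.0                        # arcmin, single-dish beam
src_fwhm = 15.0                         # arcmin, true size of the extended source

def gaussian(x, fwhm):
    sigma = fwhm / 2.3548               # FWHM -> standard deviation
    return np.exp(-0.5 * (x / sigma) ** 2)

beam = gaussian(theta, beam_fwhm)
point = np.zeros_like(theta)
point[len(theta) // 2] = 1.0            # delta-function source
extended = gaussian(theta, src_fwhm)

def measured_fwhm(source):
    scan = np.convolve(source, beam, mode="same")   # drift-scan response
    half = scan >= 0.5 * scan.max()
    return theta[half][-1] - theta[half][0]

print("point source scan FWHM    :", round(measured_fwhm(point), 1), "arcmin")
print("extended source scan FWHM :", round(measured_fwhm(extended), 1), "arcmin")
```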
http://doc.rero.ch/record/16990 | Faculté des sciences
## Magnetohydrodynamic pumping in nuclear magnetic resonance environments
### In: Sensors and Actuators B: Chemical, 2007, vol. 123, no. 1, p. 636-646
Summary
We present a DC magnetohydrodynamic (MHD) pump as a component of a nuclear magnetic resonance (NMR) microfluidic chip. This is the first time that MHD pumping in an NMR environment was observed and demonstrated. This chip generates a maximum flow rate of 1.5 μL min⁻¹ (2.8 mm s⁻¹ in the microchannel) for an applied voltage of 19 V with only 38 mW of power consumption in a 7 T superconductive magnet. We developed a simple method of flow rate measurement inside the bulky NMR magnet by monitoring the displacement of a liquid–liquid interface of two immiscible liquids in an off-chip capillary. We compared and validated this flow measurement technique with another established technique for microfluidics based on the displacement of microbeads. This allowed us to characterize and compare the flow rate generated by the micropump on top of a permanent magnet (B1 = 0.33 T) with the superconductive magnet (B0 = 7.05 T). We observed a 21-fold increase in flow rate corresponding to the ratio of the magnetic field intensities (B0/B1 = 21) in accordance with the theoretical flow dependence on the magnetic field intensity. The final aim is to integrate MHD pumps together with planar coils in a microfluidic system for NMR analysis. The high performance of MHD pumps at relatively low flow rates is seen as an asset for NMR and MRI applications.
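The reported 21-fold increase can be checked with a one-line calculation: for a fixed driving current the Lorentz force on the electrolyte, and hence the flow rate in the laminar regime, scales linearly with the magnetic flux density, so the expected ratio is simply B0/B1. The snippet below is just this back-of-the-envelope check.

```python
# Back-of-the-envelope check of the field scaling of a DC MHD pump:
# for a fixed driving current the Lorentz body force on the electrolyte
# scales linearly with B, so in the laminar regime the flow rate should too.
B1 = 0.33   # T, permanent magnet
B0 = 7.05   # T, superconducting NMR magnet
print("expected flow-rate ratio B0/B1 =", round(B0 / B1, 1))   # ~21.4
print("observed ratio reported in the paper:", 21)
```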
https://infoscience.epfl.ch/record/210244/ | ## System fidelity factor evaluation of wearable ultra-wideband antennas for on-body communications
This paper reports an experimental study of the system fidelity factor (SFF) of ultra-wideband wireless body area network antennas in an on-body environment. It also investigates the influence of the antenna polarisation with respect to the body on the SFF. For this purpose, two antennas having different polarisations are compared: a mono-cone antenna with the polarisation perpendicular to the wearer and a printed monopole with the polarisation either parallel or perpendicular to the wearer. A phantom made of pork meat was considered as a model for the wearer's body, and five different communication scenarios involving each pair of antennas were considered. On-body experimental results show that the time-domain behaviour of the transmission is better for two different antenna types having the same polarisation, but, more importantly, that the SFF (and thus the time-domain fidelity of the signal) is better for a link having a polarisation perpendicular to the body.
Published in: IET Microwaves, Antennas & Propagation, vol. 9, no. 10, pp. 1054–1058
Year: 2015
Publisher: Institution of Engineering and Technology (IET), Hertford
http://math.stackexchange.com/questions/309506/symbol-for-such-that-not-in-set

Symbol for "such that" (not in set)
If $A$ is a set, we can use the set notation
$$A= \{ b \mid \text{property } p_1 \text{ of } b\}$$
But say $A$ is an element like $b$,
$$A = b \mid \text{property } p_1 \text{ of } b$$
is this a usual notation? I am trying to say that $A$ is a $b$ such that ($\mid$) it satisfies property $p_1$, and assume that exactly one $b$ satisfies property $p_1$.
Otherwise, is there a more usual convention to express this?
Usually you would just say that $A$ possesses property $p_1$, or that $p_1(A)$ holds. – MJD Feb 20 '13 at 21:31
You could replace the $=$ in the first equation by an $\in$ to make $A$ an element instead of a set. – TMM Feb 20 '13 at 21:32
The usual notation is "such that". Also note that if one writes "let $A$ be a foo such that bar" then foo should be predicative and not a variable, i.e. please don't write "let $A$ be a $b$ such that $p_1(b)$", instead write e.g. "let $A$ be a positive integer such that $p_1(A)$". – Hagen von Eitzen Feb 20 '13 at 21:32
The point being that $b$ is completely unnecessary in the second form. You could write that simply as "Assume $A$ is s.t. $p_1(A)$." – Thomas Andrews Feb 20 '13 at 21:56
"Such that" is occasionally denoted by \ni = $\,\ni\,$, e.g., in lecture, to save time, as a shortcut. Others, when writing in lectures or taking notes, and again, to save time, use "s.t.".
But in writing anything to submit (homework, publication), when possible, it is best to just write the words "such that".
In sets though, like set-builder notation, both $\mid$ and $:$ are used:
$$\{x \in \mathbb R \mid x < 0\}$$ $$\{x \in \mathbb R : x \lt 0\}$$
"The set of all $x \in \mathbb R$ such that $x \lt 0$.
This notation $\ni$ for "such that" was introduced by Peano, see here. – Math Gems Feb 20 '13 at 21:45
$\ni$ usually stands for $\in$ when you are writing in Hebrew and can't foresee the amount of spacing you will need for writing left-to-right in mid-text. :-) – Asaf Karagila Feb 20 '13 at 21:46
Yes, I think you're correct, @Math Gems! Peano it was. – amWhy Feb 20 '13 at 21:48
+1 for simply writing "such that". – Douglas S. Stones Feb 20 '13 at 21:56
Of course, the very text label, "\ni", indicates the alternative meaning for $\ni$, name, $X\ni x$ being a synonym for $x\in X$. That duplicate meaning is one reason it is not used much as "s.t." – Thomas Andrews Feb 20 '13 at 21:58
$\{ g \in G : \Phi(g) \}$ is the set of those $g$ in $G$ for which $\Phi(g)$ is true. I also see $:$ for "such that" in piecewise functions a lot, like $$f(x)=\left\{\begin{array}{lcl}1&:&a\in B \\ 2 &:& a \notin B\end{array}\right.$$ which reads the same way. $\{g | g \in G\}$ first gives the form of the elements you want, then the "such that" condition, here that $g$ lies in $G$.
So, grammatically it seems like what you say would make sense. I have never seen it used like that though. Personally, I like to use $\ni$, which is a (somewhat outdated) alternative such that symbol. (Actually this is not exactly how it's written, as a backwards $\in$. It should be thinner and taller, like a longbow. I can't find a typesetting which works on MSE's TeX though.) The modern way to do it is to use either $|$ or $:$ in sets and mathematical expressions, but just write it out if you're anywhere else. If you must abbreviate it, write $\text{s.t}$.
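For reference, here are a few equivalent ways these conventions are typically typeset in LaTeX source; which of $\mid$, $:$, $\ni$, or the written-out words you pick is purely a matter of style:

```latex
% A few equivalent ways to write "such that" in LaTeX source
A = \{\, b \mid p_1(b) \,\}                               % set-builder with \mid
A = \{\, b : p_1(b) \,\}                                  % set-builder with a colon
\exists\, x \in \mathbb{R} \text{ such that } x^2 = x     % words written out
\exists\, x \in \mathbb{R} \ \text{s.t.}\ x^2 = x         % abbreviated
\exists\, x \in \mathbb{R} \ni x^2 = x                    % Peano's backwards epsilon
```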
Exemplify the answer of the problem is the best way(+1) – Babak S. Jul 5 '13 at 6:39
$f(x)=\left\{\begin{array}{lcl}1&:&a\in B \\ 2 &:& a \notin B\end{array}\right.$ can mean $f(x)=\left\{\begin{array}{lcl}\frac{1}{a}\in B \\ \frac{2}{a} \notin B\end{array}\right.$ Also, $\{g | g \in G\}$ can mean that the (always true if $g\in\mathbb Z_{\neq 0}$) statement $g | g$ belongs to the set $G$. Even the $\ni$ symbol can be ambiguous at times (meaning "contains"). – user26486 Sep 2 '14 at 19:59
@mathh I've never seen $1:a$ meaning $\frac{1}{a}$. Can you show me an example where that is used in the literature? – Alexander Gruber Sep 3 '14 at 2:16
@AlexanderGruber See here. "In some non-English-speaking cultures, "a divided by b" is written a : b." I have never seen "a ÷ b" being used in my country and have only seen "a : b" instead. – user26486 Sep 3 '14 at 5:51
@mathh Very interesting. Which country is that? – Alexander Gruber Sep 3 '14 at 13:37
I had actually asked my prof about this a couple weeks ago... the symbol he gave is $\ni$. So, for an existential quantifier, we have:
$$\exists \,\,x\in\mathbb{R}\ni x^2 =x$$
He said we wouldn't use it in the class, as he thought it looked not so great...
This can also be seen here: http://www.physicsforums.com/showthread.php?t=195398
I, personally, like just abbreviating it "s.t." in my notes, as it's shorter, but more clear.
That backwards epsilon notation looks terrible. I learned it a while ago and always thought it is ugly. It would be nice if it died off. Writing "s.t." is pretty simple as an alternative. – KCd Feb 20 '13 at 21:39
I agree that in writing mathematical English one can use $\ni$ or 's.t.' but in the case you cite, I would prefer $$\left(\exists \,\,x\in\mathbb{R}\right) \left[x^2 =x\right]$$ exactly as in set notation or predicate calculus. The such that is built into the quantifier. – Barbara Osofsky Feb 20 '13 at 21:55
https://www.aanda.org/articles/aa/full_html/2013/03/aa19282-12/aa19282-12.html
A&A, Volume 551, A76 (March 2013). Section: Interstellar and circumstellar matter. DOI: https://doi.org/10.1051/0004-6361/201219282. Published online 25 February 2013.
## 1. Introduction
The post-asymptotic giant branch (AGB) star HR 4049 is a single-lined spectroscopic binary, surrounded by a circumbinary disk rich in dust and gas. The secondary star has not been detected at any wavelength, but its presence is inferred from the radial-velocity variations of the primary. HR 4049 is a remarkable object in several respects. First, it is the prototype of a group of evolved binaries in which the primary star has an extremely metal-depleted photosphere ([Fe/H] = −4.8 dex, Van Winckel et al.1995). The photospheric abundance pattern resembles that of the interstellar medium, where chemical elements with high condensation temperatures have formed dust grains and are underabundant in the gas phase. Mathis & Lamers (1992) and Waters et al. (1992) have proposed a scenario to explain the selective depletion in these post-AGB binaries. A fraction of the mass lost by the primary star during its AGB phase is captured in a stable circumbinary disk. The process of disk formation around post-AGB stars is not well understood, but binarity is a necessary condition (Van Winckel 2003; Gielen et al. 2008). When the disk has formed and the strong AGB wind has ceased, gas in the disk can be re-accreted from the disk onto the primary star. Dust grains, however, experience radiation pressure and are prevented from falling in. Since refractive elements easily condense into dust grains, the accreted gas is chemically peculiar, which gives rise to the observed photospheric abundance pattern. The photosphere of HR 4049, therefore, became extremely depleted in iron. The selective re-accretion hypothesis thus holds the promise of a high abundance of iron-rich dust in the circumbinary environment.
Second, HR 4049 has a peculiar infrared spectrum. The infrared excess is large (L_IR/L_* = 1/3, Dominik et al. 2003). The spectrum, shown in Fig. 1, displays strong features of polycyclic aromatic hydrocarbon molecules (PAHs) and even diamond emission bands, which indicates the presence of carbon-rich material in the system (Geballe et al. 1989; Guillois et al. 1999). Recently, the detection of Buckminsterfullerene (C60) was reported (Roberts et al. 2012). The strong gaseous emission lines of H2O, OH, CO, and CO2, on the other hand, point to oxygen-rich gas-phase chemistry (Cami & Yamamura 2001; Hinkle et al. 2007). The rest of the spectrum is featureless. In particular, the 10-μm silicate emission feature, which is common in disks around evolved binaries (Gielen et al. 2008), is absent. The spectrum has a shape that closely matches that of a 1200 K blackbody. Dominik et al. (2003) propose two models to fit the spectral energy distribution: the emission is produced by large (≫1 μm) grains, or the disk is very optically thick. The authors favor the second option. In this paper, we will propose another alternative: the emission is produced by (small) dust grains with a smooth opacity curve.
Fig. 1. Infrared spectrum of HR 4049 (black: ISO-SWS, blue: Spitzer-IRS). The prominent features between 6 and 13 μm are caused by PAHs. The band between 4 and 5.3 μm is the result of a forest of CO, OH, and H2O molecular emission lines (Hinkle et al. 2007). Emission lines of CO2 are visible longward of 13 μm (Cami & Yamamura 2001).
Furthermore, HR 4049 is an attractive observational target. The circumbinary disk is viewed at a large inclination angle, close to edge-on. Waelkens et al. (1991) have published Geneva optical photometry of HR 4049. The Geneva photometric system comprises seven bands, U,B1,B,B2,V1,V, and G, with effective wavelengths ranging from 346 nm to 581 nm1. Waelkens et al. (1991) found that the target is variable in all seven filters. The largest-amplitude variability is synchronous with the 430-day orbital period of the binary and ascribed to variable extinction in the line of sight towards the primary star by matter in the circumbinary disk. Due to its orbital motion, the primary hides behind the inner rim of the disk at phases of minimal brightness, when it is closest to the observer, and returns to maximal brightness roughly half an orbit later. The Geneva color–magnitude diagrams of HR 4049 contain information on the material that attenuates the primary star. Waelkens et al. (1991) show that the observed extinction law is roughly consistent with interstellar extinction, although notable differences are present.
In this paper we attempt to identify the dust species which produces the featureless infrared spectrum of HR 4049. We search for the dominant source of opacity in the circumbinary disk at optical-to-infrared wavelengths. We will show that the combination of infrared spectroscopy with spatially resolved observations is a powerful tool to discern between dust species with featureless opacities. The method described in this paper can potentially be applied to solve other astronomical questions, such as the iron-to-carbon dust content of winds around AGB stars.
## 2. Observations and data reduction
### 2.1. Interferometry
Interferometric measurements of HR 4049 were obtained within the Belgian Guaranteed Time on the European Southern Observatory’s (ESO) Very Large Telescope Interferometer (VLTI) Sub-Array (VISA). We made observations in the near-infrared H and K bands on nine baseline triangles with AMBER (Petrov et al. 2007) in the low spectral-resolution mode (LR-HK). Medium-resolution AMBER observations were obtained on four baseline triangles; two centered around 2.1 μm (MR–2.1) and two around 2.3 μm (MR-2.3). When the weather conditions allowed, the AMBER observations were obtained with the aid of the fringe tracker FINITO. This instrument uses 70% of the H-band flux to stabilize the fringes on the AMBER detector. Mid-infrared interferometry of HR 4049 was obtained with MIDI (Leinert et al. 2003) in the HIGH-SENS prism mode (HSP) on two baselines. The observing log is shown in Table 1. The corresponding spatial-frequency coverage is shown in Fig. 2. The interferometric observations were made at binary orbital phases φ ∈ [0,0.2], where φ = 0 corresponds to periastron passage. This is around the photometric minimum of the primary star in the optical (φ = 0.13, Bakker et al. 1998).
Table 1
Log of the AMBER and MIDI interferometric observations.
#### 2.1.1. AMBER data reduction
The AMBER data were reduced using the data processing software amdlib version 3 (Tatulli et al. 2007; Chelli et al. 2009). The calibrator star is HD 91964, a K4/K5 III giant with a stellar diameter of 1.27 mas (Mérand et al. 2005). For both science target and calibrator, the 25% fringe measurements with the highest signal-to-noise were selected to compute and calibrate the squared visibilities. The data are of very good quality and an increase in the percentage of selected frames does not significantly alter the outcome of the reduction procedure. For the closure phases, 90% of the best frames were selected. The closure phase was estimated by adding phasors in the complex plane (Tatulli et al. 2007). The closure phase of the science target was calibrated by subtracting the measured closure phase of the calibrator.
Fig. 2. Spatial-frequency coverage of the interferometric measurements. Black dots refer to AMBER data, red dots to MIDI data.
#### 2.1.2. MIDI data reduction
The MIDI data reduction was performed using the MIA+EWS package. Despite the fact that HR 4049 is a rather faint target for VISA+MIDI, the signal-to-noise of the correlated-flux (CF) measurements is high (~25). The total flux (F) is disentangled from the strong mid-infrared sky background by chopping to an off-target position on the sky and subtracting the on- and off-target spectra. For faint targets this leads to noisy spectra. This technique is not employed to determine the correlated flux. The background in the two interferometric channels is uncorrelated, while the interferometric signals are in phase opposition. Subtraction of both channels effectively removes the background while maintaining the correlated flux (see the MIDI manual). This results in correlated-flux spectra with much better signal-to-noise than the total-flux spectrum. Because HR 4049 is a faint target, F could not be measured reliably and we cannot compute the calibrated visibility (V),

V = (CF_sci / F_sci) / (CF_cal / F_cal).   (1)

Here, the indices sci and cal refer to the uncalibrated measurements of science target and calibrator. However, the ratio of the visibilities at two given baselines is equal to the ratio of the calibrated correlated fluxes,

V_1 / V_2 = CF_1^cal / CF_2^cal,   (2)

where the indices refer to baselines 1 and 2. We cannot determine the absolute visibilities V_15m and V_29m because of the low-quality total spectrum, but the ratio of the visibilities at the two baselines V_29m/V_15m can be recovered from the observations. We will use the latter quantity in our analysis and refer to it as the MIDI relative visibility.
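A minimal numerical sketch of this relative-visibility construction, assuming placeholder spectra and an assumed calibrator flux (none of the numbers are MIDI data):

```python
# Sketch: the MIDI relative visibility V_29m/V_15m from calibrated correlated
# fluxes. Because the (unmeasurable) total flux F is common to both baselines,
# it cancels in the ratio. All numbers below are placeholders, not MIDI data.
import numpy as np

def calibrated_correlated_flux(cf_sci, cf_cal, f_cal_known):
    # calibrate the science correlated flux with a calibrator of known flux
    return cf_sci / cf_cal * f_cal_known

rng = np.random.default_rng(1)
cf_sci_15, cf_cal_15 = rng.uniform(1, 2, 30), rng.uniform(1, 2, 30)
cf_sci_29, cf_cal_29 = rng.uniform(0.5, 1.5, 30), rng.uniform(1, 2, 30)
f_cal_known = 10.0                                    # assumed calibrator flux (Jy)

cf15 = calibrated_correlated_flux(cf_sci_15, cf_cal_15, f_cal_known)
cf29 = calibrated_correlated_flux(cf_sci_29, cf_cal_29, f_cal_known)
v_relative = cf29 / cf15                              # = V_29m / V_15m per wavelength
```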
### 2.2. Archival observations
The analysis presented here also makes use of archival observations of HR 4049. For the optical analysis, we have used the Geneva photometry published by Waelkens et al. (1991). Optical spectroscopy of HR 4049 indicates Teff = 7500 K, log g = 1 dex, and an iron depletion of [Fe/H] = −4.8 dex for the primary star, while the secondary is undetected (Lambert et al. 1988; Waelkens et al. 1991; Van Winckel et al. 1995; Bakker et al. 1996). A fit to the optical photometry with a Kurucz (1993) model with these parameters yields a stellar angular diameter of 0.45 ± 0.03 mas. The interstellar reddening towards the binary was determined to be E(B − V) = 0.17 mag from fits to the data at photometric maximum. We used the interstellar extinction law of Cardelli et al. (1989) with RV = 3.1.
Two infrared spectra of HR 4049 have been obtained with the Short-Wavelength Spectrograph (SWS, de Graauw et al. 1996) aboard the Infrared Space Observatory (ISO, Kessler et al. 1996). We use the spectrum that was taken on 6 May 1996 at an orbital phase φ = 0.04, which is within the orbital-phase range covered by the interferometric observations. We downloaded the highly processed data products (HPDP) from the ISO archive in the reduction of Frieswijk et al.4. These authors started from data products produced with the off-line processing pipeline version 10.1. The Infrared Spectrograph (IRS, Houck et al. 2004) aboard the Spitzer Space Telescope (Werner et al. 2004) has also obtained a spectrum of HR 4049 in the short-high (SH) mode. We reduced the spectrum with pipeline version S15.3.0. It is reassuring that this spectrum, albeit obtained with a different instrument on a different satellite, is very similar to the ISO-SWS spectrum, both in shape and in absolute flux level. We show both infrared spectra in Fig. 1.
Ground-based mid-infrared observations of HR 4049 were obtained with VLT/VISIR (Lagage et al. 2004) mounted on UT3. The VISIR data were retrieved from the ESO data archive5. Images were obtained in one broad-band and three narrow-band photometric filters: SiC (λcen = 11.85 μm, Δλ = 2.34 μm), PAH2 (λcen = 11.25 μm, Δλ = 0.59 μm), PAH2_2 (λcen = 11.88 μm, Δλ = 0.37 μm), and Q1 (λcen = 17.65 μm, Δλ = 0.83 μm). The filter at 11.25 μm is centered on the PAH 11.3 μm feature. The PAH flux contribution in this band, estimated from the ISO-SWS spectrum, is 25%. The other two narrow-band filters (at 11.88 and 17.65 μm) capture almost no PAH emission. The PAH flux contribution in the broad-band filter at 11.85 μm is 5%. VISIR also obtained spectroscopy of HR 4049 in the N band. Due to the poor quality of the calibrator measurements, we could not extract a calibrated VISIR spectrum. However, VISIR is a slit spectrograph. The spatial information in the direction of the slit (north-south in this case) is therefore preserved. We performed astrometry on the spectral order in the raw 2D CCD images of HR 4049. This yields the spatial extent (full width at half maximum, FWHM) and relative photocenter position of the target at each wavelength. We discuss the VISIR data in Sect. 4.
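A minimal sketch of the spectro-astrometric measurement described above: for each wavelength column of a background-subtracted 2D slit spectrum, compute the photocenter and a Gaussian-equivalent FWHM along the slit. The frame and the pixel scale below are synthetic placeholders, not VISIR data.

```python
# Sketch: spectro-astrometry along the slit. For each wavelength column of a
# background-subtracted 2D spectrum, measure the photocenter and a
# Gaussian-equivalent FWHM of the flux profile along the (N-S) slit direction.
import numpy as np

def slit_astrometry(frame, pixel_scale_mas):
    y = np.arange(frame.shape[0])[:, None]
    flux = frame.sum(axis=0)
    centroid = (frame * y).sum(axis=0) / flux                  # pixels
    variance = (frame * (y - centroid) ** 2).sum(axis=0) / flux
    fwhm = 2.355 * np.sqrt(variance)                           # Gaussian equivalent
    return centroid * pixel_scale_mas, fwhm * pixel_scale_mas

# synthetic trace: a Gaussian profile repeated over 100 wavelength channels
profile = np.exp(-0.5 * ((np.arange(64) - 32.0) / 3.0) ** 2)
frame = np.tile(profile[:, None], (1, 100))
position_mas, fwhm_mas = slit_astrometry(frame, pixel_scale_mas=75.0)  # assumed scale
```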
The Hipparcos parallax of HR 4049 is 1.23 ± 0.36 mas, which places the object at a distance of 800 pc (Van Leeuwen 2007). Remarkably, Hipparcos did not flag HR 4049 as a binary. The spectroscopically determined semi-major axis of the primary orbit is asini = 0.60 ± 0.01 AU (Bakker et al. 1998). If the object is at a distance of 800 pc, the photocenter displacement of the primary on the sky due to the binary orbit is at least 0.75 mas, of the same order of magnitude as the measured parallax. In fact, this is true for any distance to the source; the dimensions of the binary are similar to the major axis of Earth’s orbit around the Sun. Although not detected by Hipparcos, the binarity of HR 4049 must have influenced the parallax measurement. We will come back to this issue and solve it in Sect. 6.
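A quick numerical check of this statement: at 800 pc, the projected orbital semi-major axis of 0.60 AU subtends about 0.75 mas, comparable to the parallax expected for a single star at that distance.

```python
# Check: at d = 800 pc the orbital photocenter displacement (a sin i = 0.60 AU)
# is comparable to the parallax itself, so the orbit must bias the measurement.
a1_sini_au = 0.60                      # Bakker et al. (1998)
distance_pc = 800.0

orbit_angle_mas = a1_sini_au / distance_pc * 1000.0   # 1 AU at 1 pc subtends 1 arcsec
parallax_mas = 1000.0 / distance_pc
print(f"orbital displacement >= {orbit_angle_mas:.2f} mas, parallax = {parallax_mas:.2f} mas")
```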
## 3. Optical extinction
### 3.1. The dust model
We re-analysed the Geneva photometry published by Waelkens et al. (1991). We tested which dust species can produce the observed extinction law towards the primary star. To this end, we computed opacities, i.e., mass absorption and scattering coefficients κ, for a set of common astronomical dust species in the JENA database (Jäger et al. 1994, 1998; Dorschner et al. 1995; Henning et al. 1995; Henning & Stognienko 1996; Scott & Duley 1996). We selected the species for which refractive indices are available from UV to infrared wavelengths, covering at least 0.2–100 μm. Because of the remarkably featureless infrared spectrum of HR 4049, we also focused on the dust species which have smooth infrared opacities. This excludes a priori small crystalline silicates, for example.
To convert optical constants to absorption and scattering opacities, a dust grain size and morphology needs to be assumed. Here, we tested a range of grain sizes covering 0.01 μm to 10 μm, and three grain shape models: solid spherical particles (Mie grains), a distribution of hollow spheres (DHS), and a continuous distribution of ellipsoids (CDE). The DHS model assumes that grains are spherical, but may include a large volume fraction of vacuum, mimicking grain porosity. We varied the maximum vacuum fraction between fmax = 0 (i.e., Mie grains) and fmax = 0.8. The CDE model assumes a distribution of ellipsoidal grains, ranging from spheres to needles. Opacities are computed in the Rayleigh limit, i.e., for grains with sizes a that obey 2πa/λ ≪ 1, with λ the wavelength. Because the computed opacities are used here at optical wavelengths, the CDE model is only valid for small grains with sizes below 0.1 μm. Both DHS and CDE are able to represent grain morphologies that deviate strongly from spherical (Min et al. 2003, 2005). Table 2 lists the dust species, adopted grain sizes, and shape distributions, for which absorption and scattering opacities were computed.
### 3.2. Extinction law
We reddened the Kurucz model atmosphere for the primary star according to the opacity curve of each of the dust species. We assumed isotropic scattering. The stellar light was attenuated by gradually increasing the dust column density until the maximal observed optical attenuation of AV ≈ 0.3 mag was reached. We then computed synthetic Geneva magnitudes and colors for the reddened atmosphere models. Because of the logarithmic magnitude scale, the synthetic color–magnitude diagrams are straight lines.
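A minimal sketch of this procedure, with placeholder stand-ins for the Kurucz flux, the dust opacity curve, and the filter transmission (only the bookkeeping of reddening and synthetic photometry is shown):

```python
# Sketch: redden a model stellar spectrum with a candidate dust opacity curve and
# compute a synthetic magnitude as the column density (A_V) is stepped up.
# Flux, opacity, and filter curves are stand-ins; only the bookkeeping is real.
import numpy as np

def synthetic_magnitude(wave, flux, filt):
    dw = wave[1] - wave[0]
    return -2.5 * np.log10(np.sum(flux * filt) * dw)        # + arbitrary zero point

def redden(flux, kappa, kappa_v, a_v):
    tau = (a_v / 1.086) * kappa / kappa_v                   # A = 1.086 tau
    return flux * np.exp(-tau)

wave = np.linspace(0.30, 0.70, 400)                         # micron
flux_star = np.ones_like(wave)                              # stand-in for the Kurucz model
kappa = (0.55 / wave) ** 1.5                                # stand-in opacity curve
kappa_v = np.interp(0.55, wave, kappa)
v_band = np.exp(-0.5 * ((wave - 0.55) / 0.02) ** 2)         # stand-in filter

track = [synthetic_magnitude(wave, redden(flux_star, kappa, kappa_v, a_v), v_band)
         for a_v in np.linspace(0.0, 0.3, 16)]              # one point per column density
```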
Table 2
Tested dust species, sizes, and shapes.
Fig. 3. Six independent Geneva color–magnitude diagrams of HR 4049. The linear-regression fit to the data is shown in black. The other curves show the color/magnitudes of the Kurucz model, reddened with the interstellar extinction law of Cardelli et al. (1989, foreground extinction) and with different dust species with a grain size of 0.01 μm and an appropriate range of column densities (variable circumstellar extinction). The grain shape distribution is DHS with fmax = 0.8, except for amorphous carbon (Mie, i.e., DHS with fmax = 0). Top left panel: the discrepancy between the observed and the modeled U − B color may be due to a UV-excess in the binary system.
We turned back to the observations and performed a linear-regression fit to six independent color–magnitude diagrams of HR 4049: U − B, B1 − B, B2 − B, V1 − B, V − B, and G − B vs. V (Fig. 3). All other color–magnitude diagrams are linear combinations of these six and thus contain the same information. The residual scatter around the linear fit may have different causes, including intrinsic brightness variations of the primary star, activity in the binary system, density variations in the circumbinary dust that attenuates the primary starlight, and also observational uncertainties. We defined a chi-square variable

χ² = (1/N) Σ_i (d_i − ⟨d⟩)²,   (3)

with d_i the perpendicular distance of a point in the color–magnitude diagram to the linear fit and ⟨d⟩ the average distance of the points to the line. The latter is zero because of the linear regression. An analogous value was computed for the reddened atmosphere models. In the latter case, ⟨d⟩ is non-zero. An offset between the location of the linear fit to the model data and the photometric observations can occur because of differences between the modeled and the true spectrum of the target, or between the modeled and true interstellar extinction law. This is clear in the U − B vs. V color–magnitude diagram (Fig. 3, upper left panel). However, here we focused on the extinction law in the line of sight and hence only on the slope of the variations in the color–magnitude diagrams. We defined

X² = χ²_model / χ²_observations.   (4)

A model with X² = 1 provides a match to the slopes of the extinction curve equivalent to a linear regression fit, given the residual spread on the data.
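A minimal sketch of this slope-comparison statistic on placeholder data, assuming the reconstructed forms of Eqs. (3) and (4) above:

```python
# Sketch of the slope comparison on placeholder data: chi-square of perpendicular
# distances about the observed regression line versus about a model extinction track.
import numpy as np

def chi2_about_line(x, y, slope, intercept):
    d = (y - slope * x - intercept) / np.sqrt(1.0 + slope**2)   # perpendicular distances
    return np.mean((d - d.mean()) ** 2)

rng = np.random.default_rng(0)
x_obs = rng.uniform(0.0, 0.3, 40)                        # e.g. V magnitudes (placeholder)
y_obs = 2.0 * x_obs + 0.02 * rng.normal(size=40)         # e.g. a colour (placeholder)
slope_obs, icpt_obs = np.polyfit(x_obs, y_obs, 1)        # observed extinction slope

slope_mod, icpt_mod = 1.8, 0.0                           # slope of a model dust track
X2 = (chi2_about_line(x_obs, y_obs, slope_mod, icpt_mod)
      / chi2_about_line(x_obs, y_obs, slope_obs, icpt_obs))
```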
Table 3
Dust species with extinction curves consistent with the Geneva color–magnitude diagrams (X² < 3, see Eq. (4)).
Fig. 4. a) ISO-SWS spectrum in the N-band range. The PAH features at 8.6, 11.3, 12.0, and 12.7 μm are indicated, the dashed line represents the continuum. b) Spatial extent along the slit north-south direction of VISIR. The dashed line indicates the width of the spatially unresolved continuum, which increases with wavelength. c) Relative spatial position with respect to the continuum along the slit direction of VISIR. The PAH emission region is extended on a scale of several 100s of AU and its photocenter is located towards the south of the continuum emission region.
Because of the extreme iron depletion in the primary photosphere and the hypothetical selective re-accretion, one may expect that the circumbinary disk contains detectable amounts of iron-bearing dust. DHS-shaped iron oxides, magnetite, and metallic iron grains reproduce the observed extinction law reasonably well; however, 0.01 μm-sized Mie amorphous-carbon opacities provide the best fit. The fit with 0.01 μm-sized DHS-shaped (i.e., fmax > 0) amorphous carbon grains is also acceptable. A summary is presented in Table 3. We cannot unambiguously differentiate between these different dust species based on the optical extinction data alone. All other tested dust species, including amorphous silicates and Fe2O3, can be rejected (X2 ≫ 3). Grain sizes larger than 0.1 μm can be excluded as well, because the corresponding opacities are too grey to match the steep observed extinction law.
## 4. PAH emission
We now focus on the appearance of the system at infrared wavelengths. Menut et al. (2009) analyzed early MIDI observations of HR 4049. They find that the emission region of the PAH 7.7, 8.6, and 11.3 μm features is more extended than that of the underlying continuum. Our new MIDI measurements confirm this. The correlated fluxes show no sign of the PAH features. This indicates that the PAH emission region is fully resolved by the interferometer, even on the short baseline of 15 m. We estimated a lower limit to the extent of the PAH emission region of 100 mas (64 AU at 640 pc). In line with Menut et al. (2009), we conclude that the PAH emission region is not only situated in the disk, although a contribution from the disk is still possible.
We turn to the VISIR data to locate the PAH emission. In all four VISIR images, HR 4049 appears as a point source. Closer inspection of the point spread function (PSF), however, shows that it is 15% broader than the calibrator PSF in the photometric band that covers the PAH 11.3 μm feature, while both PSFs are almost identical in the broad-band filter. This indicates that the PAH emission region is slightly resolved on a scale comparable to that of the VISIR PSF, i.e., several 100 mas.
Confirmation comes from the VISIR slit spectroscopy. Figure 4 shows the mid-infrared ISO-SWS spectrum, the spatial extent (FWHM), and the relative spatial position of HR 4049 projected onto the slit direction (i.e., north-south) from VISIR. The PAH emission region is marginally, but convincingly, resolved with respect to the unresolved emission of the circumbinary disk. Moreover, the photocenter of the PAH emission region, projected onto the slit north-south direction, is located towards the south of the continuum emission region. The PAH emission is therefore not centered on HR 4049, consistent with the idea of a bipolar outflow in which the receding lobe (towards the north) is partially obscured by the circumbinary disk.
## 5. The circumbinary disk: an optically thick wall?
Fig. 5. Spectral energy distribution of HR 4049. The ISO-SWS spectrum (grey line), and IRAS 25, 60, and 100 μm, and SCUBA 850 μm photometry (grey boxes). The green line is the fit with a single blackbody (1200 K, in accordance with Dominik et al. 2003). The spectral energy distribution of our model (blue) underestimates the flux at λ > 20 μm. The red line shows our model plus a 200 K blackbody, scaled to the SCUBA 850 μm flux. See Sect. 7 for details.
Based on the shape of the infrared spectral energy distribution (Fig. 5), Dominik et al. (2003) argue that the disk around HR 4049 is optically very thick. In their model, the disk is a vertical wall with a large scale height, 1/3 of the inner disk radius. We have compared this geometry to our interferometric observations.
Our wall model consists of a vertical wall of uniform brightness with an inner radius Rin ∈ [5,25] mas, a height H = Rin/3, an inclination i ∈ [40°,70°], and a disk position angle on the sky PA ∈ [0°,360°]. We define PA = 0° when the far side of the inclined disk, projected on the sky, is oriented towards the north6. Only the inside of the wall, which faces the binary, is bright. Because a high optical depth is assumed, the backside is dark and blocks the light from the inside. The flux contribution of the primary star at near-infrared wavelengths is estimated from the spectral energy distribution and the stellar atmosphere model. We construct wall model images and compute near-infrared visibilities and closure phases as explained in Appendices A and B.
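The paper's Appendices A and B describe how model images are turned into observables; a generic sketch of that step (direct Fourier transform of an image sampled at the observed (u, v) points, and the closure phase of a baseline triangle) is given below. It is not the paper's implementation.

```python
# Generic sketch (not the paper's code): complex visibility of a model image at an
# observed (u, v) point via a direct Fourier transform, and the closure phase of a
# baseline triangle with uv13 = uv12 + uv23.
import numpy as np

MAS2RAD = np.pi / (180.0 * 3600.0 * 1000.0)

def complex_visibility(image, pixel_mas, u_m, v_m, wavelength_m):
    ny, nx = image.shape
    x = (np.arange(nx) - nx / 2) * pixel_mas * MAS2RAD        # sky offsets in rad
    y = (np.arange(ny) - ny / 2) * pixel_mas * MAS2RAD
    xx, yy = np.meshgrid(x, y)
    phase = -2j * np.pi * (u_m * xx + v_m * yy) / wavelength_m
    return np.sum(image * np.exp(phase)) / np.sum(image)

def squared_visibility(image, pixel_mas, uv, wl):
    return np.abs(complex_visibility(image, pixel_mas, *uv, wl)) ** 2

def closure_phase_deg(image, pixel_mas, uv12, uv23, wl):
    uv13 = (uv12[0] + uv23[0], uv12[1] + uv23[1])
    v12 = complex_visibility(image, pixel_mas, *uv12, wl)
    v23 = complex_visibility(image, pixel_mas, *uv23, wl)
    v13 = complex_visibility(image, pixel_mas, *uv13, wl)
    return np.degrees(np.angle(v12 * v23 * np.conj(v13)))
```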
The best wall models have PA = 130 ± 10°, i = 60 ± 10°, Rin> 15 mas, and reduced chi-square values of ~20 for the AMBER visibilities and ~8 for the AMBER closure phases. The wall model images have by definition very sharp edges and they deviate strongly from point symmetry because of the high inclination. As a result, the model closure phases have large amplitudes, which is inconsistent with the moderate closure phases observed. Nonetheless, the orientation of the disk on the sky is well established. Specifically, it is the progression of the zero points in the closure phase as a function of baseline position angle, well sampled by the observation sequence (c) to (g) listed in Table 1, that provides a strong constraint on the disk position angle.
The disk extent is less well determined; the simple geometric model provides only a lower limit to the inner radius of the wall. At long baselines, the circumbinary disk is almost fully resolved, while the primary star remains spatially unresolved. In other words, the measurements at long baselines provide only information on the wavelength-dependent flux ratio of the star and the disk in the near-infrared. As a reference, a basic model with an unresolved star (V = 1) surrounded by a fully resolved disk (Vdisk = 0) provides a fit to the AMBER visibilities with a reduced chi-square of ~40. The corresponding closure phases are obviously zero in this case, inconsistent with the observations.
Following the reasoning of Dominik et al. (2003, their Sect. 5), one obtains for the optically thick wall a relation (Eq. (5)) between the wall temperature T and the ratio R_in/R_*, an equation that is valid irrespective of the distance to HR 4049. Dominik et al. (2003) derive T = 1200 K for a fit to the spectral energy distribution with a single blackbody, a value that we confirm here in Fig. 5. This temperature corresponds to R_in/R_* = 45. All our best wall models have R_in/R_* larger than 67 ± 4, using a stellar angular diameter of 0.45 mas (Sect. 2.2), and hence T < 1000 K. The assumption of an optically thick vertical wall producing blackbody emission is not consistent with the interferometric measurements. We therefore reject this hypothesis.
The presence of very small dust particles in the upper layers of the circumbinary disk is evidenced by the optical extinction towards the primary. We investigate an alternative model in which the featureless infrared spectrum is produced by small grains with smooth opacities. We present this new model in Sect. 7. First, however, we discuss the distance to HR 4049.
## 6. The distance to HR 4049
The distance to HR 4049 is highly uncertain because the Hipparcos parallax measurement is influenced by the binary motion. Strangely enough, Hipparcos did not recognize HR 4049 as a binary star and also a detailed inspection of the Hipparcos data did not reveal the binary signature (D. Pourbaix, priv. comm.). This means that, due to the specific combination of orbital parameters and the distance to HR 4049, the Hipparcos measurements resemble that of a single star at a distance of 800 pc.
The optical spectroscopy constrains all orbital parameters of the primary (epoch of periastron passage T0, period P, projected semi-major axis asini, eccentricity e, and argument of periastron ω; Bakker et al. 1998), apart from the longitude of the ascending node Ω and the inclination of the orbital plane i. The interferometric measurements, and specifically the closure phase measurements, constrain the orientation angle of the disk’s major axis on the sky. If the orbital plane of the binary coincides with the disk plane, this angle is equivalent to Ω. This assumption is justified: although the formation process of the disk is unknown, binary interaction is most likely a key ingredient and the plane of the disk is expected to be identical to the orbital plane.
The far side of the disk in the wall models presented in Sect. 5 is located at a position angle PA = 130° ± 10° E of N. The longitude of the ascending node Ω is either PA + 90° or PA − 90°, depending on the orbital direction of the binary: if the binary revolves counterclockwise in the plane of the sky, then Ω = 40°. Otherwise Ω is 220°. We have computed the apparent parallax of HR 4049, which is a combination of the binary orbit and the intrinsic, distance-related parallax, for Ω and i consistent with the results of the interferometric modeling and for any distance to the source d. To mimic the Hipparcos measurements, we have computed the offset of the primary on the sky at the times of the Hipparcos observations. An ellipse was fitted to these offsets to determine the simulated parallax.
We find that for Ω = 40°, the computed parallax agrees within uncertainties with the observed value of 1.23 ± 0.36 mas, if the distance to HR 4049 is 1050 ± 320 pc. In this case, the combination of the binary orbit and Earth’s orbit projected on the sky increases the parallax signal. Hence, the measured Hipparcos parallax appears larger than that expected for a single star at a distance of 1050 pc. For Ω = 220°, we find a distance of 640 ± 190 pc. In the latter case, the binary motion on the sky counteracts the distance-induced parallax and the target is closer than the combined parallax appears to suggest.
### 7.1. A priori considerations
The optical attenuation AV of the primary due to the circumbinary matter varies between 0 and 0.3 mag. The line of sight towards the primary intersects with the upper layers of the inner disk. The optical analysis indicates that the dust in this region is either iron-bearing dust or amorphous carbon. It also proves that very small (0.01 μm) grains are present and that these grains are the dominant source of opacity in the upper layers of the disk at optical wavelengths.
The goal of this paper is to identify the dust species which establishes the appearance of the HR 4049 system from optical to mid-infrared wavelengths. We stress that our investigation does not aim to determine the dust component that dominates the disk in terms of mass or abundance, but in opacity.
For this purpose, we use the radiative-transfer modeling code MCMax (Min et al. 2009). The code computes the temperature and density structure of a dust- and gas-rich disk around a central illuminating source. The detection of strong gaseous emission lines indicates that the circumbinary disk around HR 4049 is indeed rich in gas (Cami & Yamamura 2001; Hinkle et al. 2007). The dust provides the opacity and is heated by the star. The gas is assumed to be thermally coupled to the dust and thus has the same temperature structure. The prime assumption of the model is that the disk is in vertical hydrostatic equilibrium. The disk scale height is set by the balance between gas pressure and gravitational pull of the central object perpendicular to the disk midplane. In the case of HR 4049, the central source is a binary. The estimated semi-major axis of the binary is of the order of a few primary stellar radii (Bakker et al. 1998). It is clear from our simple geometric model presented above in Sect. 5, that the dimensions of the binary are much smaller than the dimensions of the disk. Therefore, we can approximate the binary by a single central star, which has the temperature and luminosity of the primary star of HR 4049, but with a stellar mass representing both stars.
Radiative-transfer modeling of circumstellar disks has shown that the smallest dust grains provide the bulk of the opacity, both in terms of absorption in the optical and thermal emission in the near- and mid-infrared, even when these grains represent only a tiny fraction of the total disk mass (e.g., Meijer et al. 2008; Acke et al. 2009). The appearance of the disk in the mid-infrared is almost exclusively dominated by this small-grain component, regardless of the presence of larger grains with lower opacities. A radiative-transfer model with a content of only small grains, or with a mixture of large and small grains, will produce similar spectra and images. Therefore, for the purpose of our investigation, it is viable to assume that the entire circumbinary dust disk fully consists of small grains. Furthermore, guided by our analysis of the attenuation of the stellar light by the disk at optical wavelengths, we assume the disk contains only one of the species mentioned above. The opacities of the four tested dust species are plotted in Fig. 6. As described in Sect. 6, the distance to HR 4049 is either 640 pc or 1050 pc. We computed radiative-transfer models for both distances.
A downside of our approach is that our model cannot produce a satisfying match to the spectral energy distribution at far-infrared and millimeter wavelengths. At long wavelengths, 0.01 μm-sized grains have no opacity and hence produce no emission. The inclusion of large dust grains in the model would significantly improve the match to the spectral energy distribution. However, this strongly increases the complexity of the model in terms of free parameters (e.g., grain size distribution, vertical settling and mixing) and is beyond the scope of this paper.
### 7.2. The model grid
Fig. 6. Opacity curves of the four dust species that satisfy the optical extinction law towards HR 4049, normalized by κ_V (at 0.55 μm). The grain size is 0.01 μm, the grain shape distribution is either DHS with fmax = 0.8, or Mie (fmax = 0).
The computational time needed to calculate a single model is approximately 1 min. We opted for a grid approach instead of applying a minimization fitting routine. Each of the grid models is compared a posteriori to the available data (see Sect. 7.3). The free input parameters of the radiative-transfer model are limited to the following: the opacity table of the dust species under consideration, the total stellar mass of the binary (M_*), the inner and outer radius of the disk (R_in, R_out), the total mass in the small dust grains (M_dust), and the surface density power law

Σ(R) = Σ_in (R/R_in)^p,   (6)

where p is a free parameter and Σ_in is fixed by the total dust mass,

M_dust = 2π ∫_{R_in}^{R_out} Σ(R) R dR.   (7)

The orientation of the disk in space is fixed by two angles: the inclination (i) and the position angle on the sky (PA). Again, we define PA = 0° when the far side of the inclined disk, projected on the sky, is oriented towards the north. The grid parameter ranges are summarized in Table 4.
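A small sketch of the surface-density normalization implied by Eqs. (6) and (7), with the radial integral evaluated analytically (illustrative numbers only):

```python
# Sketch of the normalisation implied by Eqs. (6) and (7): Sigma_in such that the
# power-law disk between R_in and R_out contains a total dust mass M_dust.
import numpy as np

def sigma_in(m_dust, r_in, r_out, p):
    x = r_out / r_in
    if np.isclose(p, -2.0):                      # logarithmic special case
        integral = r_in**2 * np.log(x)
    else:
        integral = r_in**2 * (x**(p + 2) - 1.0) / (p + 2)
    return m_dust / (2.0 * np.pi * integral)     # from M = 2*pi * int Sigma(R) R dR

def sigma(r, r_in, sig_in, p):
    return sig_in * (r / r_in) ** p

sig0 = sigma_in(m_dust=1.0, r_in=10.0, r_out=30.0, p=-1.5)   # illustrative units
```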
Table 4
Grid parameters of the radiative-transfer models.
### 7.3. Comparing the models to the observations
In the search for the model in the grid which best reproduces the observations, we have defined five chi-square variables.
• χ²_SED. The modeled infrared spectrum is compared to the ISO-SWS spectrum of HR 4049 at wavelengths outside of the PAH features (see Fig. 7).
• χ²_V² and χ²_CP. From the model images at 2.2 μm, squared visibilities and closure phases are computed (see Appendices A and B). We compute a chi-square for both the squared visibilities (χ²_V²) and the closure phases (χ²_CP). The latter is defined using phasors,

χ²_CP = (1/N) Σ_j |exp(iΦ_j) − exp(iΦ_mod,j)|² / σ_j²,   (8)

where Φ_j are the observed closure phases, σ_j the corresponding errors, Φ_mod,j the modeled closure phases, and N = 2152 the number of closure phase measurements.
• χ²_MIDI. From the image at 10 μm, model visibilities are computed. Because the MIDI data reduction only provides good estimates of the relative visibilities on the two baselines, we define a chi-square

χ²_MIDI = (1/N) Σ_j ((V_rel,j − V_rel,mod,j) / σ_rel,j)².   (9)

Herein, V_rel,j = V_29m,j/V_15m,j are the MIDI relative visibilities, σ_rel,j the corresponding errors, V_rel,mod,j the model visibility ratios, and N = 30 the number of MIDI relative visibilities (a numerical sketch of these two interferometric terms follows after this list).
• χ²_AV. All interferometric observations were taken near the photometric minimum, when the primary of HR 4049 is most obscured. The optical attenuation at that phase is A_V = 0.3 ± 0.1 mag. The attenuation of the model is computed by comparing the model spectrum to the input Kurucz model at 0.55 μm. Because the tested dust species are selected to provide a good fit to the extinction curve in the optical wavelength range, radiative-transfer models with the right optical attenuation have the right optical colors as well.
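A numerical sketch of the two interferometric goodness-of-fit terms referenced in the list above, assuming the reconstructed forms of Eqs. (8) and (9); the paper's exact normalization may differ:

```python
# Sketch of the interferometric goodness-of-fit terms (reconstructed forms of
# Eqs. (8) and (9)); the paper's exact normalisation may differ.
import numpy as np

def chi2_closure_phase(phi_obs_deg, phi_mod_deg, sigma_deg):
    phi_obs, phi_mod = np.radians(phi_obs_deg), np.radians(phi_mod_deg)
    sigma = np.radians(sigma_deg)
    # the phasor difference handles the 360-degree wrap of closure phases
    return np.mean(np.abs(np.exp(1j * phi_obs) - np.exp(1j * phi_mod)) ** 2 / sigma**2)

def chi2_relative_visibility(v_rel_obs, v_rel_mod, sigma_rel):
    return np.mean(((v_rel_obs - v_rel_mod) / sigma_rel) ** 2)
```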
An acceptable model should provide a good fit to all observations simultaneously. First, we select models based on their ability to reproduce the interferometric observations, that is, models whose χ²_V², χ²_CP, and χ²_MIDI simultaneously lie close to the respective minima (Eq. (10)), where the minimum is taken over the entire grid of models. We then pick the models that satisfy the spectral criterion (Eq. (11)), with the minimum taken over the selected subset. Finally, the best model is defined as the one with the lowest value of χ²_AV.
### 7.4. Modeling results
We discuss the modeling results for the four dust species that satisfy the optical extinction law.
Iron oxide. The opacities of the magnesium/iron oxides show a lack of opacity in the near-infrared and a smooth, broad band around 20 μm. The models that best reproduce the interferometric data therefore have spectra with a strong excess at mid-infrared wavelengths and a moderate near-infrared excess. The observed infrared spectrum of HR 4049 clearly does not have this shape. Magnesium/iron oxides are excluded as the dominant opacity source in the circumbinary disk.
Magnetite. None of the models in the computed grid satisfies the interferometric criterion (Eq. (10)) and reproduces both visibilities and closure phases simultaneously. Ignoring the bad fit to the interferometry, some models have spectra that match the observed infrared spectrum well. However, like the iron oxides, the magnetite opacities have structured spectra in the 10–30 μm range. This results in a clear feature at 17 μm in the model spectra, which is not seen in the observations of HR 4049 (see Fig. 7). We reject magnetite.
Metallic iron. While the interferometric observations can be reproduced relatively well, not a single one of the models in the grid with metallic iron produces an infrared spectrum that is compatible with the observed spectrum. All the model disks are optically thin at infrared wavelengths. The spectrum is therefore proportional to a gradient of blackbody curves, multiplied with the opacity. Due to the steep decline of the opacity curve of metallic iron towards long wavelengths, the infrared spectra of all metallic-iron models are skewed, with a strong shoulder at near-infrared wavelengths (1–2 μm). This is inconsistent with the observed infrared spectrum.
To reduce the strong near-IR bump and get the peak of the emission at 4–5 μm, as observed in the spectrum of HR 4049, the metallic iron has to be cooler than ~600 K. Even in model disks with the largest inner radii (30 AU), however, the iron grains are too hot (1100 K) because of the high optical-to-infrared opacity ratio of metallic iron. Expanding the grid to even larger inner disk radii would improve the spectral match, but the interferometric data do not allow for a larger angular extent of the disk on the sky. Metallic iron is removed from the shortlist of dominant opacity sources in the disk around HR 4049.
Amorphous carbon. The disk models with amorphous carbon provide significantly better fits to the infrared observations. A few dozen grid models satisfy both the interferometric criterion (Eq. (10)) and the spectral criterion (Eq. (11)). The parameters of the best model are listed in Table 5. The error bars represent the spread around the best model parameters of those grid models that satisfy the criteria.
The spectrum of the best amorphous-carbon model is plotted in Fig. 7. The match between the shape of the modeled spectrum and the observed infrared spectrum is remarkable. The fit to the interferometric data is shown in Figs. 8 to 12. The low-resolution AMBER visibilities (Fig. 8) span the H and K bands. The relative flux contribution of the spatially unresolved primary star to the total flux decreases towards longer wavelengths, from ~0.6 at the blue end of the H band to ~0.2 at the red end of the K band. We take this effect into account (see Appendix B). It leads to higher visibilities at short wavelengths.
Fig. 7. Model fit to the ISO-SWS spectrum of HR 4049. The spectrum of the best amorphous-carbon model is shown (red), along with the best magnetite model (cyan) and the best fit with a single blackbody (T = 1200 K, green).
Fig. 8. Low-resolution (LR-HK) squared visibilities of HR 4049 as a function of spatial frequency. Top to bottom: observed squared visibilities, model squared visibilities, and residual squared visibilities. Colors refer to the wavelength of the observation: from 1.6 μm (blue) to 2.4 μm (red). The gap visible for each observation is the band gap between the H and K bands. The H-band squared visibilities are higher than the K-band measurements because of the higher flux contribution of the unresolved primary star at short wavelengths.
Deviations between the observed and modeled visibilities are largest in the H band. The AMBER observations were obtained using the fringe tracker FINITO when the weather conditions allowed (see Sect. 2.1). Only 30% of the H-band flux of HR 4049, intrinsically an already faint AMBER target, is transmitted to the science detector. Even when the weather conditions were worse, and FINITO could not be used, the H-band flux was low. As a result, the uncertainties on the H-band correlated fluxes are large. Moreover, systematic calibration errors may occur. Decreasing quality of the fringe measurements generally leads to a loss of coherence, reducing the calibrated visibilities if the measurements of the calibrator are of better quality than those of the science target. In Fig. 8, it can be seen that the model visibilities systematically exceed the observations, which may point to this effect. It is also possible that the flux ratio of the star and disk deviates from the model in the H band. However, this flux ratio is constrained by the Kurucz stellar atmosphere model, fit to the optical photometry, and the ISO-SWS infrared spectrum, and fixed in the radiative transfer modeling (see Appendix B). This approach is self-consistent, and comparison of the data sets at different wavelengths and with different instruments, which is necessary for our investigation, remains possible.
Fig. 9. Medium-resolution (MR-2.1 and MR-2.3) squared visibilities of HR 4049. Colors refer to different baselines: respectively blue, red, and green for the first, second and third baseline of observations (k) to (n) in the order listed in Table 1. Top to bottom in each panel: observed, model, and residual squared visibilities.
Fig. 10. Low-resolution (LR-HK) closure phase measurements of HR 4049. Each panel refers to a baseline triangle. The letter in the top corner refers to the corresponding entry in Table 1. Measurements are indicated by black dots with error bars; the red line represents the model.
Fig. 11. Medium-resolution (MR-2.1 and MR-2.3) closure phase measurements of HR 4049. Symbols are similar to those in Fig. 10.
Fig. 12. MIDI relative visibilities V_29m/V_15m. Black dots are the data; the red line is the model.
Because of the large uncertainties of the H-band visibilities, the interferometric model is mostly constrained by the low- and medium-resolution K-band measurements. The latter are reproduced well: the two local minima seen in the low-resolution K-band visibilities (at spatial frequencies around 60 and 150 cycles/arcsec; Fig. 8) and the relative shape of the medium-resolution visibilities (Fig. 9).
The orientation of the disk on the sky is mostly constrained by the LR-HK closure phase series (c) to (g) (see Table 1). These observations were obtained consecutively on a single baseline triangle. Due to the Earth’s rotation, the baseline lengths and orientations on the sky change during the night. The model reproduces nicely the observed trend of the closure phases with time (Fig. 10). Also the medium-resolution closure phases are matched by the model (Fig. 11). The MIDI relative visibilities, shown in Fig. 12, are reproduced perfectly.
The model disk density and temperature structure are shown in Fig. 13. Model images at 2.2 μm and 10 μm are displayed in Fig. 14. Note that the disk is moderately optically thick at optical wavelengths but optically thin at infrared wavelengths, in contrast to the very optically thick wall model of Dominik et al. (2003).
Fig. 13. Density and temperature structure of the best model. R is the radial distance to the primary, z the vertical height above the disk midplane. The color scale and black contours indicate the local temperature of the amorphous carbon in Kelvin. White contours indicate the local density with respect to the maximal density at R = R_in and z = 0. The dotted line is the line of sight towards the primary.
### 7.5. The distance to HR 4049 revisited
We may be able to differentiate between the two possible distances to HR 4049 found in Sect. 6. Our best models have a binary stellar mass of 1.1 ± 0.1 M☉ and 2.8 ± 0.3 M☉ for a distance of 640 and 1050 pc, respectively. By means of the mass function of the primary, derived from optical spectroscopy (0.158 ± 0.004 M☉, Bakker et al. 1998), and the now known inclination of the system, the binary mass can be converted to the masses of the individual stars. For a distance of 1050 pc, the primary and secondary mass are 1.6 ± 0.2 M☉ and 1.1 ± 0.1 M☉, respectively. Such a primary mass is excessively high for a white dwarf, which makes a distance of 1050 pc unlikely. At a distance of 640 pc, the primary and secondary have a lower mass of 0.4 ± 0.1 M☉ and 0.7 ± 0.1 M☉, respectively. This primary mass is within the expected range for a post-AGB star. We conclude that the most likely distance to HR 4049 is 640 pc.
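A check of this mass bookkeeping, solving the spectroscopic mass function f(m) = (M2 sin i)³/(M1 + M2)² for the secondary mass given the model's total binary mass; the inclination (~65°) is an assumption within the range found by the interferometric modeling:

```python
# Solve the mass function f(m) = (M2 sin i)^3 / (M1 + M2)^2 for M2, given the
# total binary mass from the radiative-transfer model. i ~ 65 deg is an assumption
# within the range allowed by the interferometric modelling.
import numpy as np

def component_masses(m_total, mass_function, incl_deg):
    m2 = (mass_function * m_total**2) ** (1.0 / 3.0) / np.sin(np.radians(incl_deg))
    return m_total - m2, m2                                  # (M1, M2) in Msun

for d_pc, m_tot in [(640, 1.1), (1050, 2.8)]:
    m1, m2 = component_masses(m_tot, 0.158, incl_deg=65.0)
    print(f"d = {d_pc} pc: M1 ~ {m1:.1f} Msun, M2 ~ {m2:.1f} Msun")
```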
With an effective temperature of 7500 ± 200 K and an angular diameter of 0.45 ± 0.03 mas at a distance of 640 ± 190 pc, the primary has a radius of 31 ± 9 R☉ and a luminosity of 2800 ± 1700 L☉ for a stellar mass of 0.4 ± 0.1 M☉. This is close to what is expected from Paczyński's core mass-luminosity relation: a luminosity of 2800 L☉ corresponds to a stellar mass of ~0.55 M☉ (Paczyński 1970a,b). The inferred surface gravity log g is 1.1 ± 0.3 dex, consistent with the spectroscopic value of 1.0 ± 0.5 dex.
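The quoted radius, luminosity, and surface gravity follow directly from the effective temperature, angular diameter, distance, and mass; a short verification:

```python
# Verify the primary's radius, luminosity, and log g from T_eff, the angular
# diameter, the adopted distance, and the derived mass (SI constants).
import numpy as np

R_SUN, L_SUN, M_SUN, G = 6.957e8, 3.828e26, 1.989e30, 6.674e-11
SIGMA_SB, PC, MAS = 5.670e-8, 3.0857e16, np.radians(1.0 / 3.6e6)

def primary_parameters(theta_mas, d_pc, t_eff, mass_msun):
    radius = 0.5 * theta_mas * MAS * d_pc * PC                  # m
    lum = 4.0 * np.pi * radius**2 * SIGMA_SB * t_eff**4         # W
    logg = np.log10(G * mass_msun * M_SUN / radius**2 * 100.0)  # cgs dex
    return radius / R_SUN, lum / L_SUN, logg

print(primary_parameters(theta_mas=0.45, d_pc=640, t_eff=7500, mass_msun=0.4))
# -> roughly (31 R_sun, 2750 L_sun, log g ~ 1.1), matching the values quoted above
```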
At a distance of 1050 ± 320 pc, the primary's radius and luminosity would be 50 ± 15 R☉ and 8000 ± 4000 L☉, for an (unlikely) mass of 1.6 ± 0.2 M☉. Also in this case, log g = 1.2 ± 0.3 dex is consistent with the spectroscopic value, but, again according to the mass-luminosity relation, the luminosity corresponds to that of a post-AGB star that is only half as massive.
Table 5
Parameters of the best model.
## 8. Conclusions
### 8.1. Discerning metallic iron and amorphous carbon
With our case study of HR 4049, we have shown that the combination of infrared spectroscopy and interferometry is a powerful tool for discriminating between dust species with featureless opacities. While the spectrum is related to the temperature structure of the circumstellar dust, the high-angular-resolution data constrain the physical location of the material. Dust species that are not in thermal contact have different temperatures at the same physical location from the illuminating source because of their different absorption/emission properties.
In general, by measuring the temperature of the dust and its location, one can constrain the type of dust under consideration. This is specifically true for small metallic iron and amorphous carbon grains. Both species are expected to be abundant in different astrophysical environments, e.g., AGB winds and circumstellar disks, but their identification based on the infrared spectrum alone is impossible. Fortunately, the opacity of metallic iron decreases more quickly towards longer wavelengths than that of amorphous carbon (Fig. 6). This makes metallic iron grains much hotter than amorphous carbon grains under the same optical irradiation. Spatially resolved observations lift the ambiguity. The combination of spectroscopy and interferometry is therefore a very promising tool for identifying these dust species in a wide range of astrophysical environments.
### 8.2. A consistent image of HR 4049
The system HR 4049 consists of a binary located at a distance of 640 pc, surrounded by a circumbinary dust- and gas-rich disk. It contains an evolved, 0.4 ± 0.1 M☉, 7500 ± 200 K star, and an undetected 0.7 ± 0.1 M☉ secondary. Under the assumption that the latter is on the main sequence, this mass corresponds to that of a K-type star.
Fig. 14. Best-model image at 2.2 and 10 μm. North is up, east is left. At a distance of 640 pc, 1 mas corresponds to 0.64 AU. The dashed line in the right panel indicates the photocenter location of the PAH 11.3 μm emission region relative to the disk center, projected on the N − S axis (x_ns). The photocenter position in the E − W direction is not constrained (see Sect. 8.2.2).
#### 8.2.1. The circumbinary disk
The binary is surrounded by a circumbinary disk, rich in gas and dust. Our model indicates that small amorphous carbon grains are the dominant source of opacity. The total dust mass in these grains required to reproduce the near- to mid-infrared observations is only 10% of a Moon mass.
The radial extent of the disk seen in the near- and mid-infrared is limited to a few tens of AU. The disk is optically thick to the stellar radiation in radial direction, but almost optically thin at infrared wavelengths. The ratio of the total infrared excess luminosity and the stellar luminosity of the best model is 25%, lower than the observed 33% (Dominik et al. 2003). The difference between the thermal emission of the model and the photometric observations occurs at wavelengths beyond the mid-infrared, where our model fails (as explained in Sect. 7.1). This discrepancy at long wavelengths can be phenomenologically solved by including large (>several μm) and cold dust grains either in the midplane of the disk or at larger distances from the star. Inclusion of a dust component of this kind results in additional emission at long wavelengths, while the appearance of the disk at optical and mid-infrared wavelengths remains similar.
The spectral energy distribution of HR 4049 is shown in Fig. 5. While our model provides a satisfying fit at short wavelengths, it underestimates the flux beyond 20 μm. A fit with a 200 K blackbody representing the cold dust component in the disk provides an ad hoc solution to this problem. We scaled the blackbody so that the SCUBA 850 μm flux is matched. The radiating surface of the blackbody is 740 mas2 (i.e., 30% of the midplane surface of our model projected on the sky). We note that a single-temperature blackbody fit (1200 K) does not fully grasp the shape of the spectral energy distribution either (Fig. 5).
The optical spectrum of HR 4049 shows the [O i] 6300 Å emission line. The position and width of the line profile do not change with orbital period, which made Bakker et al. (1996) conclude that the emission region is located in the circumbinary disk. This is not uncommon for early-type stars surrounded by a disk; also pre-main-sequence stars display disk [O i] emission (Acke et al. 2005). The broadening of the line is due to the Keplerian velocity of the line formation region in orbit around the star/binary. With our value for the binary stellar mass of HR 4049 we derive a Keplerian velocity at the inner radius of the circumbinary disk of vin = 9 ± 1 km s-1, in perfect agreement with the feature’s full width at half maximum of 20 km s-1 (Bakker et al. 1996). Hinkle et al. (2007) have determined the widths of the infrared CO lines to be in the range of 16–18 km s-1. Adopting our model parameters, this broadening is consistent with a disk origin of the CO emission as well7.
#### 8.2.2. PAH outflow
The far side of the circumbinary disk is oriented towards PA = 130°. The VISIR spectroscopy indicates that the photocenter of the PAH emission, projected onto a N − S axis, is located south of the photocenter of the disk emission. These observations are consistent with a bipolar outflow, perpendicular to the disk midplane, along the rotation axis of the disk. The receding lobe is oriented towards PA = 310°. Compensating for the PAH-to-total flux ratio (fpah = Fpah/Ftotal) and taking into account the orientation of the disk (PA), one can compute the photocenter displacement of the PAH emission region on the sky x from the photocenter displacement measured in the N − S direction by VISIR xns using (12)The photocenter of the PAH 11.3 μm emission region on the sky is located 40 ± 10 mas (26 AU at 640 pc) from the disk center, comparable to the extent of the disk at these wavelengths (Fig. 14). This again shows that the emission region is much larger than the circumbinary disk and is most likely a bipolar outflow.
#### 8.2.3. Diamonds
HR 4049 is one of the very few astronomical sources that displays the near-infrared emission bands of hydrogenated diamonds (Geballe et al. 1989; Guillois et al. 1999). The pre-main-sequence star HD 97048 is also surrounded by a disk containing diamonds (Acke & van den Ancker 2006). The infrared spectrum of HD 97048 shows strong PAH emission emanating from the disk and no sign of silicate emission. The geometry of the disk is remarkable as well: mid-infrared imaging has shown that it is an extremely flared, i.e., bowl-shaped, disk (Lagage et al. 2006). Carbonaceous grains provide a straightforward explanation for the featureless continuum, while the large disk opening angle may reflect strong heating via hydrocarbon molecules and very small grains. Here, we report that also the circumbinary disk around HR 4049 is rich carbonaceous dust.
Acke & van den Ancker (2006) searched for observational characteristics that set apart diamond sources from others, but found none. Here, we suggest that a carbon-dominated dust chemistry is a necessary condition to form the diamonds and/or make their features appear in the infrared spectrum.
## 9. Discussion
HR 4049 belongs to a fairly large group of evolved binary stars surrounded by a circumbinary disk (Van Winckel 2003). The orbital parameters are such that the evolved objects must have been subject to severe binary interaction processes when at giant dimensions. The systems managed to avoid the strong spiral-in and reveal themselves as long-period (>100 days), mainly eccentric binaries, often still surrounded by a stable dusty disk (e.g., Van Winckel et al. 2009). A recent study of optically bright objects in the Large Magellanic Cloud has shown that disk sources are also rather common there among the post-AGB population. Because of the known distance to the LMC, their location in the Hertzsprung-Russell diagram is better constrained. The primary stars in these systems display a wide range in luminosities and hence in initial mass (van Aarle et al. 2011). This indicates that disk formation does not depend vitally on the primary’s initial mass, but rather on the dimensions of the binary system and of the primary during the thermally pulsing AGB phase.
Investigation of the infrared spectra of circumbinary disks around evolved binaries has indicated an oxygen-rich dust mineralogy in most systems. Amorphous and crystalline silicate features are dominant (Gielen et al. 2008, 2011a). The evolutionary scenario (Waters et al. 1998; Molster et al. 1999; Van Winckel 2003) assumes that the circumbinary disk is a relic of the strong binary interaction process and hence contains the chemical footprint of the former giant envelope out of which it was formed. The dominant occurance of purely oxygen-rich material shows that the interaction processes occured prior to the evolutionary phase when enough thermal pulses with dredge-up would have made the star carbon-rich. Only a few objects display PAH emission features, and some even fullerene emission features, on top of the dust spectra which are dominated by (crystalline) silicate features (Gielen et al. 2011a,b).
In this context, HR 4049 is an exception to the rule. The emission bands in the infrared spectrum are all attributed to carbonaceous species and we have shown in this paper that the continuum emission is also produced by carbon grains. There is no sign of silicates. We stress, however, that large (≫10 μm) silicate grains may be present in the disk. These grains have a smooth spectral signature, that would be veiled by the spectrum of the small carbon grain component. The presence of large grains may be suggested by the strong far-infrared excess, not reproduced by our model. The particular nature of HR 4049 is highlighted by the contrasting oxygen-rich chemistry of the circumbinary gas.
The mass of the secondary star in HR 4049 does not exclude it from being a compact object, which may have been a source of circumbinary material as well. If indeed the current companion is a white dwarf, it does not show symbiotic activity and it should be cool and hence old (Jorissen et al. 1998). It is therefore unlikely that circumstellar matter from the now cool white dwarf would still be present in a dust and gas-rich environment. The disk would then have survived the vast energy releases and dimensional increases of the current primary during its thermal pulses. Moreover, old mass loss from a carbon star cannot explain the oxygen-rich chemistry of the circumbinary gas. We conclude that the current circumstellar matter most likely comes from the current, luminous primary.
We describe our interpretation of the observed dust and gas chemistry in the circumbinary disk. The primary is losing mass, as evidenced by the P-Cygni profile of the Hα line (Bakker et al. 1998). This mass loss bears the chemical signature of the photospheric abundance pattern. The primary photosphere has a C/O ratio below or close to unity (Lambert et al. 1988) and is strongly depleted in refractory elements such as iron, but also silicon. Because of this depletion, silicates cannot form in the current outflow of HR 4049. The outflowing material is still oxygen-rich, but the oxygen atoms cannot form dust because of the absence of refractory elements. The oxygen therefore only forms molecules with the abundantly available hydrogen and carbon: OH, H2O, CO, and CO2. The carbon atoms, on the other hand, are able to form large molecules (PAHs, fullerenes) and very small dust grains (diamonds and amorphous carbon grains). In other words, because of the lack of refractory elements in the current mass loss of the primary, only carbonaceous dust can be formed, despite the intrinsically oxygen-rich chemistry.
The carbonaceous dust is confined to the circumbinary disk, while the outflow contains carbon-rich molecules. This is consistent with our hypothesis: the spherical outflow of the primary is probably too tenuous to form dust grains and so only (large) molecules are formed. However, the gas-rich disk intercepts and slows down the wind in the binary plane. There, the densities become large enough for dust to condense. The amorphous carbon grains form a veil over the pre-existing disk.
The question remains why only HR 4049 displays this particular circumstellar chemistry among the evolved binaries with disks. The answer is probably linked to the extreme depletion pattern of the primary photosphere. The re-accretion of metal-poor gas in the previous evolutionary phases apparently was so efficient, that subsequent mass loss from this photosphere became completely depleted of refractory elements. If this hypothesis holds, the disk around HR 4049 must contain silicates and iron-bearing dust, but their spectral signature is overwhelmed by that of the amorphous carbon. In this context, HD 52961 is an interesting target for a similar study to the one presented here. The primary in this evolved binary also displays an extreme depletion pattern and the infrared spectrum indicates the presence of PAH molecules, fullerenes, and CO2 (Gielen et al. 2011b). However, silicate emission is also detected. The current outflow in HD 52961 is similar in composition to the one in HR 4049. We therefore expect that carbonaceous grains are currently forming in this circumbinary disk as well.
6
In the literature, the disk position angle commonly refers to the angle of the major axis on the sky. This angle is ±90° offset with our disk PA. Nonetheless, we use our PAdefinition because it does not suffer from a 180° degeneracy.
7
Hinkle et al. (2007) assumed an inner dust-disk radius of 10 AU and mention that the CO gas is located at the inner radius if the total binary mass is 0.9 M. Our models constrain both parameters, and place the disk at a slighly larger distance from the binary, but also indicate a slightly higher binary mass.
9
This is because visibility measurements probe structures of the size of the inverse of the spatial frequency, while closure phases can be measured with a precision of only a few degrees, and thus are sensitive to deviations from point symmetry on an angular scale two orders of magnitude smaller.
## Acknowledgments
This research has made use of the AMBER data reduction package of the Jean-Marie Mariotti Center8. The authors thank D. Pourbaix for his much appreciated help with the Hipparcos data and the anonymous referee for helpful comments.
## Online material
### Appendix A: From images to complex visibilities
The radiative-transfer code MCMax produces images on a predefined grid of pixels on the sky. We chose a 256 × 256 grid covering a field-of-view of 100 mas × 100 mas. This results in an angular resolution of 0.39 mas per pixel.
The model image produced by MCMax for a given set of parameters (M , Rin, Rout, Mdust, p, i) has a default disk orientation angle on the sky of 0° E of N. That is, the far side of the inclined disk is oriented towards the north. This image is rotated according to the grid disk PA under consideration to yield the model image corresponding to (M , Rin, Rout, Mdust, p, i, PA).
For a specific observational setting with baseline length b, baseline orientation angle θ, and at wavelength λ, the van Cittert-Zernicke theorem gives the complex normalized visibility (A.1)where (u = bsinθ/λ,v = bcosθ/λ) are the spatial frequencies, (α,β) the angular coordinates, and B(α, β) the brightness distribution (i.e., the image) of the source on the sky, normalized to a total intensity of unity.
To compute , the image is rotated through the angle θ. This is equivalent to a coordinate transform such that (u′,v′) = (0,b/λ) and (α′,β′) = (αcosθ − βsinθ,αsinθ + βcosθ). Equation (A.1) becomes Numerically, the integration over α′ reduces to a sum over the first dimension of the rotated image B(α′,β′). The second integration is a one-dimensional Fourier transform and can be done quickly with the fast Fourier transform (FFT) algorithm. However, FFT converts the 256-array with a pixel step of 0.39 mas into a 256-array with a spatial-frequency step of 10 cycles/arcsec. Since the observations are obtained at spatial frequencies in the range 25−370 cycles/arcsec, the default FFT resolution is too coarse for direct comparison. To increase the resolution in spatial frequency, the 256-array (i.e., dαB(α′,β′)) is placed in a 1024-array, which is zero outside the image. The FFT of the latter array yields a spatial-frequency step of 2.5 cycles/arcsec. This higher resolution makes the FFT result smooth enough to allow interpolation at the exact spatial frequencies of the observations.
The method was tested on images of simple geometries (point source, binary with unresolved components, uniform disk, uniform ellipse), for which analytic formulae of the complex visibility exist. Good agreement was found. For a uniform disk with a diameter of 10 mas, the absolute difference between the analytic and image-based squared visibilities is 0.002 on average, and always below 0.01. The absolute difference between the analytic closure phase (i.e., 0° or 180° for a uniform disk) and the image closure phase is 1° on average, but can increase to 10° at the longest baselines. At these high spatial frequencies, pixelation effects start to play a role for the closure phases, while they do not affect the visibilities to a measurable level9.
The obtained accuracy of the model is sufficient for the goals of this paper. An increase of the spatial resolution of the model image is possible, but would lead to an increase of the grid computation time from days to weeks.
### Appendix B: Stellar contribution in the near-infrared
In the near-IR, the flux contribution of the central star to the total flux can be large. The stellar flux contribution of the primary of HR 4049 drops from ~0.6 at 1.6 μm to ~0.2 at 2.5 μm. The intensity-normalized image of the disk, on the other hand, is similar in the H and K band. We therefore took the following approach to compute squared visibilities and closure phases in the near-IR:
• 1.
From the radiative-transfer model, we computed a2.2 μm image. The central2 × 2 pixels in this image contain the primary star. We removed the star from the image by replacing these central pixel values by interpolated values. Hence, we created a star-less image, which provides a representative disk image at all wavelengths between 1.6 and 2.5 μm.
• 2.
We computed the disk complex visibilities from the star-less image as described in Appendix A.
• 3.
From the model infrared spectrum and the input Kurucz model for the primary star, the stellar-to-total flux contribution f was estimated at all AMBER wavelengths.
• 4.
The final model visibilities were computed from the disk visibilities and the flux ratio f, according to (B.1)From this complex quantity, squared visibilities (per baseline and wavelength) and closure phases (per baseline triangle and wavelength) were computed. Given its angular diameter (0.45 ± 0.03 mas, determined from the Kurucz model fit to the optical photometry), the primary star is unresolved at the spatial resolution of the observations. Moreover, we assumed the primary star is at the disk’s center and hence . Note that we neglected the possible offset of the primary star with respect to the disk center due to binary motion. However, given the dimensions of the disk (~15 mas) and of the binary (~1 mas), the effect of such an offset on visibilities and closure phases is expected to be small. Moreover, the interferometric observations were taken close to minimal brightness, when the primary star was close to the observer. The primary was therefore close to projected minor axis of the disk and close to the disk center on the sky.
In the mid-IR, the relative flux contribution of the primary star to the total flux is less than a few percent. Moreover, it is constant over the N band because of the near-blackbody shape of both the stellar flux and the excess. The intensity-normalized model image of HR 4049 at 8 μm is therefore the same as at 13 μm. The variations in (relative) visibility over the N band are exclusively due to a change in spatial frequency and not due to a change in disk geometry or stellar flux contribution with wavelength.
## All Tables
Table 1
Log of the AMBER and MIDI interferometric observations.
Table 2
Tested dust species, sizes, and shapes.
Table 3
Dust species with extinction curves consistent with the Geneva color–magnitude diagrams (X2 < 3, see Eq. (4)).
Table 4
Grid parameters of the radiative-transfer models.
Table 5
Parameters of the best model.
## All Figures
Fig. 1Infrared spectrum of HR 4049 (black: ISO-SWS, blue: Spitzer-IRS). The prominent features between 6 and 13 μm are caused by PAHs. The band between 4 and 5.3 μm is the result of a forest of CO, OH, and H2O molecular emission lines (Hinkle et al. 2007). Emission lines of CO2 are visible longward of 13 μm (Cami & Yamamura 2001). Open with DEXTER In the text
Fig. 2Spatial-frequency coverage of the interferometric measurements. Black dots refer to AMBER data, red dots to MIDI data. Open with DEXTER In the text
Fig. 3Six independent Geneva color–magnitude diagrams of HR 4049. The linear-regression fit to the data is shown in black. The other curves show the color/magnitudes of the Kurucz model, reddened with the interstellar extinction law of Cardelli et al. (1989, foreground extinction) and with different dust species with a grain size of 0.01 μm and an appropriate range of column densities (variable circumstellar extinction). The grain shape distribution is DHS with fmax = 0.8, except for amorphous carbon (Mie, i.e., DHS with fmax = 0). Top left panel: the discrepancy between the observed and the modeled U − B color may be due to a UV-excess in the binary system. Open with DEXTER In the text
Fig. 4a) ISO-SWS spectrum in the N-band range. The PAH features at 8.6, 11.3, 12.0, and 12.7 μm are indicated, the dashed line represents the continuum. b) Spatial extent along the slit north-south direction of VISIR. The dashed line indicates the width of the spatially unresolved continuum, which increases with wavelength. c) Relative spatial position with respect to the continuum along the slit direction of VISIR. The PAH emission region is extended on a scale of several 100 s of AU and its photocenter is located towards the south of the continuum emission region. Open with DEXTER In the text
Fig. 5Spectral energy distribution of HR 4049. The ISO-SWS spectrum (grey line), and IRAS 25, 60, and 100 μm, and SCUBA 850 μm photometry (grey boxes). The green line is the fit with a single blackbody (1200 K, in accordance with Dominik et al. 2003). The spectral energy distribution of our model (blue) underestimates the flux at λ > 20 μm. The red line shows our model plus a 200 K blackbody, scaled to the SCUBA 850 μm flux. See Sect. 7 for details. Open with DEXTER In the text
Fig. 6Opacity curves of the four dust species that satisfy the optical extinction law towards HR 4049, normalized by κV (at 0.55 μm). The grain size is 0.01 μm, the grain shape distribution is either DHS with fmax = 0.8, or Mie (fmax = 0). Open with DEXTER In the text
Fig. 7Model fit to the ISO-SWS spectrum of HR 4049. The spectrum of the best amorphous-carbon model is shown (red), along with the best magnetite model (cyan) and the best fit with a single blackbody (T = 1200 K, green). Open with DEXTER In the text
Fig. 8Low-resolution (LR-HK) squared visibilities of HR 4049 as a function of spatial frequency. Top to bottom: observed squared visibilities , model squared visibilities , and residual squared visibilities . Colors refer to the wavelength of the observation: from 1.6 μm (blue) to 2.4 μm (red). The gap visible for each observation is the band gap between the H and K bands. The H-band squared visibilities are higher than the K-band measurements because of the higher flux contribution of the unresolved primary star at short wavelengths. Open with DEXTER In the text
Fig. 9Medium-resolution (MR-2.1 and MR-2.3) squared visibilities of HR 4049. Colors refer to different baselines: respectively blue, red, and green for the first, second and third baseline of observations (k) to (n) in the order listed in Table 1. Top to bottom in each panel: observed , model V, and residuals . Open with DEXTER In the text
Fig. 10Low-resolution (LR-HK) closure phase measurements of HR 4049. Each panel refers to a baseline triangle. The letter in the top corner refers to the corresponding entry in Table 1. Measurements are indicated by black dots with error bars; the red line represents the model. Open with DEXTER In the text
Fig. 11Medium-resolution (MR-2.1 and MR-2.3) closure phase measurements of HR 4049. Symbols are similar to those in Fig. 10. Open with DEXTER In the text
Fig. 12MIDI relative visibilities V29m/V15m. Black dots are the data; the red line is the model. Open with DEXTER In the text
Fig. 13Density and temperature structure of the best model. R is the radial distance to the primary, z the vertical height above the disk midplane. The color scale and black contours indicate the local temperature of the amorphous carbon in Kelvin. White contours indicate the local density with respect to the maximal density at R = Rin and z = 0. The dotted line is the line of sight towards the primary. Open with DEXTER In the text
Fig. 14Best-model image at 2.2 and 10 μm. North is up, east is left. At a distance of 640 pc, 1 mas corresponds to 0.64 AU. The dashed line in the right panel indicates the photocenter location of the PAH 11.3 μm emission region relative to the disk center, projected on the N − S axis (xns). The photocenter position in the E − W direction is not constrained (see Sect. 8.2.2). Open with DEXTER In the text
Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on Vision4Press platform.
Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8967933058738708, "perplexity": 1617.4645426180098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948544677.45/warc/CC-MAIN-20171214144324-20171214164324-00301.warc.gz"} |
https://www.physicsforums.com/threads/phase-shift-issue-in-dominant-pole-compensation-strategy.857590/ | # Phase shift issue in Dominant Pole Compensation strategy....
1. Feb 16, 2016
### brainbaby
The next hurdle in my understanding of frequency compensation comes as following…
As the text says…
My problem is that as we move from point 1 to 2, the frequency increases so the phase shift should also increase..(as phase shift depends on frequency)… but the text however says that the phase shift will be constant at 90 deg with a 6db roll off..so what is this paradox..??
Either the text should say that "open loop gain falls at 6db/ octave with a varying phases shift of 90 deg..”
Isn't...!!
2. Feb 16, 2016
### LvW
No - why do you think that the phase shift is ALWAYS increasing with frequency?
Two examples:
* For poore resistive circuits the phase hift is always zero
* For an ideal integrator (or a RC network for very large frequencies) the phase shift is constant at -90deg.
Example second order:
H(s)=A/(1+as +bs²)
For very high frequencies we have H(s)=A/s²= - A/w² .
And the "-" sign is identical to a constant phase shift of -180deg.
(This is what the text says).
However, the text is somewhat simplifying. The term "constant" refers to the asymptotic lines of the real curve phase=f(frequency). As the above function H(s) shows: The phase assumes 180 deg for infinite frequencies only.
3. Feb 16, 2016
### brainbaby
Due to increasing frequency the stray capacitance both inside and outside the amplifier comes in action hence cause the phase shift or phase delay to occur....by why it increases..I certainly don't have any idea of it..
4. Feb 16, 2016
### LvW
OK - correct. Parasitic effects are always existent and - as a result - there will be always a negative phase shift for rising frequencies, even for an ohmic voltage divider.
I did not know that you were speaking about such effects.
And, of course, the text as cited by you did not take into account such effects.
So - what is your problem?
As a - more or less - phliosophical aspect: In electronics there is NO formula or rule that is correct by 100% (and this is true even for the "resistive" voltage divider).
Everything contains some simplifications and neglects unwanted effects which - hopefully -become effective for frequencies only which are beyond the working range.
5. Feb 16, 2016
### brainbaby
Oh I am really sorry..I forgot to attach an image...which will present my point of query..
What I mean to say is that if we move from point 1. to point 2. frequency is increasing...correct..so the phase shift should also become more prominent..
But in the text it is written that or means that for a 6 db roll off the phase shift would be a constant 90 degree....(as we progress down the slope the frequency on the horizontal axis increases)
So why they said that the phase shift would be constant...either they should say "open loop gain falls at 6db/ octave with a varying phases shift of 90 deg..”
6. Feb 16, 2016
### Jony130
In short. The dominant pole capacitor (single capacitor) can provide only 90 deg phase shift max.
And we select dominant pole capacitor in such a way that the loop gain drops to 1 with slope 6dB per octave at a frequency where the poles of uncompensated amp contributes very small to the total phase shift. This ensures that the phase shift is greater than -180 deg and we have a stable amplifier.
See the plot (single pole)
And notice that for F > 10*Fc phase shift is constant and equal to -90 deg.
Where Fc is a cutoff/corner frequency Fc = 1/(2 * pi * R*C).
Also do you know why roll-off is equal to 6dB per octave (20dB per decade) ? If not you should really back to basics.
And here you have a example of a second order (two poles) frequency response
Last edited: Feb 16, 2016
7. Feb 16, 2016
### jim hardy
@brainbaby
Remember asymptotes ?
Here's where technicians get an advantage
my high school teacher had us boys calculate, tabulate and plot, calculating by slide rule , on log paper
Vout/Vin for this simple circuit
from a couple decades below to a couple decades above break frequency
Vout/Vin = JXc/(R+jXc)
now - when you struggle through that exercise with all its slide rule polar to rectangular conversions
you'll see the pattern emerging
At low frequency Xc >> R so that fraction's denominator is very nearly jXc
and fraction jXc/(nearly jXc) is nearly 1, with almost no phase shift because the j's almost cancel (pardon my math liberty?)
At high frequency Xc is becoming vanishingly small
numerator is that small number multiplied by j
denominator is R plus the comparatively small numerator
so the fraction becomes j X (a vanishingly small numerator)/(an almost constant denominator)
jXc/(R+j(almost_nothing)) is very nearly: jXc/R ,
only one operator j is left for all practical purposes.
while you can't by math get rid of the j in denominator you can (by observation of the calculated numbers in your exercise) see that it becomes insignificant ,
for all practical purposes it vanishes
so phase shift is asymptotic to 90 degrees not equal , but slide rule accuracy is accepted as 3 figures.
Then it's intuitive that a single pole gives 90 degrees
and another pole would give 90 more
Once you believe that it gives you confidence in the math that edit Jony is Jony and Lvw are presenting so well
But i wouldn't have ever learnt it without that high school technician's exercise.
Maybe you should try it - do about twenty frequencies so it sinks in.
Plodders like me need to be taught in sequence What→How→Why , not Why→What→(How left to you to figure out)...
sophiecentaur always says "Work the maths!" . This is a case where doing it with real numbers will be more instructive than alphabet juggling.
Once it's soaked in you can make an elegant derivation, I expect that would feel great.
hope it helps
old jim
Last edited: Feb 17, 2016
8. Feb 17, 2016
### brainbaby
Oh It seems I get it......
Actually the phase shift depends directly upon the occurrence of poles and indirectly on increase in frequency …because if a pole occurs that means another capacitor comes into action …however the role of frequency in causing phase shift depends on the time when that frequency creates a pole…..
so rather we should say that
by increasing the frequency the number of poles increases, and if pole increases then only then phase shift increases…
so its the frequency which initiates a capacitor to cause the phase shift..
So now my inference seems to agree with what LvW stated in post 2 first lines....
Hence the word "constant" in the text is right...
Isn't..??
9. Feb 17, 2016
### LvW
...indirectly? What does that mean?
In frequency-dependent circuits, the phase shift between input and output directly depends on frequency (is a function of frequency).
However, how this function looks like (first order, second order, poles only or poles and zeros) depends on the circuit and its transfer function - expressed using the pole and zero location. OK?
..on the time?
No - the number of poles is a property of the circuit (a property of the transfer function) and has nothing to do with the applied frequency.
Again: The function of phase shift vs. frequency depends on the circuit and its frequency-dependence only.
For finding the approximate form of the phase function (in form of asymptotic lines) we are using the pole and zero location because we know what happens at these specific frequencies: At a pole (zero) the slope becomes more negativ (positive) by 20dB/dec.
10. Feb 17, 2016
### brainbaby
As we all know that a capacitor introduces a 90 deg phase shift...right..yes ....I agree that phase shift depends upon frequency...
Being indirectly I mean that... see the figure in post 5..here for uncompensated curve at point 2 ..a 90 deg phase shift is introduced...and in between point 2 and point 3 the frequency is increasing but the phase shift is same i.e 90 deg...,now at point 3 again another phase shift of 90 deg is introduced...and the accumulated total phase shift is 180 deg..but again in between point 3 and point 4 frequency is increasing but phase shift is yet the same 90 deg....
So this behaviour felt to me like that the points (2,3,4) where there is a roll off(pole) the phase shift depends on the frequency and for the rest intermediate positions frequency is independent of phase shift...
However by saying that frequency is independent of phase shift I never meant that it does not depends on phase shift....yes it depends ..but it was just a way of me inferring the situation....as simple as that....
Last edited: Feb 17, 2016
11. Feb 17, 2016
### LvW
To be exact: A capacitor introduces a 90 deg phase shift between voltage and current!
For an RC element we have a frequenc-dependent shift which reaches 90 deg for infinite frequencies only!
No - the phase cannot abruptly change its value (...a 90 deg phase shift is introduced).
Study the phase response in Fig. 6 - it tells you everything you need to understand what happens.
Such "phase jumps" are only introduced in the drawing for the asymptotic lines as a help for constructing the real and smooth phase function.
Frequency is always independent on phase shift. I suppose you mean the inverse: Phase shift independent on frequency.
But this is NEVER the case. Phase shift ALWAYS depends on frequency.
Don`t mix the real phase response with asymptotic lines which serve only one single purpose: A help for roughly constructing the real curve. See the phase diagram in Fig. 6 and realize that the final value at -90 deg is reached for infinite frequencies only.
12. Feb 17, 2016
### brainbaby
My inference from figure 6....
1. Phase shift non linear function of frequency..
2. Phase shift is independent of poles because it started changing from point x before the first pole have occurred.
3. Between point 1 and 2 the phase shift is 90 deg which is said in the text to be constant because they are talking in terms of accumulated phase shift which is 90 deg..(135-45=90) < as phase cannot arbitrarily change its value>..but with each change in frequency the value of phase shift is different as seen from the curve (for f2 and f3 phase shift is p1 and p2)..…which further signifies that phase shift totally depends on frequency…however that dependence is non linear..
Am I right ?
13. Feb 18, 2016
### LvW
1.) The term "constant" in the text (your post#1) is not correct. We have a constant phase shift of 90 deg for an IDEAL integrator only (pole at the origin). However, such a circuit does not exist.
2.) Your point (2) is false. It is the pole which causes the phase to deviate from its starting value (0 deg) already for very low frequencies.
3.) For my opinion, the diagram shows everything you should know to understand what happens. If there would be no second pole, the red curve (phase) would approach the -90 deg line (which acts as an asymptotic line in this case).
14. Feb 19, 2016
### brainbaby
After rigorous analysis I came to the following conclusion...
Phase shift depends on the circuit and frequency. Phase shift shows dependence on poles which is a circuit parameter from the fact that the phase shift begins to change one decade before the pole and stops changing one decade after the pole and ends at 90 degree and this happens prior and after to each pole.
The phase shift at the pole frequency is -45 degrees or
A single filter pole adds a maximum of 90 degree phase shift for the frequencies far away from its turnover frequency(3db frequency), but the shift is only 45 degrees at the pole (-3db point).
This behaviour can be further illustrated and explained as.....
But Why does this happens...??
how phase shift knows well in advance that pole is about to come.......and its time to change....and after the zig zag happens its time to be parallel to the asymptote ...and change if another pole comes otherwise show indifference.....??...
15. Feb 19, 2016
### jim hardy
Bravo !
let's take a plausible example
R = 1000 ohms
time constant = 1 millisecond
pole then is at 1000 radians per sec = 159.15 hz
and Xc at 1000 radians/sec is 1000 ohms
a decade away, at 15.915 hz what is transfer function's magnitude and phase ?
Vout/Vin = JXc/(R+jXc)
Xc = 1/(2pifC) = 1/(2 ⋅ π ⋅ 15.915 ⋅ 1X10-6) = ⋅1.0X104
jXc/(R+jXc) = j104/(103+j104)
which = 104∠-90°/1.005X104∠-84.3° = 0.995∠-5.7°
Phase shift doesn't know anything in advance. Mother nature just built math that way.
Circuit guys struggling to do above math with slide rules figured it out and observed it's always 5.7 degrees a decade out
so they devised those predictive rules for drawing a frequency & phase plot. No magic, it's just a graphical approach that saves wear and tear on slide rules.
One Mr Hendrik Bode made the approach popular in 1930's. I don/t know if he was the absolute first to think of it but he sure advanced the field of control systems
and that's why it's called a Bode Plot.
https://en.wikipedia.org/wiki/Bode_plot
Any help ?
16. Feb 19, 2016
### LvW
No - remember the simple RC lowpass with a pole at wp=1/RC. The decrease of amplitude with a corresponding phase shift will start already for f=1E-12 Hz (and even below). However, it will be hard to measure it. But that is not the question.
No - this is true for a first order lowpass only.
No - all the poles influence the amplitude and phase response also for very low frequencies, but this influence sometimes can be neglected.
Have a look on a 3rd-order or 4th-order transfer function. Why do you think that terms like w³T³ have no influence for very low frequencies?
Perhaps the influence is small - OK. But for your understanding it is important to know that there is an influence.
17. Feb 19, 2016
### LvW
Brainbaby - here are some additional information:
(1) There is a formula (called "BODE" integral), which has some relations to the "Hilbert transformation".
This formula exactly describes the relationship between the amplitude and phase response.
However, this formula is too complicated for writing it down at this place.
(2) For real (non-idealized) lowpass systems there is only one single frequency where the SLOPE is exactly -20 dB/Dec (-40 dB/Dec).
At this frequency the phase exactly assumes the value of -90 deg (-180 deg).
However - normally, this frequency is not known, but this knowlege is exploited for constructing a rough phase function curve based on the magnitude response (which also is known only as a approximation based on the pole location and the slope information about asymptotic lines).
Last edited: Feb 19, 2016
18. Feb 19, 2016
### brainbaby
In the above paragraph I was not talking about variation of amplitude with phase shift...rather I talked about variation of phase shift with frequency..
So you mean to say that the above conclusion is just an approximation ...however...it can be accepted on general terms....but may be rejected on precise basis....
19. Feb 19, 2016
### brainbaby
The reason why I said the above was because as the frequency grows along the graph phase shift accumulates ...and further ahead in frequency a pole exist...ok..and there an increase in phase shift is observed....and so on...
so they framed a theory based on experimental observation.....however in this thread what we have discussed so far is the "what"
and most probably Ive got what happened...now its time to know the "why".....and you have brought mother nature into consideration..
I appreciated that you worked out the maths....thanks for it...but that maths is just a representation of a phenomenon... what I believe is that
“When you understand something, then you can find the math to express that understanding. The math doesn't provide the understanding.”
and that understanding "why" is paramount for me.....
20. Feb 20, 2016
### LvW
As I have mentioned (BODE integral) gain (magnitude) and phase responses are correlated to each other
Similar Discussions: Phase shift issue in Dominant Pole Compensation strategy.... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8702306747436523, "perplexity": 1779.2864083646775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515311.25/warc/CC-MAIN-20171212075935-20171212095935-00326.warc.gz"} |
http://mathoverflow.net/questions/12100/why-weil-group-and-not-absolute-galois-group?answertab=oldest | # Why Weil group and not Absolute Galois group?
In many formulation of Class Field theory, the Weil group is favored as compared to the Absolute Galois group. May I asked why it is so? I know that Weil group can be generalized better to Langlands program but is there a more natural answer?
Also we know that the abelian Weil Group is the isomorphic image of the reciprocity map of the multiplicative group (in the local case) and of the idele-class group (in the global case). Is there any sense in which the "right" direction of the arrow is the inverse of the reciprocity map?
Please feel free to edit the question into a form that you think might be better.
-
I have accepted the answer by Johnson but I still hope to receive the answer for the second part of the question. – Tran Chieu Minh Jan 17 '10 at 19:28
One reason we prefer the Weil group over the Galois group (at least in the local case) is that the Weil group is locally compact, thus it has "more" representations (over $\bf C$). In fact, all $\bf C$-valued characters of $Gal(\bar{\bf Q_p} / \bf Q_p)$ have finite image, where as that of $W_{\bf Q_p}$ can very well have infinite image. The same goes for general representations of these groups (recall that $\bf{GL}_n(\bf C)$ has no small subgroups.)
The global Weil group (which is much more complicated than the local one), on the other hand, is a rather mysterious object that is pretty much untouched in modern number theory as far as I can tell. Supposedly the global Langlands group used in the global Langlands correspondence should be the extension of the global Weil group by a compact group, but this is still largely conjectural.
The standard reference is Tate's "Number Theory Background" in the Corvallis volumes (available for free at ams.org). Also Brooks Roberts has notes on Weil representations available at his website.
-
Thank you for the reply. Do you think it is possible to have a natural reason beside utility for which we prefer Weil group? – Tran Chieu Minh Jan 17 '10 at 19:17
Tran: Isn't utility one of the best motivators to do anything? – Adam Hughes Mar 22 '11 at 13:21
The Weil group appears for several reasons.
Firstly: if $K$ is a non-archimedean local field with residue field $k$, the local reciprocity law induces an embedding $K^{\times} \hookrightarrow G_K^{ab}.$ The image consists of all elements in $G_k$ whose image is an integral power of Frobenius. This is the abelianized Weil group; it just appears naturally.
Secondly: suppose that $K$ is a global field of positive characteristic, i.e. the function field of a curve over a finite field $k$. Then the global reciprocity map identifies the idele class group of $K$ with a subgroup of $G_K^{ab}$ consisting of elements which act on $k$ by integral powers of Frobenius. So again, it is the abelianized Weil group that appears.
Thirdly: suppose that $E$ is an elliptic curve over a quadratic imaginary field $K$ with complex multipliction by $\mathcal O$, the ring of integers in $K$. (Thus I am implicity fixing $K$ to have class number one, but this is not so important for what I am going to say next.) If $\ell$ is a prime, then the $\ell$-adic Tate module is then free of rank one over $\mathcal O\_{\ell}$ (the $\ell$-adic completion of $\mathcal O$), and the $G_K$-action on this Tate module induces a character $\psi_{\ell}:G_K^{ab} \rightarrow \mathcal O\_{\ell}^{\times}$.
There is a sense in which the various $\psi_{\ell}$ are indepenent of $\ell$, but what is that sense?
Well, suppose that $\wp$ is a prime of $K$, not dividing $\ell$ and at which $E$ has good reduction. Then the value of $\psi_{\ell}$ on $Frob_{\wp}$ is indepenent of $\ell$, in the sense that its value is an element of $\mathcal O$, and this value is independent of $\ell$. More generally, provided that $\wp$ is prime to $\ell$, the restriction of $\psi_{\ell}$ to the local Weil group at $\wp$ is independent of $\ell$ (in the sense that the value at a lift of Frobenius will be an algebraic integer that is independent of $\ell$, and its restriction to inertia at $\wp$ will be a finite image representation, hence defined over algebraic integers, which again is then independent of $\ell$).
Note that independence of $\ell$ doesn't make sense for $\psi\_{\ell}$ on the full local Galois group at $\wp$, since on this whole group it will certainly take values that are not algebraic, but rather just some $\ell$-adic integers, which can't be compared with one another as $\ell$ changes.
Now there is also a sense in which the $\psi\_\ell$, as global Galois characters, are indepdendent of $\ell$. Indeed, we can glue together the various local Weil group representations to get a representation $\psi$ of the global Weil group $W_K$. Since it is abelian, this will just be an idele class character $\psi$, or what is also called a Hecke character or Grossencharacter. It will take values in complex numbers. (At the finite places it even takes algebraic number values, but when we organize things properly at the infinite places, we are forced to think of it as complex valued.)
Note that $\psi$ won't factor through the connected component group, i.e. it won't be a character of $G_K^{ab}$. It is not a Galois character, but a Weil group character. It stores in one object the information contained in a whole collection of $\ell$-adic Galois characters, and gives a precise sense to the idea that these various $\ell$-adic characters are independent of $\ell$.
This is an important general role of Weil groups.
Fourthly: The Hecke character $\psi$ above will be an algebraic Hecke character, i.e. at the infinite places, it will involve raising to integral powers. But we can also raise real numbers to an arbitrary complex power $s$, and so there are Hecke characters that do not come from the preceding construction (or ones like it); in other words, there are non-algebraic, or non-motivic, Hecke characters. But they are abelian characters of the global Weil group, and they have a meaning; the variable $s$ to which we can raise real numbers is the same variable $s$ as appears in $\zeta$- or $L$-functions.
In summary: Because Weil groups are "less completed", or "less profinite", then Galois groups, they play an important role in describing how a system of $\ell$-adic representations can be independent of $\ell$. Also, they allow one to describe phenomena which are automorphic, but not motivic (i.e. which correspond to non-integral values of the $L$-function variable $s$). (They don't describe all automorphic phenomena, though --- one would need the entire Langlands group for that.)
-
Excellent answer ! By the way, the second section of Gross-Reeder paper on arithmetic invariants (Duke 2010) is also nice for Weil groups – user4245 Jun 24 '12 at 16:56
Perhaps something worth pointing out, relating to how the Weil group appears naturally: it arises from a compatible system of group extensions at finite levels. Indeed, one of the "axioms" of class field theory, is the existence of a "fundamental class" uL/*K* in $H^2(\operatorname{Gal}(L/K),C_L)$ for each finite Galois extension $L/K$ (where $C_L$ is the class module). Each of these gives a group extension $$1\rightarrow C_L\rightarrow W_{L/K}\rightarrow \operatorname{Gal}(L/K)\rightarrow 1.$$ The projective limit of these gives the absolute Weil group fitting in $$1\rightarrow C\rightarrow W_K\rightarrow G_K$$ with the rightmost map having dense image (and $C$ is the formation module of the class formation). Thus, you can think of the Weil group as arising canonically out of the results of class field theory, thus making it a natural replacement of $G_K$ in questions related to CFT. I like section 1 of chapter III of Neukirch–Schmidt–Wingberg's Cohomology of number fields and the last chapter of Artin–Tate for this material.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9130237698554993, "perplexity": 212.71582669120136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832738.80/warc/CC-MAIN-20140820021352-00444-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://xianblog.wordpress.com/tag/xiao-li-meng/ | warp-U bridge sampling
Posted in Books, Statistics, Travel, University life with tags , , , , , , , , , , on October 12, 2016 by xi'an
[I wrote this set of comments right after MCqMC 2016 on a preliminary version of the paper so mileage may vary in terms of the adequation to the current version!]
In warp-U bridge sampling, newly arXived and first presented at MCqMC 16, Xiao-Li Meng continues (in collaboration with Lahzi Wang) his exploration of bridge sampling techniques towards improving the estimation of normalising constants and ratios thereof. The bridge sampling estimator of Meng and Wong (1996) is an harmonic mean importance sampler that requires iterations as it depends on the ratio of interest. Given that the normalising constant of a density does not depend on the chosen parameterisation in the sense that the Jacobian transform preserves this constant, a degree of freedom is in the choice of the parameterisation. This is the idea behind warp transformations. The initial version of Meng and Schilling (2002) used location-scale transforms, while the warp-U solution goes for a multiple location-scale transform that can be seen as based on a location-scale mixture representation of the target. With K components. This approach can also be seen as a sort of artificial reversible jump algorithm when one model is fully known. A strategy Nicolas and I also proposed in our nested sampling Biometrika paper.
Once such a mixture approximation is obtained. each and every component of the mixture can be turned into the standard version of the location-scale family by the appropriate location-scale transform. Since the component index k is unknown for a given X, they call this transform a random transform, which I find somewhat more confusing that helpful. The conditional distribution of the index given the observable x is well-known for mixtures and it is used here to weight the component-wise location-scale transforms of the original distribution p into something that looks rather similar to the standard version of the location-scale family. If no mode has been forgotten by the mixture. The simulations from the original p are then rescaled by one of those transforms, which index k is picked according to the conditional distribution. As explained later to me by XL, the random[ness] in the picture is due to the inclusion of a random ± sign. Still, in the notation introduced in (13), I do not get how the distribution Þ [sorry for using different symbols, I cannot render a tilde on a p] is defined since both ψ and W are random. Is it the marginal? In which case it would read as a weighted average of rescaled versions of p. I have the same problem with Theorem 1 in that I do not understand how one equates Þ with the joint distribution.
Equation (21) is much more illuminating (I find) than the previous explanation in that it exposes the fact that the principle is one of aiming at a new distribution for both the target and the importance function, with hopes that the fit will get better. It could have been better to avoid the notion of random transform, then, but this is mostly a matter of conveying the notion.
On more specifics points (or minutiae), the unboundedness of the likelihood is rarely if ever a problem when using EM. An alternative to the multiple start EM proposal would then be to get sequential and estimate the mixture in a sequential manner, only adding a component when it seems worth it. See eg Chopin and Pelgrin (2004) and Chopin (2007). This could also help with the bias mentioned therein since only a (tiny?) fraction of the data would be used. And the number of components K has an impact on the accuracy of the approximation, as in not missing a mode, and on the computing time. However my suggestion would be to avoid estimating K as this must be immensely costly.
Section 6 obviously relates to my folded Markov interests. If I understand correctly, the paper argues that the transformed density Þ does not need to be computed when considering the folding-move-unfolding step as a single step rather than three steps. I fear the description between equations (30) and (31) is missing the move step over the transformed space. Also on a personal basis I still do not see how to add this approach to our folding methodology, even though the different transforms act as as many replicas of the original Markov chain.
Cauchy Distribution: Evil or Angel?
Posted in Books, pictures, Running, Statistics, Travel, University life, Wines with tags , , , , , , , , , , , , on May 19, 2015 by xi'an
Natesh Pillai and Xiao-Li Meng just arXived a short paper that solves the Cauchy conjecture of Drton and Xiao [I mentioned last year at JSM], namely that, when considering two normal vectors with generic variance matrix S, a weighted average of the ratios X/Y remains Cauchy(0,1), just as in the iid S=I case. Even when the weights are random. The fascinating side of this now resolved (!) conjecture is that the correlation between the terms does not seem to matter. Pushing the correlation to one [assuming it is meaningful, which is a suspension of belief!, since there is no standard correlation for Cauchy variates] leads to a paradox: all terms are equal and yet… it works: we recover a single term, which again is Cauchy(0,1). All that remains thus to prove is that it stays Cauchy(0,1) between those two extremes, a weird kind of intermediary values theorem!
Actually, Natesh and XL further prove an inverse χ² theorem: the inverse of the normal vector, renormalised into a quadratic form is an inverse χ² no matter what its covariance matrix. The proof of this amazing theorem relies on a spherical representation of the bivariate Gaussian (also underlying the Box-Müller algorithm). The angles are then jointly distributed as
$\exp\{-\sum_{i,j}\alpha_{ij}\cos(\theta_i-\theta_j)\}$
and from there follows the argument that conditional on the differences between the θ’s, all ratios are Cauchy distributed. Hence the conclusion!
A question that stems from reading this version of the paper is whether this property extends to other formats of non-independent Cauchy variates. Somewhat connected to my recent post about generating correlated variates from arbitrary distributions: using the inverse cdf transform of a Gaussian copula shows this is possibly the case: the following code is meaningless in that the empirical correlation has no connection with a “true” correlation, but nonetheless the experiment seems of interest…
> ro=.999999;x=matrix(rnorm(2e4),ncol=2);y=ro*x+sqrt(1-ro^2)*matrix(rnorm(2e4),ncol=2)
> cor(x[,1]/x[,2],y[,1]/y[,2])
[1] -0.1351967
> ro=.99999999;x=matrix(rnorm(2e4),ncol=2);y=ro*x+sqrt(1-ro^2)*matrix(rnorm(2e4),ncol=2)
> cor(x[,1]/x[,2],y[,1]/y[,2])
[1] 0.8622714
> ro=1-1e-5;x=matrix(rnorm(2e4),ncol=2);y=ro*x+sqrt(1-ro^2)*matrix(rnorm(2e4),ncol=2)
> z=qcauchy(pnorm(as.vector(x)));w=qcauchy(pnorm(as.vector(y)))
> cor(x=z,y=w)
[1] 0.9999732
> ks.test((z+w)/2,"pcauchy")
One-sample Kolmogorov-Smirnov test
data: (z + w)/2
D = 0.0068, p-value = 0.3203
alternative hypothesis: two-sided
> ro=1-1e-3;x=matrix(rnorm(2e4),ncol=2);y=ro*x+sqrt(1-ro^2)*matrix(rnorm(2e4),ncol=2)
> z=qcauchy(pnorm(as.vector(x)));w=qcauchy(pnorm(as.vector(y)))
> cor(x=z,y=w)
[1] 0.9920858
> ks.test((z+w)/2,"pcauchy")
One-sample Kolmogorov-Smirnov test
data: (z + w)/2
D = 0.0036, p-value = 0.9574
alternative hypothesis: two-sided
Posterior predictive p-values and the convex order
Posted in Books, Statistics, University life with tags , , , , , , , , , on December 22, 2014 by xi'an
Patrick Rubin-Delanchy and Daniel Lawson [of Warhammer fame!] recently arXived a paper we had discussed with Patrick when he visited Andrew and me last summer in Paris. The topic is the evaluation of the posterior predictive probability of a larger discrepancy between data and model
$\mathbb{P}\left( f(X|\theta)\ge f(x^\text{obs}|\theta) \,|\,x^\text{obs} \right)$
which acts like a Bayesian p-value of sorts. I discussed several times the reservations I have about this notion on this blog… Including running one experiment on the uniformity of the ppp while in Duke last year. One item of those reservations being that it evaluates the posterior probability of an event that does not exist a priori. Which is somewhat connected to the issue of using the data “twice”.
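To fix ideas, here is a toy R illustration of mine of the probability displayed above, for a N(θ,1) sample with a flat prior on θ and the likelihood itself playing the role of the discrepancy f (the model, prior and discrepancy are all arbitrary choices of mine, not the paper's):
> set.seed(2); n=25; xobs=rnorm(n,.3) # "observed" sample
> theta=rnorm(1e4,mean(xobs),1/sqrt(n)) # posterior draws of theta under a flat prior
> xrep=matrix(rnorm(1e4*n,rep(theta,n)),nrow=1e4) # posterior predictive replicas, row i drawn with theta[i]
> f=function(x,th) sum(dnorm(x,th,log=TRUE)) # log-likelihood as discrepancy
> ppp=mean(sapply(1:1e4,function(i) f(xrep[i,],theta[i])>=f(xobs,theta[i])))
> ppp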
“A posterior predictive p-value has a transparent Bayesian interpretation.”
Another item that was suggested [to me] in the current paper is the difficulty in defining the posterior predictive (pp), for instance by including latent variables
$\mathbb{P}\left( f(X,Z|\theta)\ge f(x^\text{obs},Z^\text{obs}|\theta) \,|\,x^\text{obs} \right)\,,$
which reminds me of the multiple possible avatars of the BIC criterion. The question addressed by Rubin-Delanchy and Lawson is how far this pp stands from the uniform distribution when the model is correct. The main result of their paper is that any sub-uniform distribution can be expressed as a particular posterior predictive. The authors also exhibit the distribution that achieves the bound produced by Xiao-Li Meng, namely that
$\mathbb{P}(P\le \alpha) \le 2\alpha$
where P is the above (top) probability. (Hence it is uniform up to a factor 2!) Obviously, the proximity with the upper bound only occurs in a limited number of cases that do not validate the overall use of the ppp. But this is certainly a nice piece of theoretical work.
XL definition of statistics in 24″
Posted in Books, Kids, Statistics, University life on September 22, 2013 by xi'an
Z-test, t-test, chi-squared test,
Bayes, Frequentist, Fiducial
Let me make you feel influential
Regression, Correlation, Causation,
What else can generate more passion?
Skewness, Kurtosis, Heteroscedasticity
Boy, do I feel sexy?
Xiao-Li Meng, at the Ig Nobel ceremony
XL for 24/7 fame at (Ig) Nobel
Posted in Kids, Statistics, University life on September 12, 2013 by xi'an
Tonight, our friend Xiao-Li Meng will (no doubt brilliantly) deliver a 24/7 lecture at the Ig Nobel Prize ceremony. It will be webcast live, with a watching party in Paris…. (A wee too late for me, I am afraid.) Although Xiao-Li is quite able to lecture 24/7 on statistics, this special lecture will last 24 seconds plus 7 words. Bets are open for the 7 words!! (Note that, contrary to the other Nobel Prize, the Ig Nobel Prizes often include winners in mathematics and statistics.)
estimating the measure and hence the constant
Posted in pictures, Running, Statistics, University life on December 6, 2012 by xi'an
As mentioned on my post about the final day of the ICERM workshop, Xiao-Li Meng addresses this issue of “estimating the constant” in his talk. It is even his central theme. Here are his (2011) slides as he sent them to me (with permission to post them!):
He therefore points out in slide #5 why the likelihood cannot be expressed in terms of the normalising constant because this is not a free parameter. Right! His explanation for the approximation of the unknown constant is then to replace the known but intractable dominating measure—in the sense that the integral cannot be computed—with a discrete (or non-parametric) measure supported by the sample. Because the measure is defined up to a constant, this leads to sample weights being proportional to the inverse density. Of course, this representation of the problem is open to criticism: why focus only on measures supported by the sample? The fact that it is the MLE is used as an argument in Xiao-Li's talk, but this can alternatively be seen as a drawback: I remember reviewing Dankmar Böhning's Computer-Assisted Analysis of Mixtures and being horrified when discovering this feature! I am currently more agnostic since this appears as an alternative version of empirical likelihood. There are still questions about the measure estimation principle: for instance, when handling several samples from several distributions, why should they all contribute to a single estimate of μ rather than to a product of measures? (Maybe because their models are all dominated by the same measure μ.) Now, getting back to my earlier remark, and as a possible answer to Larry's question, there could well be a Bayesian version of the above, avoiding the rough empirical likelihood via Gaussian or Dirichlet process prior modelling.
lemma 7.3
Posted in Statistics on November 14, 2012 by xi'an
As Xiao-Li Meng accepted to review—and I am quite grateful he managed to fit this review in an already overflowing deanesque schedule!— our 2004 book Monte Carlo Statistical Methods as part of a special book review issue of CHANCE honouring the memory of George thru his books—thanks to Sam Behseta for suggesting this!—, he sent me the following email about one of our proofs—demonstrating how much efforts he had put into this review!—:
I however have a question about the proof of Lemma 7.3
on page 273. After the expression of
E[h(x^(1)|x_0], the proof stated "and substitute
Eh(x) for h(x_1)". I cannot think of any
justification for this substitution, given the whole
purpose is to show h(x) is a constant.
I put it on hold for a while and only looked at it in the (long) flight to Chicago. Lemma 7.3 in Monte Carlo Statistical Methods is the result that the Metropolis-Hastings algorithm is Harris recurrent (and not only recurrent). The proof is based on the characterisation of Harris recurrence as having only constants for harmonic functions, i.e. those satisfying the identity
$h(x) = \mathbb{E}[h(X_t)|X_{t-1}=x]$
The chain being recurrent, the above implies that harmonic functions are almost everywhere constant, and the proof steps from almost everywhere to everywhere. The fact that the substitution above is valid—and I also stumbled upon that very subtlety when re-reading the proof in my plane seat!—comes from it occurring within an integral: despite sounding like using the result to prove the result, the argument is thus valid! Needless to say, we did not invent this (elegant) proof but took it from one of the early works on the theory of Metropolis-Hastings algorithms, presumably Luke Tierney's foundational Annals paper, which we should have quoted…
As pointed out by Xiao-Li, the proof is also confusing for the use of two notations for the expectation (one of which is indexed by f and the other corresponding to the Markov transition) and for the change in the meaning of f, now the stationary density, when compared with Theorem 6.80.
https://nips.cc/Conferences/2015/ScheduleMultitrack?event=5554
Poster
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Wei Sun · Zhaoran Wang · Han Liu · Guang Cheng
Wed Dec 09 04:00 PM -- 08:59 PM (PST) @ 210 C #79
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which is unobserved in previous work. Our theoretical results are backed by thorough numerical studies.
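For intuition, here is a rough R sketch of the alternating idea for the simplest (2-way, i.e. matrix-valued) case; the updates below are plain unpenalized flip-flop steps, whereas the paper's algorithm adds a sparsity penalty to each precision-matrix update, and the dimensions and data are made up for illustration.

set.seed(3)
p1 <- 5; p2 <- 4; n <- 200
X <- array(rnorm(n*p1*p2), c(p1, p2, n))   # toy data with both true precision matrices equal to I
Om1 <- diag(p1); Om2 <- diag(p2)           # initial precision estimates for each way
for (it in 1:20) {                         # alternate between the two ways
  S1 <- matrix(0, p1, p1)
  for (i in 1:n) S1 <- S1 + X[, , i] %*% Om2 %*% t(X[, , i])
  Om1 <- solve(S1/(n*p2))                  # update way-1 precision with way-2 held fixed
  S2 <- matrix(0, p2, p2)
  for (i in 1:n) S2 <- S2 + t(X[, , i]) %*% Om1 %*% X[, , i]
  Om2 <- solve(S2/(n*p1))                  # update way-2 precision with way-1 held fixed
}
round(Om1, 2); round(Om2, 2)               # both close to the identity, up to a common scale factor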
https://math.stackexchange.com/questions/3345073/two-very-advanced-harmonic-series-of-weight-5 | # Two very advanced harmonic series of weight $5$
Very recently Cornel discovered two (update: in fact there are more, as seen from the new entries) fascinating results involving harmonic series using ideas from his book, (Almost) Impossible Integrals, Sums, and Series, and which are the core of a new paper he's preparing:
$$\begin{equation*} \sum _{n=1}^{\infty } \frac{H_n H_{2 n}}{(2 n)^3} \end{equation*}$$ $$\begin{equation*} =\frac{307}{128}\zeta(5)-\frac{1}{16}\zeta (2) \zeta (3)+\frac{1}{3}\log ^3(2)\zeta (2) -\frac{7}{8} \log ^2(2)\zeta (3)-\frac{1}{15} \log ^5(2) \end{equation*}$$ $$\begin{equation*} -2 \log (2) \operatorname{Li}_4\left(\frac{1}{2}\right) -2 \operatorname{Li}_5\left(\frac{1}{2}\right); \end{equation*}$$ and $$\begin{equation*} \sum _{n=1}^{\infty } \frac{H_n H_{2 n}}{(2 n-1)^3} \end{equation*}$$ $$\begin{equation*} =6 \log (2)-2 \log ^2(2)-\frac{1}{12}\log ^4(2)+\frac{1}{12} \log ^5(2)-\frac{3}{2} \zeta (2)-\frac{21}{8} \zeta (3)+\frac{173}{32} \zeta (4) \end{equation*}$$ $$\begin{equation*} +\frac{527}{128} \zeta (5)-\frac{21 }{16}\zeta (2) \zeta (3)+\frac{3}{2} \log (2) \zeta (2)-\frac{7}{2}\log (2)\zeta (3)-4\log (2)\zeta (4)+\frac{1}{2} \log ^2(2) \zeta (2) \end{equation*}$$ $$\begin{equation*} -\frac{1}{2} \log ^3(2)\zeta (2)+\frac{7}{4}\log ^2(2)\zeta (3)-2 \operatorname{Li}_4\left(\frac{1}{2}\right)+2 \log (2)\operatorname{Li}_4\left(\frac{1}{2}\right), \end{equation*}$$ or, after adjustments, the form $$\sum _{n=1}^{\infty}\frac{H_n H_{2 n}}{(2 n+1)^3}$$ $$=\frac{1}{12}\log ^5(2)+\frac{31}{128} \zeta (5)-\frac{1}{2} \log ^3(2)\zeta (2)+\frac{7}{4} \log ^2(2) \zeta (3)-\frac{17}{8} \log (2)\zeta (4) \\+2\log (2) \operatorname{Li}_4\left(\frac{1}{2}\right).$$ Update I : A new series entry obtained based on the aforementioned series $$\begin{equation*} \sum _{n=1}^{\infty } \frac{H_n H_{2 n}^{(2)}}{(2 n)^2} \end{equation*}$$ $$\begin{equation*} =\frac{23 }{32}\zeta (2) \zeta (3)-\frac{581}{128} \zeta (5)-\frac{2}{3}\log ^3(2) \zeta (2)+\frac{7}{4} \log^2(2)\zeta (3) +\frac{2}{15} \log ^5(2) \end{equation*}$$ $$\begin{equation*} +4\log (2) \operatorname{Li}_4\left(\frac{1}{2}\right) +4 \operatorname{Li}_5\left(\frac{1}{2}\right); \end{equation*}$$ Update II : Another new series entry obtained based on the aforementioned series $$\begin{equation*} \sum _{n=1}^{\infty } \frac{H_n H_{2 n}^2}{(2 n)^2} \end{equation*}$$ $$\begin{equation*} =\frac{23 }{32}\zeta (2) \zeta (3)+\frac{917 }{128}\zeta (5)+\frac{2}{3} \log ^3(2)\zeta (2)-\frac{7}{4} \log ^2(2)\zeta (3)-\frac{2}{15} \log ^5(2) \end{equation*}$$ $$\begin{equation*} -4 \log (2)\operatorname{Li}_4\left(\frac{1}{2}\right)-4 \operatorname{Li}_5\left(\frac{1}{2}\right); \end{equation*}$$ Update III : And a new series entry from the same class of series with an unexpected (and outstanding) closed-form $$\begin{equation*} \sum _{n=1}^{\infty } \frac{H_{2n} H_{n}^{(2)}}{(2 n)^2}=\frac{101 }{64}\zeta (5)-\frac{5 }{16}\zeta (2) \zeta (3); \end{equation*}$$ It's interesting to note that $$\displaystyle \sum _{n=1}^{\infty } \frac{H_{n} H_{n}^{(2)}}{n^2}=\zeta(2)\zeta(3)+\zeta(5)$$, which may be found calculated in the book, (Almost) Impossible Integrals, Sums, and Series, by series manipulations.
A note: The series from UPDATE III seems to be known in literature, and it already appeared here https://math.stackexchange.com/q/1868355 (see $$(3)$$).
Update IV : Again a new series entry from the same class of series $$\begin{equation*} \sum _{n=1}^{\infty } \frac{H_n^2 H_{2 n}}{(2 n)^2} \end{equation*}$$ $$\begin{equation*} =\frac{9 }{16}\zeta (2) \zeta (3)+\frac{421 }{64}\zeta (5)+\frac{2}{3} \log ^3(2)\zeta (2) -\frac{7}{4} \log ^2(2)\zeta (3) -\frac{2}{15} \log^5(2) \end{equation*}$$ $$\begin{equation*} -4 \log(2)\operatorname{Li}_4\left(\frac{1}{2}\right) -4 \operatorname{Li}_5\left(\frac{1}{2}\right); \end{equation*}$$ Update V : A strong series - September 26, 2019 $$\sum _{n=1}^{\infty } \frac{H_{2n} H_n^{(2)}}{(2 n+1)^2}$$ $$=\frac{4}{3}\log ^3(2)\zeta (2) -\frac{7}{2}\log^2(2)\zeta (3)-\frac{21}{16}\zeta(2)\zeta(3)+\frac{713}{64} \zeta (5)-\frac{4}{15} \log ^5(2)$$ $$-8 \log (2)\operatorname{Li}_4\left(\frac{1}{2}\right) -8\operatorname{Li}_5\left(\frac{1}{2}\right);$$ Update VI : Three very challenging series - September 28, 2019 $$i) \ \sum _{n=1}^{\infty } \frac{H_n H_{2 n}^{(2)}}{(2 n+1)^2}$$ $$=\frac{35}{32} \zeta (2) \zeta (3)-\frac{651}{128} \zeta (5)+\frac{1}{3}\log^3(2)\zeta(2)-\frac{7}{4}\log^2(2)\zeta (3)+\frac{53}{16} \log (2)\zeta (4) -\frac{1}{30} \log ^5(2)$$ $$+4 \operatorname{Li}_5\left(\frac{1}{2}\right);$$ $$ii) \ \sum _{n=1}^{\infty } \frac{H_n H_{2 n}^2}{(2 n+1)^2}$$ $$=\frac{35}{32} \zeta (2) \zeta (3)+\frac{465}{128} \zeta (5)+\frac{1}{2}\log^3(2)\zeta(2)-\frac{7}{4}\log^2(2)\zeta (3)-\frac{11}{16} \log (2)\zeta (4) -\frac{1}{12} \log ^5(2)$$ $$-2\log(2) \operatorname{Li}_4\left(\frac{1}{2}\right);$$ $$iii) \ \sum _{n=1}^{\infty } \frac{H_n^2 H_{2 n}}{(2 n+1)^2}$$ $$=\frac{21}{16} \zeta (2) \zeta (3)-\frac{217}{64} \zeta (5)+\frac{2}{3}\log^3(2)\zeta(2)-\frac{7}{4}\log^2(2)\zeta (3)+ \log (2)\zeta (4) -\frac{1}{15} \log ^5(2)$$ $$+8\operatorname{Li}_5\left(\frac{1}{2}\right);$$ Update VII : Critical series relation used in the Update VI - September 28, 2019 $$i) \sum _{n=1}^{\infty } \frac{H_n H_{2 n}^2}{(2 n+1)^2}-\sum _{n=1}^{\infty } \frac{H_n H_{2 n}^{(2)}}{(2 n+1)^2}$$ $$=\frac{1}{6}\log ^3(2)\zeta (2) -4\log (2)\zeta (4)+\frac{279}{32} \zeta (5)-\frac{1}{20} \log ^5(2)-2 \log (2)\operatorname{Li}_4\left(\frac{1}{2}\right) -4 \operatorname{Li}_5\left(\frac{1}{2}\right);$$ $$ii) \ 4 \sum _{n=1}^{\infty } \frac{H_n H_{2 n}^2}{(2 n+1)^2}-\sum _{n=1}^{\infty } \frac{H_n^2 H_{2 n}}{(2 n+1)^2}$$ $$=\frac{49}{16} \zeta (2) \zeta (3)+\frac{1147}{64}\zeta (5)+\frac{4}{3}\log^3(2)\zeta (2) -\frac{21}{4} \log ^2(2)\zeta (3) -\frac{15}{4}\log (2)\zeta (4)-\frac{4}{15} \log ^5(2)$$ $$-8 \log (2)\operatorname{Li}_4\left(\frac{1}{2}\right) -8\operatorname{Li}_5\left(\frac{1}{2}\right),$$ where $$H_n^{(m)}=1+\frac{1}{2^m}+\cdots+\frac{1}{n^m}, \ m\ge1,$$ designates the $$n$$th generalized harmonic number of order $$m$$, $$\zeta$$ represents the Riemann zeta function, and $$\operatorname{Li}_n$$ denotes the Polylogarithm function.
A note: for example, for those interested, one of the possible ways of calculating both series from UPDATE III and UPDATE IV is based on building a system of relations with the two series by exploiting $$\displaystyle \int_0^1 x^{n-1} \log^2(1-x)\textrm{d}x=\frac{H_n^2+H_n^{(2)}}{n}$$ and $$\displaystyle \sum_{n=1}^{\infty} x^n(H_n^2-H_n^{(2)})=\frac{\log^2(1-x)}{1-x}$$. Apart from this, the series from UPDATE III allows at least a (very) elegant approach by using different means.
Using the first series we may obtain (based on the series representation of $$\log(1-x)\log(1+x)$$ and the integral $$\int_0^1 x^{n-1}\operatorname{Li}_2(x)\textrm{d}x$$) a way for proving that $$\int_0^1 \frac{\operatorname{Li}_2(x) \log (1+x) \log (1-x)}{x} \textrm{d}x=\frac{29 }{64}\zeta (5)-\frac{5 }{8}\zeta (2) \zeta (3).$$
Then, based on the solution below and using the alternating harmonic series in the book, (Almost) Impossible Integrals, Sums, and Series, we have
$$\int_0^1 \frac{\operatorname{Li}_2(-x) \log (1+x) \log (1-x)}{x} \textrm{d}x$$ $$=\frac{5 }{16}\zeta (2) \zeta (3)+\frac{123 }{32}\zeta (5)+\frac{2}{3} \log ^3(2)\zeta (2)-\frac{7}{4} \log ^2(2)\zeta (3)-\frac{2}{15}\log ^5(2)\\-4 \log (2)\operatorname{Li}_4\left(\frac{1}{2}\right)-4 \operatorname{Li}_5\left(\frac{1}{2}\right).$$ And if we add up the two previous integrals, we get $$\int_0^1 \frac{\operatorname{Li}_2(x^2) \log (1+x) \log (1-x)}{x} \textrm{d}x$$ $$=\frac{275}{32}\zeta (5)-\frac{5 }{8}\zeta (2) \zeta (3)+\frac{4}{3} \log ^3(2)\zeta (2)-\frac{7}{2} \log ^2(2)\zeta (3)-\frac{4}{15}\log ^5(2)\\-8 \log (2)\operatorname{Li}_4\left(\frac{1}{2}\right)-8 \operatorname{Li}_5\left(\frac{1}{2}\right).$$ Update (integrals): Another curious integral arising during the calculations $$\int_0^1 \frac{x \log (x) \log(1-x^2) \operatorname{Li}_2(x)}{1-x^2} \textrm{d}x=\frac{41 }{32}\zeta (2) \zeta (3)-\frac{269 }{128}\zeta (5).$$
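As a crude numerical sanity check of mine on, e.g., the closed form of Update III above (the truncation points below are arbitrary), the partial sum and the right-hand side agree up to the neglected tail, which is of order 1e-6:
> N=2e6; n=1:N
> H=cumsum(1/(1:(2*N))) # harmonic numbers H_1,...,H_{2N}
> H2=cumsum(1/(1:N)^2) # generalized harmonic numbers H_n^{(2)}
> lhs=sum(H[2*n]*H2[n]/(2*n)^2) # partial sum of the Update III series
> zeta=function(s) sum(1/(1:1e7)^s) # crude zeta values, accurate enough here
> rhs=101/64*zeta(5)-5/16*zeta(2)*zeta(3)
> c(lhs,rhs)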
QUESTION: Have these series ever been known in literature? I'm not interested in solutions but only if the series appear anywhere in the literature.
• David Bailey would be as likely to know as anyone. – Gerry Myerson Sep 5 at 12:29
• Cornel is the real boss. – Ali Shather Sep 5 at 17:09
• Have you tried to contact Bailey, 97357329? – Gerry Myerson Sep 6 at 13:09
• @GerryMyerson May I guess you refer to David H. Bailey? en.wikipedia.org/wiki/David_H._Bailey_(mathematician). I didn't yet. I don't know him personally. – user97357329 Sep 6 at 13:13
• Yes, that's the man. He's very approachable & I think he'd be interested. – Gerry Myerson Sep 6 at 22:50
A solution in large steps by Cornel Ioan Valean:
Considering $$\displaystyle -\log(1+y)\log(1-y)=\sum_{n=1}^{\infty} y^{2n} \frac{H_{2n}-H_n}{n}+\frac{1}{2}\sum_{n=1}^{\infty} \frac{y^{2n}}{n^2}$$ where we divide both sides by $$y$$ and then integrate from $$y=0$$ to $$y=x$$, we have $$\displaystyle -\int_0^x \frac{\log(1+y)\log(1-y)}{y}\textrm{d}y=\sum_{n=1}^{\infty} x^{2n} \frac{H_{2n}-H_n}{2n^2}+\frac{1}{4}\sum_{n=1}^{\infty} \frac{x^{2n}}{n^3}$$. Now, if we multiply both sides of this last result by $$\log(1+x)/x$$ and then integrate from $$x=0$$ to $$x=1$$, using the fact that $$\displaystyle \int_0^1 x^{2n-1}\log(1+x) \textrm{d}x=\frac{H_{2n}-H_n}{2n}$$, we get
{A specific note: one can multiply both sides of the relation above by $$\log(1-x)/x$$ instead of $$\log(1+x)/x$$ and use the integral, $$\int_0^1 x^{n-1}\log(1-x)\textrm{d}x=-H_n/n$$, but later in the process one might still like to use the version $$\int_0^1 x^{2n-1}\log(1+x) \textrm{d}x$$ to get the calculations to work out nicely.}
$$\underbrace{-\int_0^1 \frac{\log(1+x)}{x}\left(\int_0^x \frac{\log(1+y)\log(1-y)}{y}\textrm{d}y\right)\textrm{d}x}_{\displaystyle I}=\frac{5}{4}\sum_{n=1}^{\infty}\frac{H_n^2}{n^3}+\frac{7}{8}\sum_{n=1}^{\infty}\frac{H_n}{n^4}-\sum_{n=1}^{\infty}(-1)^{n-1}\frac{H_{n}}{n^4}-\displaystyle\sum_{n=1}^{\infty}(-1)^{n-1}\frac{H_n^2}{n^3}-\frac{1}{2}\sum_{n=1}^{\infty}\frac{H_nH_{2n}}{n^3}.$$
Integrating by parts, the integral $$I$$ may be written as $$5/16\zeta(2)\zeta(3)-\underbrace{\int_0^1 \frac{\text{Li}_2(-x) \log (1+x) \log (1-x)}{x} \textrm{d}x}_{J}$$, and then we may write that $$\sum_{n=1}^{\infty}\frac{H_nH_{2n}}{n^3}=2\int_0^1 \frac{\text{Li}_2(-x) \log (1+x) \log (1-x)}{x} \textrm{d}x-\frac{5}{8}\zeta(2)\zeta(3)+\frac{5}{2}\sum_{n=1}^{\infty}\frac{H_n^2}{n^3}+\frac{7}{4}\sum_{n=1}^{\infty}\frac{H_n}{n^4}-2\sum_{n=1}^{\infty}(-1)^{n-1}\frac{H_{n}}{n^4}-2\sum_{n=1}^{\infty}(-1)^{n-1}\frac{H_n^2}{n^3}\tag1 .$$
Now, the last magical part comes from considering expressing the integral $$J$$ in a different way, and by using the Cauchy product, $$\displaystyle \operatorname{Li}_2(-x)\log(1+x)=3\sum_{n=1}^{\infty}(-1)^n \frac{x^n}{n^3}-2 \sum_{n=1}^{\infty}(-1)^n x^n\frac{H_n}{n^2}-\sum_{n=1}^{\infty}(-1)^nx^n\frac{H_n^{(2)}}{n}$$, we get that
$$\int_0^1 \frac{\operatorname{Li}_2(-x) \log (1+x) \log (1-x)}{x} \textrm{d}x= -\sum _{n=1}^{\infty }(-1)^{n-1}\frac{ H_n H_n^{(2)}}{n^2}+3\sum _{n=1}^{\infty }(-1)^{n-1}\frac{ H_n}{n^4}\\-2\sum _{n=1}^{\infty }(-1)^{n-1}\frac{H_n^2}{n^3}.\tag2$$
Combining $$(1)$$ and $$(2)$$ and collecting the values of the series from the book, (Almost) Impossible Integrals, Sums, and Series, we are done with the first series.
To get the value of the second series we might use the relation:
$$\begin{equation*} \sum _{n=1}^{\infty } \frac{H_n H_{2 n}}{(2 n-1)^3}-\sum _{n=1}^{\infty } \frac{H_n H_{2 n}}{(2 n)^3} \end{equation*}$$ $$\begin{equation*} =6 \log (2)-2 \log ^2(2)-\frac{1}{12}\log ^4(2)+\frac{3}{20} \log ^5(2)-\frac{3}{2} \zeta (2)-\frac{21}{8} \zeta (3)+\frac{173}{32} \zeta (4) \end{equation*}$$ $$\begin{equation*} +\frac{55}{32} \zeta (5)-\frac{5 }{4}\zeta (2) \zeta (3)+\frac{3}{2} \log (2) \zeta (2)-\frac{7}{2}\log (2)\zeta (3)-4\log (2)\zeta (4)+\frac{1}{2} \log ^2(2) \zeta (2) \end{equation*}$$ $$\begin{equation*} -\frac{5}{6} \log ^3(2)\zeta (2)+\frac{21}{8}\log ^2(2)\zeta (3)-2 \operatorname{Li}_4\left(\frac{1}{2}\right)+4 \log (2)\operatorname{Li}_4\left(\frac{1}{2}\right)+2 \operatorname{Li}_5\left(\frac{1}{2}\right), \end{equation*}$$
and this is obtained by using a very similar strategy to the one given in Section 6.59, pages $$530$$-$$532$$, from the book, (Almost) Impossible Integrals, Sums, and Series. The critical identity here is given in (6.289).
A detailed solution will appear soon in a new paper.
UPDATE (September $$30$$, $$2019$$)
A magical way to the series $$\sum_{n=1}^\infty\frac{H_{2n}H_n^{(2)}}{(2n)^2}$$ by Cornel Ioan Valean
By the Cauchy product, we have $$\operatorname{Li}_2(x^2) \log(1-x^2)= 3\sum _{n=1}^{\infty } \frac{x^{2 n}}{n^3}-2\sum _{n=1}^{\infty } x^{2n}\frac{H_n}{n^2}-\sum _{n=1}^{\infty } x^{2n}\frac{H_n^{(2)}}{n}$$, and if we multiply both sides by $$\log(1-x)/x$$, and integrate from $$x=0$$ to $$x=1$$, using that $$\int_0^1 x^{n-1}\log(1-x)\textrm{d}x=-H_n/n$$, and doing all the reductions, we arrive at
$$2\sum _{n=1}^{\infty } \frac{H_{2 n} H_n^{(2)}}{(2 n)^2}-12\sum _{n=1}^{\infty } \frac{H_n}{n^4}+12\sum _{n=1}^{\infty }(-1)^{n-1} \frac{H_n}{n^4}+\sum _{n=1}^{\infty } \frac{H_n H_{2 n}}{n^3}$$ $$=\int_0^1 \frac{\text{Li}_2\left(x^2\right) \log \left(1-x^2\right) \log (1-x)}{x} \textrm{d}x$$ $$=\int_0^1 \frac{\text{Li}_2\left(x^2\right) \log (1+x) \log (1-x)}{x} \textrm{d}x+2 \int_0^1 \frac{\text{Li}_2(-x) \log ^2(1-x)}{x} \textrm{d}x\\+2 \int_0^1 \frac{\text{Li}_2(x) \log ^2(1-x)}{x} \textrm{d}x$$ $$=\int_0^1 \frac{\text{Li}_2\left(x^2\right) \log (1+x) \log (1-x)}{x} \textrm{d}x+2 \sum _{n=1}^{\infty } \frac{H_n^2}{n^3}-2 \sum _{n=1}^{\infty } \frac{(-1)^{n-1}H_n^2}{n^3}+2 \sum _{n=1}^{\infty } \frac{H_n^{(2)}}{n^3}\\-2 \sum _{n=1}^{\infty }(-1)^{n-1} \frac{ H_n^{(2)}}{n^3},$$ where the last integral is given here Two very advanced harmonic series of weight $5$, and all the last resulting harmonic series are given in the book (Almost) Impossible Integrals, Sums, and Series. The reduction to the last series has been achieved by using the identity, $$\displaystyle \int_0^1 x^{n-1}\log^2(1-x)\textrm{d}x=\frac{H_n^2+H_n^{(2)}}{n}$$. The series $$\sum _{n=1}^{\infty } \frac{H_n H_{2 n}}{n^3}$$ may be found calculated in the paper On the calculation of two essential harmonic series with a weight 5 structure, involving harmonic numbers of the type H_{2n} by Cornel Ioan Valean. Thus, we have
$$\sum_{n=1}^\infty\frac{H_{2n}H_n^{(2)}}{(2n)^2}=\frac{101}{64}\zeta(5)-\frac5{16}\zeta(2)\zeta(3).$$
All the details will appear in a new paper.
UPDATE (October $$30$$, $$2019$$) The details with respect to the evaluation of the previous series may be found in the preprint The evaluation of a special harmonic series with a weight $$5$$ structure, involving harmonic numbers of the type $$H_{2n}$$
• (+1) for sharing such awesome stuff. – Ali Shather Sep 5 at 17:14
• @AliShather Thank you. – user97357329 Sep 6 at 7:54
We have
$$\frac{\ln^2(1-y)}{1-y}=\sum_{n=1}^\infty y^n(H_n^2-H_n^{(2)})\tag{1}$$
integrate both sides of (1) from $$y=0$$ to $$y=x$$ to get
$$-\frac13\ln^3(1-x)=\sum_{n=1}^\infty\frac{x^{n+1}}{n+1}\left(H_n^2-H_n^{(2)}\right)=\sum_{n=1}^\infty\frac{x^{n}}{n}\left(H_n^2-H_n^{(2)}-\frac{2H_n}{n}+\frac{2}{n^2}\right)\tag{2}$$
Now replace $$x$$ with $$x^2$$ in (2) then multiply both sides by $$-\frac{\ln(1-x)}{x}$$ and integrate from $$x=0$$ to $$x=1$$, also note that $$\int_0^1 -x^{2n-1}\ln(1-x)\ dx=\frac{H_{2n}}{2n}$$ we get
$$\frac13\underbrace{\int_0^1\frac{\ln^3(1-x^2)\ln(1-x)}{x}\ dx}_{\large I}=\sum_{n=1}^\infty\frac{H_{2n}}{2n^2}\left(H_n^2-H_n^{(2)}-\frac{2H_n}{n}+\frac{2}{n^2}\right)$$
Rearranging the terms to get
$$\sum_{n=1}^\infty\frac{H_{2n}H_n^2}{(2n)^2}=\sum_{n=1}^\infty\frac{H_{2n}H_n^{(2)}}{(2n)^2}+4\sum_{n=1}^\infty\frac{H_{2n}H_n}{(2n)^3}-8\sum_{n=1}^\infty\frac{H_{2n}}{(2n)^4}+\frac16I\tag{3}$$
@nospoon mentioned in equation (3) of his solution that he found
$$\sum_{n=1}^{\infty} \frac{H_{n-1}^{(2)}\,H_{2n}}{n^2} =\frac{11}{4}\zeta(2)\,\zeta(3)-\frac{47}{16}\zeta(5)$$
Or
$$\boxed{\sum _{n=1}^{\infty } \frac{H_{2n} H_{n}^{(2)}}{(2 n)^2}=\frac{101 }{64}\zeta (5)-\frac{5 }{16}\zeta (2) \zeta (3)}$$
Also Cornel elegantly calculated the second sum above
$$\boxed{\small{\sum _{n=1}^{\infty } \frac{H_{2 n}H_n }{(2 n)^3}=\frac{307}{128}\zeta(5)-\frac{1}{16}\zeta (2) \zeta (3)+\frac{1}{3}\ln ^3(2)\zeta (2) -\frac{7}{8} \ln ^2(2)\zeta (3)-\frac{1}{15} \ln ^5(2) -2 \ln (2) \operatorname{Li}_4\left(\frac{1}{2}\right) -2 \operatorname{Li}_5\left(\frac{1}{2}\right)}}$$
For the third sum: $$\sum_{n=1}^\infty\frac{H_{2n}}{(2n)^4}=\frac12\sum_{n=1}^\infty\frac{H_{n}}{n^4}+\frac12\sum_{n=1}^\infty(-1)^n\frac{H_{n}}{n^4}$$
plugging the common results:
$$\sum_{n=1}^\infty\frac{H_{n}}{n^4}=3\zeta(5)-\zeta(2)\zeta(3)$$
$$\sum_{n=1}^\infty(-1)^n\frac{H_{n}}{n^4}=\frac12\zeta(2)\zeta(3)-\frac{59}{32}\zeta(5)$$
we get
$$\boxed{\sum_{n=1}^\infty\frac{H_{2n}}{(2n)^4}=\frac{37}{64}\zeta(5)-\frac14\zeta(2)\zeta(3)}$$
For the remaining integral $$I$$, we use the magical identity
$$(a+b)^3a=a^4-b^4+\frac12(a+b)^4-\frac12(a-b)^4-(a-b)^3b$$
with $$a=\ln(1-x)$$ and $$b=\ln(1+x)$$ we can write
$$I=\int_0^1\frac{\ln^4(1-x)}{x}\ dx-\int_0^1\frac{\ln^4(1+x)}{x}\ dx+\frac12\underbrace{\int_0^1\frac{\ln^4(1-x^2)}{x}\ dx}_{x^2\mapsto x}\\-\underbrace{\frac12\int_0^1\frac{\ln^4\left(\frac{1-x}{1+x}\right)}{x}\ dx}_{\frac{1-x}{1+x}\mapsto x}-\underbrace{\int_0^1\frac{\ln^3\left(\frac{1-x}{1+x}\right)\ln(1+x)}{x}\ dx}_{\frac{1-x}{1+x}\mapsto x}$$
$$I=\frac54\underbrace{\int_0^1\frac{\ln^4(1-x)}{x}\ dx}_{4!\zeta(5)}-\underbrace{\int_0^1\frac{\ln^4(1+x)}{x}\ dx}_{K}-\underbrace{\int_0^1\frac{\ln^4x}{1-x^2}\ dx}_{\frac{93}{4}\zeta(5)}+\underbrace{2\int_0^1\frac{\ln^3x\ln\left(\frac{1+x}{2}\right)}{1-x^2}\ dx}_{J}$$
$$I=\frac{27}{4}\zeta(5)-K+J\tag{4}$$
we have
\begin{align} K&=\int_0^1\frac{\ln^4(1+x)}{x}=\int_{1/2}^1\frac{\ln^4x}{x}\ dx+\int_{1/2}^1\frac{\ln^4x}{1-x}\ dx\\ &=\frac15\ln^52+\sum_{n=1}^\infty\int_{1/2}^1 x^{n-1}\ln^4x\ dx\\ &=\frac15\ln^52+\sum_{n=1}^\infty\left(\frac{24}{n^5}-\frac{24}{n^52^n}-\frac{24\ln2}{n^42^n}-\frac{12\ln^22}{n^32^n}-\frac{4\ln^32}{n^22^n}-\frac{\ln^42}{n2^n}\right)\\ &=4\ln^32\zeta(2)-\frac{21}2\ln^22\zeta(3)+24\zeta(5)-\frac45\ln^52-24\ln2\operatorname{Li}_4\left(\frac12\right)-24\operatorname{Li}_5\left(\frac12\right) \end{align}
and
$$J=2\int_0^1\frac{\ln^3x\ln\left(\frac{1+x}{2}\right)}{1-x^2}\ dx=\int_0^1\frac{\ln^3x\ln\left(\frac{1+x}{2}\right)}{1-x}\ dx+\int_0^1\frac{\ln^3x\ln\left(\frac{1+x}{2}\right)}{1+x}\ dx$$
using the rule
$$\int_0^1\frac{\ln^ax\ln\left(\frac{1+x}{2}\right)}{1-x}\ dx=(-1)^aa!\sum_{n=1}^\infty\frac{(-1)^nH_n^{a+1}}{n}$$
allows us to write
\begin{align} J&=-6\sum_{n=1}^\infty\frac{(-1)^nH_n^{(4)}}{n}+\int_0^1\frac{\ln^3x\ln(1+x)}{1+x}\ dx-\ln2\int_0^1\frac{\ln^3x}{1+x}\ dx\\ &=-6\sum_{n=1}^\infty\frac{(-1)^nH_n^{(4)}}{n}-\sum_{n=1}^\infty(-1)^n H_n\int_0^1x^n\ln^3x\ dx-\ln2\left(-\frac{21}4\zeta(4)\right)\\ &=-6\sum_{n=1}^\infty\frac{(-1)^nH_n^{(4)}}{n}+6\sum_{n=1}^\infty\frac{(-1)^n H_n}{(n+1)^4}+\frac{21}{4}\ln2 \zeta(4)\\ &=-6\sum_{n=1}^\infty\frac{(-1)^nH_n^{(4)}}{n}-6\sum_{n=1}^\infty\frac{(-1)^n H_n}{n^4}-\frac{45}{8}\zeta(5)+\frac{21}{4}\ln2 \zeta(4) \end{align}
Plugging
$$\sum_{n=1}^\infty\frac{(-1)^nH_n^{(4)}}{n}=\frac78\ln2\zeta(4)+\frac38\zeta(2)\zeta(3)-2\zeta(5)$$
we get
$$J=\frac{279}{16}\zeta(5)-\frac{21}{4}\zeta(2)\zeta(3)$$
Plugging the results of $$K$$ and $$J$$ in (4) we get
$$\boxed{\small{I=24\operatorname{Li}_5\left(\frac12\right)+24\ln2\operatorname{Li}_4\left(\frac12\right)+\frac3{16}\zeta(5)-\frac{21}{4}\zeta(2)\zeta(3)+\frac{21}2\ln^22\zeta(3)-4\ln^32\zeta(2)+\frac45\ln^52}}$$
and finally by substituting the boxed results in (3) we get
$$\sum _{n=1}^{\infty } \frac{H_{2 n}H_n^2 }{(2 n)^2} =\frac{9 }{16}\zeta (2) \zeta (3)+\frac{421 }{64}\zeta (5)+\frac{2}{3} \ln ^3(2)\zeta (2) -\frac{7}{4} \ln ^2(2)\zeta (3)\\ -\frac{2}{15} \ln^5(2) -4 \ln2\operatorname{Li}_4\left(\frac{1}{2}\right) -4 \operatorname{Li}_5\left(\frac{1}{2}\right)$$
Note:
$$\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^4}$$ can be found here and $$\sum_{n=1}^\infty\frac{(-1)^nH_n^{(4)}}{n}$$ can be found here.
• (+1) Thanks. However, I specified clearly that I only need references. – user97357329 Sep 9 at 11:26
• Thank you and sorry I didnt read that part. – Ali Shather Sep 9 at 14:08
http://mathwiki.cs.ut.ee/probability_theory | # Probability theory
Quite often in life we do not know for sure what the result of an experiment or a process will be. For example, a hacker tries to attack a particular information system.
Surely both the attacker and the owner of the information system want to know the risks. So how can we describe what is going to happen? The answer is given by probability theory.
https://wiki.analytica.com/index.php?title=Correlate_With | # Correlate With
## Correlate_With(S, ReferenceS, rankCorr)
Used to specify a distribution having a specified rank correlation with a reference distribution. «S» is the marginal distribution of the result. «ReferenceS» is the reference distribution.
Reorders the samples of «S» so that the result is correlated with the reference sample with a rank correlation close to «rankCorr».
## Library
Multivariate Distributions library functions (Multivariate Distributions.ana)
## Example
To generate a LogNormal distribution that is highly correlated with Ch1 (which may be any distribution), use e.g.:
Correlate_With(LogNormal(2, 3), Ch1, 0.8)
## Notes
Most commonly, when the term "correlation" is used, it is implied to mean Pearson Correlation, which is essentially a measure of linearity. Creating a distribution with this measure of correlation makes most sense when the joint distribution is Gaussian, i.e., each marginal distribution is Normal. In this case, you can specify the mean and variance of each variable, and the covariance for each pair of variables, and use the Gaussian function (found in the Multivariate Distribution library) to define the joint distribution. The covariance of two random variables is the correlation of the two variables times the product of their standard deviations, so the Gaussian can be defined directly in terms of Pearson Correlations. The BiNormal function may also be used when defining a 2-D Gaussian.
For non-Gaussian distributions, it is not necessarily possible for two distributions to have a desired Pearson correlation. However, we can ensure a given Rank Correlation, also called Spearman correlation. This is what Correlate_With and Correlate_Dists use.
Correlate_With is the most convenient way for specifying two univariate distributions with a given rank correlation. If you have three or more distributions that are mutually correlated, then you will need a symmetric matrix of rank correlations, and will need to use the Correlate_Dists function.
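For readers who want to experiment outside Analytica, the following R sketch (mine, not the actual Correlate_With implementation) conveys the reordering idea: rank-match the sample of «S» to an auxiliary Gaussian that is correlated with the normal scores of the reference sample.

correlate_with <- function(s, ref, rankCorr) {
  n <- length(s)
  zref <- qnorm(rank(ref, ties.method = "random")/(n + 1))  # normal scores of the reference
  r <- 2*sin(pi*rankCorr/6)     # Pearson value on the Gaussian scale matching the target Spearman
  zaux <- r*zref + sqrt(1 - r^2)*rnorm(n)                   # auxiliary Gaussian tied to the reference
  sort(s)[rank(zaux, ties.method = "random")]               # reorder s to follow the auxiliary ranks
}
ref <- rweibull(1e4, 2)         # stand-in reference sample (any distribution)
out <- correlate_with(rlnorm(1e4), ref, 0.8)
cor(out, ref, method = "spearman")                          # close to the requested 0.8

The marginal distribution of the result is untouched, since the output is just a permutation of the input sample; the 2·sin(π·rankCorr/6) adjustment converts the requested Spearman value into the Pearson correlation imposed on the underlying Gaussians.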
## Precision
The actual sample rank correlation of the sample generated will differ slightly from the requested rank correlation, due to the fact that the samples have a finite number of points. This sampling error reduces as you increase sample size. The standard deviation of this sampling error (i.e., of the difference between the sample rank correlation and the requested rank correlation) is
$se = \sqrt{\frac{1-rc^2}{n-2}}$
where n is the sample size and rc is the requested rank correlation. For example, when using a sample size of n = 100 and rc = 0.7, we expect a sampling error in actual rank correlation of 0.07. There is therefore about 68% chance that the rank correlation of the samples will be between 0.63 and 0.77, but also a 5% chance it might be less than 0.56 or greater than 0.84.
https://asmedigitalcollection.asme.org/GT/proceedings-abstract/GT2013/55157/V03BT13A041/249599 | Film-cooling is one of the most prevalent cooling technologies that is used for gas turbine airfoil surfaces. Numerous studies have been conducted to give the cooling effectiveness over ranges of velocity, density, mass flux, and momentum flux ratios. Few studies have reported flowfield measurements with even fewer of those providing time-resolved flowfields. This paper provides time-averaged and time-resolved particle image velocimetry data for a film-cooling flow at low and high density ratios. A generic film-cooling hole geometry with wide lateral spacing was used for this study, which was a 30° inclined round hole injecting along a flat plate with lateral spacing P/D = 6.7. The jet Reynolds number for flowfield testing varied from 2500 to 7000. The data indicate differences in the flowfield and turbulence characteristics for the same momentum flux ratios at the two density ratios. The time-resolved data indicate Kelvin-Helmholtz breakdown in the jet-to-freestream shear layer.
https://par.nsf.gov/search/author:%22Pettini,%20Max%22 | # Search for:All records
Creators/Authors contains: "Pettini, Max"
1. Abstract We report the first direct measurement of the helium isotope ratio, 3He/4He, outside of the Local Interstellar Cloud, as part of science-verification observations with the upgraded CRyogenic InfraRed Echelle Spectrograph. Our determination of 3He/4He is based on metastable He I* absorption along the line of sight toward Θ2 A Ori in the Orion Nebula. We measure a value 3He/4He = (1.77 ± 0.13) × 10^−4, which is just ∼40% above the primordial relative abundance of these isotopes, assuming the Standard Model of particle physics and cosmology, (3He/4He)_p = (1.257 ± 0.017) × 10^−4. We calculate a suite of galactic chemical evolution simulations to study the Galactic build-up of these isotopes, using the yields from Limongi & Chieffi for stars in the mass range M = 8–100 M⊙ and Lagarde et al. for M = 0.8–8 M⊙. We find that these simulations simultaneously reproduce the Orion and protosolar 3He/4He values if the calculations are initialized with a primordial ratio (3He/4He)_p = (1.043 ± 0.089) × 10 […]
2. Abstract Detailed analyses of high-redshift galaxies are challenging because these galaxies are faint, but this difficulty can be overcome with gravitational lensing, in which the magnification of the flux enables spectroscopy with a high signal-to-noise ratio (S/N). We present the rest-frame ultraviolet (UV) Keck Echellette Spectrograph and Imager (ESI) spectrum of the newly discovered z = 2.79 lensed galaxy SDSS J1059+4251. With an observed magnitude F814W = 18.8 and a magnification factor μ = 31 ± 3, J1059+4251 is both highly magnified and intrinsically luminous, about two magnitudes brighter than M_UV^* at z ∼ 2–3. With a stellar mass M_* = (3.22 ± 0.20) × 10^10 M⊙, star formation rate SFR = 50 ± 7 M⊙ yr^−1, and stellar metallicity Z_* ≃ 0.15–0.5 Z⊙, J1059+4251 is typical of bright star-forming galaxies at similar redshifts. Thanks to the high S/N and the spectral resolution of the ESI spectrum, we are able to separate the interstellar and stellar features and derive properties that would be inaccessible without the aid of the lensing. We find evidence of a gas outflow with speeds up to −1000 km s^−1, and of […]
3. ABSTRACT We present new measurements of the spatial distribution and kinematics of neutral hydrogen in the circumgalactic and intergalactic medium surrounding star-forming galaxies at z ∼ 2. Using the spectra of ≃3000 galaxies with redshifts 〈z〉 = 2.3 ± 0.4 from the Keck Baryonic Structure Survey, we assemble a sample of more than 200 000 distinct foreground-background pairs with projected angular separations of 3–500 arcsec and spectroscopic redshifts, with 〈zfg〉 = 2.23 and 〈zbg〉 = 2.57 (foreground and background redshifts, respectively). The ensemble of sightlines and foreground galaxies is used to construct a 2D map of the mean excess H I Lyα optical depth relative to the intergalactic mean as a function of projected galactocentric distance (20 ≲ Dtran/pkpc ≲ 4000) and line-of-sight velocity. We obtain accurate galaxy systemic redshifts, providing significant information on the line-of-sight kinematics of H I gas as a function of projected distance Dtran. We compare the map with a cosmological zoom-in simulation, finding qualitative agreement between them. A simple two-component (accretion, outflow) analytical model generally reproduces the observed line-of-sight kinematics and projected spatial distribution of H I. The best-fitting model suggests that galaxy-scale outflows with initial velocity v_out ≃ 600 km s^−1 dominate the kinematics of circumgalactic H I out to Dtran ≃ 50 kpc, while […]
4. ABSTRACT
The combination of the MOSDEF and KBSS-MOSFIRE surveys represents the largest joint investment of Keck/MOSFIRE time to date, with ∼3000 galaxies at 1.4 ≲ z ≲ 3.8, roughly half of which are at z ∼ 2. MOSDEF is photometric- and spectroscopic-redshift selected with a rest-optical magnitude limit, while KBSS-MOSFIRE is primarily selected based on rest-UV colours and a rest-UV magnitude limit. Analysing both surveys in a uniform manner with consistent spectral-energy-distribution (SED) models, we find that the MOSDEF z ∼ 2 targeted sample has higher median M* and redder rest U−V colour than the KBSS-MOSFIRE z ∼ 2 targeted sample, and smaller median SED-based SFR and sSFR (SFR(SED) and sSFR(SED)). Specifically, MOSDEF targeted a larger population of red galaxies with U−V and V−J ≥1.25, while KBSS-MOSFIRE contains more young galaxies with intense star formation. Despite these differences in the z ∼ 2 targeted samples, the subsets of the surveys with multiple emission lines detected and analysed in previous work are much more similar. All median host-galaxy properties with the exception of stellar population age – i.e. M*, SFR(SED), sSFR(SED), AV, and UVJ colours – agree within the uncertainties. Additionally, when uniform emission-line fitting and stellar Balmer absorption correction techniques […]
http://mathhelpforum.com/calculus/25440-integral.html | # Math Help - Integral
1. ## Integral
$\int \frac{1}{x^7-x}\;dx$
Thanks
2. There are various things you can do. Partial fractions. Maybe just a small factoring and a sub.
$\int\frac{1}{x(x^{6}-1)}dx$
Let $u=x^{6}-1, \;\ du=6x^{5}dx, \;\ \frac{du}{6}=x^{5}dx, \;\ u+1=x^{6}$
This leads to $\frac{1}{6}\int\frac{1}{u^{2}+u}du$
$\frac{1}{6}\left[\int\frac{1}{u}du-\int\frac{1}{u+1}du\right]$
Now, continue?.
If you feel industrious, you could do the PFD thing.
$x^{7}-x = x(x+1)(x-1)(x^{2}+x+1)(x^{2}-x+1)$
3. How exactly does this: Let $u=x^{6}-1, \;\ du=6x^{5}dx, \;\ \frac{du}{6}=x^{5}dx, \;\ u+1=x^{6}$ lead to this:
$\frac{1}{6}\int\frac{1}{u^{2}+u}du$
thats the part i didn't understand, thats why i couldn't use subst. myself.
The aftermath is easy....
Originally Posted by galactus
If you feel industrious, you could do the PFD thing.
$x^{7}-x = x(x+1)(x-1)(x^{2}+x+1)(x^{2}-x+1)$
After i posted this, i actually solved it, but i did this before applying PF....the above would take too long...
$\int \frac{dx}{x(x^3-1)(x^3+1)}=\frac{1}{2}\int \frac{(x^3+1)-(x^3-1)}{x(x^3-1)(x^3+1)}\,dx=\frac{1}{2}\int\left(\frac{1}{x(x^3-1)}-\frac{1}{x(x^3+1)}\right)dx$
4. Let's see
We want: $\int{\frac{dx}{x\cdot{(x^6-1)}}}$
Multiply and divide by $6\cdot{x^5}$ we get: $\int{\frac{6\cdot{x^5}}{6\cdot{x^6}\cdot{(x^6-1)}}}dx$
We now let: $u=x^6-1\rightarrow{\frac{du}{dx}=6\cdot{x^5}}$
Thus we have: $\int{\frac{du}{6\cdot{u}\cdot{(u+1)}}}=\frac{1}{6} \cdot{\int{\frac{du}{u\cdot{(u+1)}}}}$
5. Originally Posted by PaulRS
Multiply and divide by $6\cdot{x^5}$
That was the critical step i need to know!!
Thanks
6. Originally Posted by polymerase
$\int \frac{1}{x^7-x}\;dx$
You can also avoid substitutions
$\int \frac{1}{x^7 - x}\,dx = \int \frac{1}{x\left( x^6 - 1 \right)}\,dx = \int \frac{x^6 - \left( x^6 - 1 \right)}{x\left( x^6 - 1 \right)}\,dx$.
So $\int \frac{1}{x^7 - x}\,dx = \frac{1}{6}\int \frac{\left( x^6 - 1 \right)'}{x^6 - 1}\,dx - \int \frac{1}{x}\,dx$.
The rest follows.
7. Hello, PaulRS!
That's brilliant! . . . Thank you!
8. Here's what I was getting at:
Because, $\frac{x^{5}}{x^{6}}=\frac{1}{x}$
Now, because $u=x^{6}-1, \;\ u+1=x^{6}, \;\ du=6x^{5}dx, \;\ \frac{du}{6}=x^{5}dx$
Rewrite as : $\int\frac{x^{5}}{x^{6}(x^{6}-1)}dx$
Make the subs and we get:
$\frac{1}{6}\int\frac{1}{(u+1)u}du=\frac{1}{6}\int\frac{1}{u^{2}+u}du$
See now?
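Wrapping up the thread (my own completion, not quoted from it): the two logs recombine as
$\frac{1}{6}\left[\int\frac{du}{u}-\int\frac{du}{u+1}\right]=\frac{1}{6}\ln\left|\frac{u}{u+1}\right|+C=\frac{1}{6}\ln\left|\frac{x^{6}-1}{x^{6}}\right|+C=\frac{1}{6}\ln\left|x^{6}-1\right|-\ln|x|+C,$
which can be checked by differentiating: $\frac{x^{5}}{x^{6}-1}-\frac{1}{x}=\frac{1}{x^{7}-x}$.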
http://mathhelpforum.com/algebra/154014-mult-imaginary-numbers.html | Math Help - mult. imaginary numbers
1. mult. imaginary numbers
5 - 2i
3 + 8i
for this problem i would like to know if i should mult. both the top and the bottom by
3 + 8i.
also i already know that (i) to the second power is equal to -1. but how would you work it out?
2. Originally Posted by allsmiles
5 - 2i
3 + 8i
for this problem i would like to know if i should mult. both the top and the bottom by
3 + 8i.
also i already know that (i) to the second power is equal to -1. but how would you work it out?
multiply numerator and denominator by the conjugate of the denominator ... (3 - 8i)
3. Originally Posted by allsmiles
5 - 2i
3 + 8i
for this problem i would like to know if i should mult. both the top and the bottom by
3 + 8i.
also i already know that (i) to the second power is equal to -1. but how would you work it out?
$\frac{5-2i}{3+8i} \cdot \frac{3-8i}{3-8i}$
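Carrying the hint one step further (my own continuation, not part of the original reply):
$\frac{5-2i}{3+8i}\cdot\frac{3-8i}{3-8i}=\frac{15-40i-6i+16i^{2}}{9-64i^{2}}=\frac{-1-46i}{73}=-\frac{1}{73}-\frac{46}{73}i$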
https://www.physicsforums.com/threads/lc-circuit-with-variable-capacitor.525124/ | # LC circuit with variable capacitor
1. Aug 28, 2011
### Piano man
1. The problem statement, all variables and given/known data
A constant voltage at a frequency of 1MHz is maintained across a circuit consisting of an inductor in series with a variable capacitor. When the capacitor is reduced from 300pF to 284pF, the current is 0.707 of its maximum value. Find
(i) the inductance and the resistance of the inductor, and
(ii) the Q factor of the inductor at 1MHz. Sketch the phasor diagram for each condition.
2. Relevant equations
$$\omega=\frac{1}{\sqrt{LC}}, \qquad v_L=L\frac{di}{dt}, \qquad v_C=\frac{q}{C}$$
Probably a load of other formulae as well.
3. The attempt at a solution
Using Kirchoff, get $$L\frac{di}{dt} + \frac{q}{300*10^{-12}}=0$$ for the first case and $$0.707L\frac{di}{dt}+\frac{0.707it}{284*10^{-12}}=0$$ for the second.
Equating the two, I'm still left with i and t and a feeling that I'm barking up the wrong tree.
Any help would be appreciated - it's ages since I've done these type of questions.
Thanks.
2. Aug 28, 2011
### Spinnor
Do you take into account the voltage generated by the resistance of the inductor?
Last edited: Aug 28, 2011
3. Sep 1, 2011
### rude man
Unfortunately, this problem as stated gives insufficient info.
As a matter of fact, one solution is Q = infinity, L = (√2/wC1 - 1/wC2)/w(√2-1) H
where C1 = 300 pF, C2 = 284pF, R = 0 and w = 2pi*1 MHz.
As Spinnor indicates, the same conditions can obtain when R (and therefore Q) are finite. What has to be specified in addition to the above is the phase of the current i0/√2. I happen to have picked 180 deg. I suppose one could object that I went from 0 to 180 which is -0.707i0 but remember if the detuning is to be ascribed to a finite Q, i0/√2 is not in phase with i0 either .... out 45 deg in fact.
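For completeness, here is the textbook-style reading of the problem, sketched under the assumption (not stated explicitly in the problem) that 300 pF tunes the circuit to resonance and 284 pF sits at a half-power point, where the net reactance equals the resistance:
$$\omega L=\frac{1}{\omega C_1},\qquad \frac{1}{\omega C_2}-\omega L=R,\qquad \omega=2\pi\times10^{6}\ \mathrm{rad/s},\ C_1=300\ \mathrm{pF},\ C_2=284\ \mathrm{pF},$$
$$L=\frac{1}{\omega^{2}C_1}\approx 84\ \mu\mathrm{H},\qquad R=\frac{1}{\omega}\left(\frac{1}{C_2}-\frac{1}{C_1}\right)\approx 30\ \Omega,\qquad Q=\frac{\omega L}{R}\approx 18.$$
Under that reading, the finite R is exactly what Spinnor's hint points to, while rude man's caveat about other interpretations still stands.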
http://www.transtutors.com/questions/a-downward-force-of-8-4n-is-exerted-on-a-8-8-956-c-charge-wha-423245.htm | +1.617.933.5480
# Q: A downward force of 8.4N is exerted on a -8.8μC charge. What are the magnitude and direction of the electric field at this point?
A downward force of 8.4N is exerted on a -8.8μC charge. What are the magnitude and direction of the electric field at this point?
Given Data force acting is F = 8.4N...
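A worked sketch of the expected answer (mine, not the site's hidden solution), using only E = F/|q|: E = 8.4 N / (8.8 × 10⁻⁶ C) ≈ 9.5 × 10⁵ N/C, and since the charge is negative the field points opposite to the downward force, i.e. upward.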
https://quant.stackexchange.com/questions/31136/is-value-at-risk-additive | # Is Value At Risk additive?
I have computed the value at risk of 2 different commodities.
Assuming they have not correlated, can I just sum the two standalone VaR to get my overall portfolio's VaR ?
• I have VAR for commodity 1 and VAR for commodity 2. Nov 23 '16 at 0:48
• What do you mean? Do you want to compute the VaR of holding 1 unit of both commodities?
– SRKX
Nov 23 '16 at 1:15
• Yes var of holding both Nov 23 '16 at 1:16
The answer to your question is no. Value at Risk is not additive, in the sense that $\text{VaR}(X+Y) \neq \text{VaR}(X) + \text{VaR}(Y)$ in general. But I guess your question is more aimed at finding a formula for your investments than at the property itself.
I think the only way to get a nice formula for this is to assume that both assets are:
• Normally distributed
• Have a mean equal to 0
• Are independent
Closed-Form value at risk for Normal variable
Mathematically the Value at Risk at a given level $\alpha$ is defined as:
$$\text{VaR}_\alpha(X) = \{ y ~ | ~ \mathbb{P}( X\leq y) = \alpha \}$$
If you can assume your variable $X$ is normally distributed such that $X \sim \mathcal{N}(\mu, \sigma^2)$, then you can re-express $X$ in terms of another variable $Z \sim \mathcal{N}(0,1)$: $X = \mu + \sigma Z$.
Using this, we can now rewrite the VaR definition as:
\begin{align} \text{VaR}_\alpha(X) &= \{ y ~ | ~ \mathbb{P}( \mu + Z\sigma\leq y) = \alpha\}\\ &= \left\{ y ~ | ~ \mathbb{P}\left( Z \leq \frac{ y- \mu}{\sigma} \right) = \alpha \right\}\\ &= \left\{ y ~ | ~ \Phi\left( \frac{ y- \mu}{\sigma} \right) = \alpha \right\}\\ \end{align}
where $\Phi(x)$ is the standard normal cumulative distribution function.
We can then find a closed-form formula to the value at risk of a normally distributed variable $X$:
$$\text{VaR}_\alpha(X) = \Phi^{-1}(\alpha) \cdot\sigma + \mu$$
Distribution of portfolio of two Normal variables
Now, let's assume your portfolio $Y$ holds two assets $X_1$ and $X_2$ (the two commodities in your example), which are uncorrelated ($\rho = 0$).
If you assume that both are normally distributed, $X_1 \sim \mathcal{N}(\mu_1,\sigma_1^2)$ and $X_2 \sim \mathcal{N}(\mu_2,\sigma_2^2)$, then we know that the portfolio can be expressed as:
\begin{align} Y &= wX_1 + (1-w)X_2\\ &= w(\mu_1 + \sigma_1 Z_1) + (1-w)(\mu_2 +\sigma_2 Z_2)\\ &= w\mu_1 + (1-w) \mu_2 + w \sigma_1 Z_1 + (1-w) \sigma_2 Z_2 \end{align}
Hence, we know that:
$$\mathbb{E}(Y) = w\mu_1 + (1-w) \mu_2$$
and
$$\text{Variance}(Y) = \sigma_Y^2 = w^2 \sigma_1^2 + (1-w)^2 \sigma_2^2$$
because your assets are independent.
As we know, the sum of two independent normally distributed variables is also normally distributed, hence: $$Y \sim \mathcal{N}(w\mu_1 + (1-w) \mu_2,\; w^2 \sigma_1^2 + (1-w)^2 \sigma_2^2)$$
Value-at-risk of the portfolio
Using the formula for value-at-risk for normal variable we found above, we can write:
\begin{align} \text{VaR}_\alpha(Y) &= \Phi^{-1}(\alpha) \sigma_Y + \mu_Y\\ \text{VaR}_\alpha(Y) &= \Phi^{-1}(\alpha) \sqrt{w^2 \sigma_1^2 + (1-w)^2 \sigma_2^2} + w\mu_1 + (1-w) \mu_2\\ \end{align}
If you assume that $\mu_1 = \mu_2 = 0$, then you get:
\begin{align} \text{VaR}_\alpha(Y) &= \Phi^{-1}(\alpha) \sqrt{w^2 \sigma_1^2 + (1-w)^2 \sigma_2^2}\\ \text{VaR}_\alpha(Y)^2 &= \Phi^{-1}(\alpha)^2 (w^2 \sigma_1^2 + (1-w)^2 \sigma_2^2)\\ \text{VaR}_\alpha(Y)^2 &= \Phi^{-1}(\alpha)^2 w^2 \sigma_1^2 + \Phi^{-1}(\alpha)^2 (1-w)^2 \sigma_2^2\\ \text{VaR}_\alpha(Y)^2 &= w^2 \text{VaR}_\alpha(X_1)^2 + (1-w)^2 \text{VaR}_\alpha(X_2)^2\\ \text{VaR}_\alpha(Y) &=\sqrt{ w^2 \text{VaR}_\alpha(X_1)^2 + (1-w)^2 \text{VaR}_\alpha(X_2)^2}\\ \end{align}
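A quick numerical sanity check of this last relation (my own sketch, not part of the original answer; it assumes the zero-mean, independent, normal setting above, uses made-up volatilities and weight, and relies on scipy's standard normal quantile):

```python
# Check: VaR_a(Y) = sqrt(w^2 VaR_a(X1)^2 + (1-w)^2 VaR_a(X2)^2)
# for independent, zero-mean, normally distributed assets.
import numpy as np
from scipy.stats import norm

alpha, w = 0.05, 0.6            # confidence level and portfolio weight (illustrative values)
sigma1, sigma2 = 0.02, 0.03     # asset return standard deviations (illustrative values)

z = norm.ppf(alpha)                          # Phi^{-1}(alpha), negative for alpha < 0.5
var1, var2 = z * sigma1, z * sigma2          # stand-alone VaRs (as quantiles, so negative here)
sigma_y = np.sqrt(w**2 * sigma1**2 + (1 - w)**2 * sigma2**2)

var_direct   = z * sigma_y                                       # VaR of the portfolio Y
var_combined = -np.sqrt(w**2 * var1**2 + (1 - w)**2 * var2**2)   # formula derived above

print(var_direct, var_combined)    # identical
print(w * var1 + (1 - w) * var2)   # naive weighted sum of VaRs: larger in magnitude
```

So simply adding (or weighting) the stand-alone VaRs overstates the portfolio VaR when the assets are uncorrelated.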
• A normal distribution is a very rare situation in practice. Is it possible to extend your answer to the general case?
– Nick
Nov 23 '16 at 7:23
• @Nick general case has no simple closed-form solution; it would depend on the joint distribution of the assets.
– SRKX
Nov 23 '16 at 7:36
• what is "w" in the above? Nov 23 '16 at 12:58
• I'm guessing w is the weight (%) of each asset I own. Nov 23 '16 at 13:04
• Yes that's correct. And your VaR is expressed as a percentage of your initial wealth.
– SRKX
Nov 23 '16 at 14:10
You need to square them, add the squares, and take the square root. (Variances are additive, not standard deviations.)
• VAR is the abbreviation of variation; VaR is the abbreviation of value at risk. In the title you wrote value at risk, in the body you used VAR. What did you mean?
– Nick
Nov 23 '16 at 3:37
• Value at risk yes. Sorry about abbreviation. Value at risk in title. Nov 23 '16 at 4:35
• @Nick I think VAR is more commonly used as Vector AutoRegression, though. Nov 23 '16 at 8:31
Well, if you are using historical VaR, you can add the results of each scenario and then calculate the percentile of the results... There is no other way.
No, because value at risk is not, in general, a coherent risk measure, as it does not always respect the sub-additivity property
$\rho(X + Y) \le \rho(X) + \rho(Y)$ for all $X, Y \in \mathcal{X}$; for $\text{VaR}$ there exist $X, Y$ with $\text{VaR}(X+Y) > \text{VaR}(X) + \text{VaR}(Y)$.
However, Conditional Value at Risk is. Check out Is Conditional Value-at-Risk (CVaR) coherent?
• This only shows that it would not work in all cases, but you don't show why it couldn't work under certain assumptions (here $\rho=0$).
– SRKX
Nov 23 '16 at 5:49
http://bicycles.stackexchange.com/questions/16253/why-arent-average-speeds-computed-over-distance/16256 | # Why aren't average speeds computed over distance?
Average speed is computed so that average speed = total distance / total time. This means that if you go up a hill, which takes a long time, your average speed drops. This isn't regained on the downhill, since going down takes so much less time.
If you compute the average over distance instead (i.e., your speed every tenth of a mile), the uphill and downhill are equally weighted, since they have the same number of samples. This means that your average is closer to what your speed would be on flat ground.
Your speed over flat ground seems more important to me. It doesn't depend on how hilly your route is, and that way you could compare cyclists more easily. Why don't we compute the average speed over distance instead?
Because that's the way it's done. Simple and easy to understand. You can always compute it the other way if you wish -- just learn how to program an Android. – Daniel R Hicks Jun 15 '13 at 20:41
I don't understand what you mean by computing it over distance. Could you give an example of what that equation would look like? – jimirings Jun 15 '13 at 21:07
@jimirings One useful definition: calculate the average pace and then calculate its reciprocal – anatolyg Jun 16 '13 at 21:33
To me, this would make sense in case of speeders in traffic, because the road dangers appear relative to distance and not to time. So if some bloke flies through an all-through-dangerous town, you wouldn't say anymore that he was causing danger for the very little time he was coming past the town, but that he actually caused danger all along the road. So speeders would get a more appropriate judgement. – Sam Sep 3 '14 at 21:15
Except you do regain it on the downhill, because you cover a great distance in a short amount of time. – whatsisname Sep 7 '14 at 19:22
This is an interesting point of view. Let's unpack this a bit.
Assume I have a ride that is 10 miles of flat, 10 miles of ascent, and 10 miles of descent.
On the flat I maintain a constant 20 miles/hour.
On the ascent I fall back to a constant 10 miles/hour.
On the descent I maintain a constant 30 miles/hour.
My average speed for this would be:
(10 miles + 10 miles + 10 miles) / (0.5 hour + 1.0 hour + .33 hour) = 16.39 miles/hour
By the proposed calculation I would have:
(100 tenth-mile samples * 20 miles/hour + 100 tenth-mile samples * 10 miles/hour + 100 tenth-mile samples * 30 miles/hour) / 300 samples = 20 miles/hour
If I had an average of 20 mph I should be able to complete the course in 1.5 hours. The trouble is that the course would actually take 1.83 hours to complete at the actual average of 16.39 mph. I know it hardly seems fair, since by all rights you did the vast majority of miles at 20+ mph. The caveat is that you spent by far the greatest amount of time at 10 mph.
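A small script that reproduces both numbers (my addition, not part of the original answer; the answer's 16.39 mph comes from rounding the descent time to 0.33 h):

```python
# 10 mi of flat at 20 mph, 10 mi of ascent at 10 mph, 10 mi of descent at 30 mph
segments = [(10, 20), (10, 10), (10, 30)]   # (miles, mph)

total_distance = sum(d for d, v in segments)
total_time = sum(d / v for d, v in segments)

true_average = total_distance / total_time                            # time-weighted
distance_weighted = sum(d * v for d, v in segments) / total_distance  # proposed method

print(round(true_average, 2), distance_weighted)   # 16.36 vs 20.0
```

The distance-weighted figure says 20 mph, but riding at a constant 20 mph would finish the course in 1.5 hours, not the 1.83 hours it actually takes.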
+1 because mathematics is always right. – Carey Gregory Jun 15 '13 at 22:59
Can you always assume that your speed difference is the same for uphill and downhill? You can say that the force is always the same (mgsin(theta)), but there's also wind resistance (and other factors, I'm willing to bet) to factor in. – Scott Jun 15 '13 at 23:05
@Scott - The speed difference could be the same, but the rider's power output could vary substantially in relation to a number of factors. The best metric is probably WTHarper's suggestion of work/distance or work/time. See the graph of resistance and velocity in their post. – Craig Bennett Jun 15 '13 at 23:23
@Scott - You definitely cannot assume that the speed difference is the same going up vs down. Likely nowhere close. I may average 15 mph on the flat, hit 30 going down a given hill, without even pedaling, and go up that same hill at 6 mph. And consider that, as a hill gets steeper, the uphill speed will eventually reach zero (ie, the cyclist simply cannot climb that hill). But downhill speed can easily exceed 50 mph. – Daniel R Hicks Jun 16 '13 at 1:53
This isn't worth adding another answer for, but speed is defined as the derivative of distance with respect to time: dx/dt. The average speed is almost the same formula: (end_x - start_x) / (end_t - start_t). No good can come from redefining well accepted mathematical definitions. – amcnabb Jun 16 '13 at 13:56
Your average speed is always going to be a measure of distance over a period of time. What I think you're trying to get at is accounting for your grade losses (i.e. riding up a hill), but it would be equally pertinent to include losses for wind resistance and friction as well. This would allow you to determine the amount of work per distance (or per time) for the trip, but calculating all of the variables which would verify your actual work is impractical for the average cyclist.
Products like Power Tap hubs allow you to generate torque and cadence data from your rides. That information would give you better insight into your speed with regards to sustained torque and cadence and it will also better illustrate where total resistance is increased or decreased (e.g. going up or down a hill.)
As far as taking average speed over a set distance, you're still averaging. It doesn't matter if your sample is 0.1 miles or 10 kilometers, it is still a measure of distance per time.
Here is some more information on calculating propulsion resistance. I don't remember how to do much calculus, but it is there for anyone better equipped.
+1 for identifying average power as the measure that the poster is looking for. – amcnabb Jun 16 '13 at 13:49
The main issue with what you're talking about is that from a mathematical perspective, 'average' already has a strict meaning, and in this context nearly always means 'arithmetic mean'.
https://en.wikipedia.org/wiki/Average
If you want to talk about another measure of speed, you have to drop the term 'average speed' at the very least. What your method seems to be referring to would be something like the average of multiple constant interval average speeds, and as such two people both reporting their AoMCIAS could vary wildly, even if they traveled side-by-side.
For example, why pick tenths of a mile? There are infinitely many choices of division length and units, such that two AoMCIAS numbers can't even be related (unless the distance has been mutually agreed upon before measurement collection). Good luck getting the U.S. to adopt the metric system or the rest of the world to adopt Imperial units, so at minimum you'd have U.S. bicyclists reporting it over 0.1 mile intervals and everyone else at, perhaps, 0.2 km intervals.
For that matter, why use length as the interval control? If you use a GPS, it's storing the data in semi-regular interval time units (depending on the unit and resolution setting), perhaps someone else tries to unify the U.S. with the rest of the world and proposes 10 second intervals as the new standard.
The result is that AoMCIAS speed numbers would not convey enough information in and of themselves, the way averages do. You'd have to report them as "24.56 miles/hour AoMCIAS over 0.1 mile intervals", and that value would vary so wildly by interval choice that it could ONLY be compared to other AoMCIAS speeds with the exact same interval. There wouldn't be a static conversion that could be done, either; you would need to completely resample the intervals from raw data, if it was even available.
All of this is completely independent of its relevance as a speed measure for bicycling (I have a mathematics degree, and don't time any of my bicycle rides). What I mean by this is: it's possible that you could devise a creative method such as AoMCIAS with an ideal interval distance or time such that the number reflects something more accurate about bicycling performance, and it may even be useful inside of the bicycling context (and likely ONLY the bicycling context). However, it will be of little to no value to anyone else, mathematically or even quick comparison-wise in colloquial speech. Two numbers could only really be compared with equivalent intervals, and any ability to do neat things quickly in your head with such values would be relegated to special calculators, computer programs, and perhaps some genius savants.
You can't compare it - that's a good reason. – Uooo Jun 19 '13 at 8:25
By computing your average speed over a distance, you basically sample the distance (a fixed delta S) and measure the time (t_i) each time you reach the defined sampling distance. The formula to compute your average speed would then be the mean of the per-segment speeds: average speed = (1/n) * sum over i of [ delta S / (t_i - t_(i-1)) ].

Then the problem starts. By decreasing your sampling distance, as previously proposed, to increase your "accuracy", you will reduce the time differences (t_i - t_(i-1)). Let's say you decrease your sampling distance towards zero: your time differences will tend towards 0 too, which leads to the mathematical problem of 0/0, which is indeterminate... The only correct average speed you can get from this formula is by choosing your sampling distance equal to the complete distance of your ride. You then have only one sample (n=1), and t_0 is your start time and t_1 your end time.

But if you want to "mathematically" increase your average speed, then you can apply this formula and choose a sampling distance which corresponds to your wish.
+1 Well put! This is indeed the right answer – Javier Jun 19 '13 at 18:55
Calculating the average speed after a tour is easy. You know the start time, the end time, and the length of your route. Therefore, you can find out the average speed.
Using "average over distance" approach: What if your track is so hilly that calculating every 10th of a mile is useless again? Then we would need to use every 100th of a mile. This will get more accurate, but will not be exact.
You could use a unit like "speed per meter". You would have to track the average speed of every meter then, which will be extremely difficult (even with assistance of a bike computer/smartphone). Still, every meter is an average speed, so, mathematically seen, it is still not exact. You would need the speed of an infinite small part of a meter to get the exact speed, which is impossible.
So, from my point of view, it is not done for two reasons:
1. Difficult to track
2. Mathematically incorrect
However, you can calculate anything the way you want. We will not tell anyone ;-)
There are other, non-exacting ways to go about this, and they're used often in devices like GPSes. They receive and store point data - latitude, longitude, elevation and time in chunks as fine or coarse as your unit and settings specify. From this, you can get "speed between each point", and this is one way they can calculate the total average, though the intervals are spaced by time units and not distance. For these devices it is both easy to track and mathematically as accurate as your device is calibrated to be. – Ehryk Jun 19 '13 at 8:19
@Ehryk as accurate as your device is calibrated - of course you can do that. But the device limits the accurateness (although it is still very exact). The question was Why aren't average speeds computed over distance? (in general), and this are the reasons I can think of. Nothing blocks you from calculating it differently, if you want to. – Uooo Jun 19 '13 at 8:22
Agreed, I just wanted to add that the points you have apply more to human tracking, and aren't much of a factor to modern electronics. – Ehryk Jun 19 '13 at 8:25
This is really a statistics / maths question rather than a bicycles question.
I think the sum you are proposing would be closer to the median speed than the mean (average). There are 3 key statistical measures that can all be useful:
Mean or average in the case of speed would be distance/time.
Let's say we take the average speed over every 1 minute of a 60-minute ride; the median is the speed at which 30 of the samples are below it and 30 are above.
The Mode is the most commonly occurring average speed.
A statistic is a just a mathematical tool for summarizing data so as to answer a particular question (or set of questions).
The usual definition of average speed relates time and distance, and helps to answer questions which crop up regularly in practice: "how long will it take me to get home?" "can I make it to the café before it closes?" "do I need to take lights on this ride?"
It's not clear to me that there are any practical questions that your statistic helps to answer.
It would at least likely match the algorithm used within Expresso exercise bikes when stating the speed of the "pacer". – Daniel R Hicks Jun 21 '13 at 1:03
Exactly. When we have a set of different data we can simplify by replacing them all with the average and still get the same result in some calculation that we care about. If someone travels at various speeds on a route we can simplify by replacing each speed with the average speed and we still get the same duration. – bdsl Jan 19 at 17:40
Indeed, your average speed is total distance over total time.
Regarding sampling this is where you go wrong:
"If you compute the average over distance instead (i.e., your speed every tenth of a mile), the uphill and downhill are equally weighted."
No, they are not equally weighted. The denominator is time (not distance). You need to take evenly spaced samples in time. You cannot sample over distance if you want to average the samples.
20 miles up the hill and 20 miles down the hill.
Up assume 10 mph and down assume 30 mph.
First total distance over total time
Total distance is 40 miles
Total time is 20 miles / 10 mph + 20 miles / 30 mph = 2 hours up + 2/3 hour down = 8/3 hours
Average speed = total distance over total time = 40 miles / (8/3 hours) = 120/8 mph
= 15 mph
The average speed is not (10 + 30) / 2 = 20 mph, because you spent more time at 10 mph: three times as long at 10 mph as at 30 mph.
If you sampled every mile, then indeed you would get the wrong answer of 20 mph for the average.
But if you sampled every minute you would get the right answer.
Up the hill you would have 120 samples at 10 mph and down the hill you would have 40 samples at 30 mph.
[(120 * 10) + (40 * 30)] / (120 + 40) = (1200 + 1200) / 160 = 2400 / 160 = 15.
If you want to average the samples, then the sampling needs to be based on the denominator (time).
But it is easier to just use total distance over total time.
If you had a magic hill that was 10 miles up and 30 miles down then your average speed would be the average as you would spend the same amount of time going up as down.
You can use any numbers you want for up versus down the hill, say 18 mph up and 20 mph down. The average speed will not be (speed up + speed down) / 2, because you will spend more time at the lower speed.
Average speed calculation:

d is the total distance; su is the speed up and sd is the speed down. The hill is the same distance up as down, so each leg is d/2.

average speed = total distance / total time

= d / ( (d/2)/su + (d/2)/sd )

= d / ( d*(su + sd) / (2*su*sd) )

= 2*su*sd / (su + sd)

In statistics this is called the harmonic mean.

Try 10 and 30 and get 15.

Try 20 and 20 and get 20.

Try 18 and 20 and get about 18.95 (not 19).
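A two-line check of that formula (my addition):

```python
# Harmonic mean of the up and down speeds, for equal distances each way
def average_speed(su, sd):
    return 2 * su * sd / (su + sd)

print(average_speed(10, 30))   # 15.0
print(average_speed(20, 20))   # 20.0
print(average_speed(18, 20))   # 18.947..., noticeably below the simple average of 19
```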
https://stats.stackexchange.com/questions/468829/interpretation-lmer-output-with-interaction | # Interpretation lmer output with interaction
I have fitted this model:
M1 <- lmer(RNA_V ~ time + GENDER + time:GENDER + (time | PATIENT), df, REML=T)
I need to know whether the two genders have different viral decay (the relation between time and RNA) in a longitudinal data frame with several RNA measurements and times for each subject.
The results (the relevant fixed-effect estimates) were: time = -0.16462, GENDER2 = -0.32416, time:GENDER2 = 0.07643 (p = 0.058 for the interaction).
Considering that the association between time and RNA is negative (with more time elapsed, less RNA): does the GENDER==2 condition increase the coefficient of this association by 0.07 (almost significant, p=0.058)? So GENDER==2 presented a sharper (more negative) viral decay curve; am I right?
• time = -0.16462 denotes how much the average RNA viral load changes (here decreases) with unit of time for subjects with GENDER = 1.
• GENDER2 = -0.32416 denotes the difference in average RNA viral loads between subjects with GENDER = 2 and GENDER = 1.
• time:GENDER2 = 0.07643 denotes the difference between subjects with GENDER = 2 and GENDER = 1 in the average change of RNA viral load per unit of time. That is, the average RNA viral load changes with time + time:GENDER2 = - 0.16462 + 0.07643 per unit of time for subjects with GENDER = 2.
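A quick way to see what those estimates imply for each group (my own arithmetic on the coefficients quoted above, not extra model output):

```python
# Slopes of RNA_V versus time implied by the fixed-effect estimates
b_time = -0.16462          # slope for GENDER = 1
b_interaction = 0.07643    # time:GENDER2

slope_gender1 = b_time                   # -0.16462 per unit of time
slope_gender2 = b_time + b_interaction   # -0.08819 per unit of time

print(slope_gender1, slope_gender2)
```

Both slopes are negative, so both genders show viral decay, but the positive interaction means GENDER = 2 has the shallower (less negative) decay, not the sharper one.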
https://us.sofatutor.com/mathematics/videos/order-of-operations | # Order of Operations
Author: Susan Sayfan
## Description: Order of Operations
The order of operations (operator precedence) helps you with simplifying expressions or equations.
Maybe you wonder what you should do first: the calculation inside the parentheses (brackets), or solving exponents (indices, powers, orders) and square roots (radicals)? And what about multiplication, division, addition, and subtraction? Do you multiply numbers before you add? And in what order do you divide or subtract?
The order of operations gives you easy instructions of which operator has precendence if you have to handle expressions and equations. A mnemonic device for the order of operations is the acronym PEMDAS (BODMAS, BEDMAS, BIDMAS). To memorize it better you can make a sentence with those letters, such as "Please Excuse My Dear Aunt Sally."
In math, as well as in real-life situations, the order of operations is a necessary tool for all basic calculations and it is very easy to follow. You will need it for every following math topic, as well as in many science topics and in everyday life.
This video shows you situations that are common in daily life where order is relevant and shows you step by step how to transfer this knowledge into math. You will learn the precendence of operators in a funny, entertaining way, and will be guided through simplifying complex expressions the easy way. After watching this video, you will forever remember the right order of operations as you connect it with a great mnemonic device.
Write expressions in equivalent forms to solve problems. CCSS.MATH.CONTENT.HSA.SSE.B.3
### Transcript: Order of Operations
The Order of Operations. The ORDER of OPERATIONS! Yesterday, my dear Aunt Sally did everything in the wrong order. Look, she put her underpants above her skirt! She sent us to school. Then, she made our breakfast after we left. Later she baked cookies, but added the eggs in after the cookies came out.
My dear Aunt Sally did everything in the wrong order! As you can see, order is very important in everyday life. It is also important in math. Solving math problems is like following a recipe. You must follow the recipe for the Order of Operations, or PEMDAS to simplify expressions.
### Steps in PEMDAS
• The first step in the Order of Operations is P for Parentheses. All expressions inside the parentheses should be evaluated first.
• E stands for Exponents. Exponents should be evaluated second.
• The next step is M and D, which stands for Multiplication and Division. After parentheses and exponents have been evaluated you should multiply and divide, working from left to right.
• Finally, A and S mean Addition and Subtraction. They represent the last step in the Order of Operations. The rule about solving from left to right also applies to Addition and Subtraction.
Ok let’s evaluate some expressions. We’ll start with an easy one. You will see that following PEMDAS will always lead us to the right answer!
### Calculation Example 1 & 2 using PEMDAS
First let’s look at two similar expressions: 8 - 2 + 5 and 8 – (2 + 5). The only difference between them is the use of parentheses. The first expression only has addition and subtraction so you should perform the operation in order from left to right: 8 − 2 = 6 and 6 + 5 = 11.
The second expression has parentheses. In PEMDAS the P for Parentheses comes first. So, the Order of Operations tells you to evaluate the inside of the parentheses first: 2 + 5 = 7 and 8 − 7 = 1. Although these problems seem similar, they have two different answers.
### Calculation Example 3 using PEMDAS
Okay, let's try a harder problem. This one has parentheses, exponents, addition, and subtraction! Parentheses come first. 8 − 2 gives you 6 and 5 + 2 gives you 7. Next comes Exponents: 6² = 36. Finally, you add 36 + 7 = 43.
### Calculation Example 4 using PEMDAS
Now it's time to get even more tricky! Look at how many operations we are using! This expression has Parentheses, Exponents, Multiplication, Division, Addition, AND subtraction!
First you should be looking at the Parentheses. Inside you have 8 ÷ 2 − 2. Once inside the parentheses you have to use PEMDAS again. Division comes before subtraction so you must divide 8 by 2 before subtracting 2. Now you have 4 − 2. Resulting in 2. In the other parentheses you must evaluate the exponent before adding. You will need to square 5 before adding 2: 5² = 25, 25 + 2 will leave us with 27.
This problem is already looking better since we have taken care of the parentheses. The next step is E for exponents. 2 cubed gives you 8. Now we do multiplcation and division moving from left to right: 4 · 8 = 32 and 27 ÷ 9 = 3. The last step is Addition! 32 + 3 = 35. See? We started with this big expression, but by following the rules of PEMDAS we are able to simplify the expression to get 35!
### PEMDAS Mnemonic
No matter how difficult the expression looks: Simply follow PEMDAS to get things done! You can remember PEMDAS with this sentence: Please excuse my dear Aunt Sally! So, please excuse my dear Aunt Sally. She was a little bit confused yesterday.
## Order of Operations Exercise
Would you like to practice what you’ve just learned? Practice problems for this video Order of Operations help you practice and recap your knowledge.
• ### Using the correct order of operations, determine the right recipe.
Hints
The expressions $8 - 2 + 5$ and $8 - (2 + 5)$ seem similar. The only difference is the use of parentheses.
The first expression $8 - 2 + 5$ equals $11$. The second has the result $8 - (2 + 5) = 1$.
The last step is Addition & Subtraction. Always solve the equation from left to right.
Solution
As you know, order is a very important matter in everyday life.
You might be wondering why we need an order of operations in math, too.
If the Order of Operations didn't exist, we wouldn't be sure which operation to do first...and this could have a large effect on the answer.
What kind of operations are covered in the Order of Operations?
• Parentheses
• Exponents
• Multiplication & Division
• Addition & Subtraction
As you can see, we have combined the operations Multiplication & Division into one step and Addition & Subtraction into another. If only Multiplication and Division remain to be solved (no Addition or Subtraction), it is very important to perform the operations in the order they appear from left to right.
By taking the first letter of each of our operations, we get PEMDAS - this way you will never forget the correct Order of Operations again!
• ### Correctly simplify the expression $(8 - 2)^2 + (5 + 2)$.
Hints
When you have simplified parentheses to a single number, you do not have to continue to write the parentheses.
Remember PEMDAS when evaluating the expressions.
Parentheses and Exponents come before Addition & Subtraction.
Solution
We want to simplify the expression $(8 - 2)^2 + (5 + 2)$. At first, it's helpful to recognize which operations are included in our expression. We can see:
• two sets of parentheses
• one exponent
• and subtraction
As we know, it's very important to evaluate the expression in the correct order. PEMDAS tells us how to evaluate the expression correctly. We simply have to follow the Order of Operations to arrive at the correct answer:
1. Parentheses
2. Exponents
3. Multiplication & Division
4. Addition & Subtraction
We take a look at the parentheses first. We can simplify the first parentheses from $(8 - 2)$ to $6$ and the second parentheses from $(5 + 2)$ to $7$, leaving us with $6^2+7$. As you can see, we have left out the parentheses. We can do that because we do not need them anymore.
Now, we can simplify the exponent: $6^2 + 7 = 36 + 7$. Finally, we add the remaining numbers, leaving us with $36 + 7 = 43$.
• ### Correctly simplify the expression and help Timothy figure out Sarah's number.
Hints
You can evaluate this expression by following the rules of PEMDAS.
When evaluating Multiplication & Division or Addition & Subtraction you must evaluate the operations in the order they appear in the equation from left to right.
Solution
Sarah had a cool idea. By encoding the last digits of her number, she has given Timothy a really difficult challenge.
But she didn't know that Timothy would be able to follow the rules of PEMDAS. He was able to simplify the expressions and find out the last three digits!
This is what he evaluated:
$\begin{array}{rcl} (2 \times 3)^2+1+800-(14-10)^2 & \overset{P}{=} & 6^2+1+800- 4^2\\ & \overset{E}{=} & 36 +1+800- 16\\ & \overset{A,S}{=} & 821 \end{array}$
Sarah's complete number is $1-713-555-1821$. Lucky Timothy.
• ### Find out how many eggs and how much flour Sally needs for her cookie recipe.
Hints
Evaluate Parentheses before Exponents.
When solving a problem, you can often simplify expressions by using PEMDAS.
Start with the Parentheses. Evaluate the Subtraction operators outside the parentheses in both expressions last.
You have to follow the rules of PEMDAS inside the parentheses, too.
Solution
What we have here is a really tasty recipe for cookies. But someone has substituted the amounts for eggs and flour with some longer mathematical expressions. We have to evaluate the expressions before we can go on baking. Let us take a look and see how many eggs we need:
$\begin{array}{rcl} (8-\frac{15}{5} \times 2)^3-\frac{(2+3)^2}{25} & \overset{D, A}{\longrightarrow} & (8-3 \times 2)^3-\frac{5^2}{25}\\ & \overset{M}{\longrightarrow} & (8-6)^3-\frac{5^2}{25}\\ & \overset{P}{\longrightarrow} & 2^3- \frac{5^2}{25}\\ & \overset{E}{\longrightarrow} & 8 - \frac{25}{25}\\ & \overset{D}{\longrightarrow} & 8 - 1\\ & \overset{S}{\longrightarrow} & 7 \end{array}$
We solve the parentheses first. Inside of the left parentheses, we have to simplify the fraction first. Then we multiply $3$ and $2$ and finally subtract the two numbers. It's very important to follow the rules of PEMDAS inside parentheses, too.
After evaluating the parentheses, we should move on to the exponents. Now we can simplify the last fraction on the right side before subtracting in the end. That leaves us with $7$ eggs!
And how much flour do we need? Let's solve the last problem:
$\begin{array}{rcl} \frac{(6+2)^2}{4^2} \times 3 - \frac{(\frac42) ^3}{2^2} \times 3 & \overset{A,D}{\longrightarrow} & \frac{8^2}{4^2} \times 3 - \frac{2 ^3}{2^2} \times 3\\ & \overset{E}{\longrightarrow} & \frac{64}{16}\times 3 - \frac84 \times 3\\ & \overset{D}{\longrightarrow} & 4 \times 3 - 2 \times 3\\ & \overset{M}{\longrightarrow} & 12 - 6 \\ & \overset{S}{\longrightarrow} & 6 \end{array}$
To solve this expression, we evaluated the Parentheses first. Then we evaluated the Exponents, followed by Multiplication and Division. Finally, we subtracted from left to right. We found out that we need $6$ cups of flour.
Thanks to PEMDAS, we can now start baking!!!
• ### Identify the mnemonic used to remember PEMDAS.
Hints
Look at the initials of the words.
Which letters are in the acronym PEMDAS?
Solution
With PEMDAS you can simplify expressions: you simply have to follow the rules.
First evaluate Parentheses, then Exponents, followed by Multiplication and Division and finally you evaluate Addition and Subtraction.
Remember to always evaluate Multiplication and Division as well as Addition and Subtraction from left to right!
You can remember this order with a funny mnemonic:
Please Excuse My Dear Aunt Sally.
• ### Simplify the expressions by using the order of operations.
Hints
PEMDAS stands for:
• Parentheses
• Exponents
• Multiplication & Division
• Addition & Subtraction
$\begin{array}{rcl} 75 - 2 \times \left( 3 + \frac{\left(3+24 \right)}{9} \right)^2 & \overset{P,P,A}{\longrightarrow} & 75 - 2 \times \left( 3 + \frac{27}{9} \right)^2\\ & \overset{P,D}{\longrightarrow} & 75 - 2 \times \left( 3+3 \right)^2\\ & \overset{P,A}{\longrightarrow} & 75 - 2 \times 6^2\\ & \overset{E}{\longrightarrow} & 75 - 2 \times 36\\ & \overset{M}{\longrightarrow} & 75 - 72\\ & \overset{S}{\longrightarrow} & 3 \end{array}$
https://astronomy.stackexchange.com/questions/20840/what-leads-to-increase-in-opacity-in-kappa-mechanism | # What leads to increase in opacity in kappa mechanism?
I understand that the kappa mechanism (that leads to star variability) causes an increase of opacity with increasing temperature in partial ionization zones. However, I'm not sure I understand what causes the increasing opacity with temperature.
Until recently, I thought that the increasing opacity was due to increasing Thomson opacity caused by the increasing ionization fraction with temperature in partial ionization zones. However, some things I read suggested that that was not the cause. So I wanted to ask, what is the cause of increasing opacity with temperature in partial ionization zones?
https://byjus.com/physics/displacement-periodic-function/ | # Displacement as a function of time and Periodic function
To understand this idea of displacement as a function of time, we will derive an expression for displacement. Assume a body travels at an initial velocity of $V_1$ at time $t_1$, then accelerates at a constant acceleration $a$ for some time, reaching a final velocity of $V_2$ at time $t_2$. Keeping these assumptions in mind, let's derive the following.
Let’s write displacement as,
$d$ = $V_{average}*\Delta t$
Where $\Delta t$ is the change in time, assuming that the object is under constant acceleration.
$d$ = $(\frac{V_1 + V_2}{2})* \Delta t$
Where $V_2$ and $V_1$ are final and initial velocities respectively, let’s rewrite final velocity in terms of initial velocity for the sake of simplicity.
$d$=$(\frac{V_1 + (V_1~+~a*\Delta t)}{2})* \Delta t$
Where $a$ is the constant acceleration the body is moving at, now if we rewrite the above as,
$d$ = $(\frac{2*V_1}{2} + \frac{a*\Delta t}{2})*\Delta t$
The above expression is one of the most fundamental expressions in kinematics, it is also sometimes given as
$d$ = $V_{i} t~+~\frac{1}{2}at^2$
Where $V_i$ is the initial velocity and $t$ is actually the change in time, all the quantities in this derivation like Velocity, displacement and acceleration are vector quantities.
If we have a look at our final expression, it is very clear that change in displacement depends upon time.
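A small numerical illustration of that final expression (my addition; the numbers are arbitrary):

```python
# Displacement under constant acceleration: d = V_i * t + (1/2) * a * t^2
def displacement(v_initial, acceleration, t):
    return v_initial * t + 0.5 * acceleration * t ** 2

# A body starting at 4 m/s and accelerating at 2 m/s^2
for t in (0.0, 1.0, 2.0, 3.0):
    print(t, displacement(4.0, 2.0, t))   # 0.0, 5.0, 12.0, 21.0 metres
```

The displacement grows faster than linearly with time because of the $\frac{1}{2}at^2$ term.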
## Example of an oscillating pendulum
Now if we draw a displacement-time graph of this oscillating pendulum bob, we will get something like this,
The graph of the motion of the bob shows us how the displacement varies with respect to time. When the bob reaches its highest point, the potential energy is maximum and the kinetic energy is minimum, since the velocity is equal to zero; we can deduce that the velocity is zero by looking at the slope of the displacement-time graph at specific times.
Slope A is positive in the graph stating that the velocity of the body is also positive, or in the forward direction.
Slope B is equal to zero meaning that the velocity of the body is zero, meaning there is no motion at this point.
Slope C is negative in the graph stating that the velocity of the body is also negative, or in the reverse direction.
The magnitude of velocity is always positive, it is the direction that decides if the velocity is positive or negative.
The displacement of the bob repeats after a certain amount of time; it is a periodic function. Hence the displacement at any given time in the future can be predicted if we know the time and the time period of the pendulum, and so we can say that displacement is a function of time.
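This predictability is easy to demonstrate for an idealised small-angle pendulum, where the displacement is approximately sinusoidal (a sketch with made-up amplitude and time period; it assumes that idealisation rather than anything specific to the graph above):

```python
import math

def pendulum_displacement(t, amplitude=0.1, period=2.0):
    """Idealised small-angle pendulum: x(t) = A * cos(2*pi*t / period)."""
    return amplitude * math.cos(2 * math.pi * t / period)

# Because the motion is periodic, the displacement repeats every full period,
# so a future displacement can be predicted from the time and the time period.
print(pendulum_displacement(0.3))
print(pendulum_displacement(0.3 + 2.0))    # one period later: same displacement
print(pendulum_displacement(0.3 + 20.0))   # ten periods later: same displacement
```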
https://physics.stackexchange.com/questions/338789/how-many-compactifications-of-minkowski-space-are-possible | # How many compactifications of Minkowski space are possible?
Given an $n$-dimensional Minkowski space, how many spacetime manifolds can one obtain by some compactification of it?
In two dimensions, I can think of those, at least :
• The spacelike cylinder $\Bbb R_t \times S_x = \Bbb R^2 / \Bbb Z$
• The timelike cylinder $S_t \times \Bbb R_x = \Bbb R^2 / \Bbb Z$
• The torus $S_t \times S_x = \Bbb R^2 / \Bbb Z^2$
• The Klein bottle $\mathbb K = \Bbb R^2 / \sim$, $\sim$ the usual identification (I'm guessing it comes in both timelike and spacelike flavors)
• Misner space $\Bbb R^2 / B$, $B$ a Lorentz boost
• The timelike Moebius strip
• The spacelike Moebius strip
• The non-time orientable cylinder obtained by $(\Bbb R^2 / \Bbb Z) / (I \times T)$, $I$ some involution and $T$ time reversal
There are probably a bunch more, but those are the ones that pop to mind. The list seems fairly long, but I'm not really sure how to generalize it to any dimension.
I'm guessing that since they can all be expressed as quotients, that is the thing to look into: the full list of discrete, proper, free group actions that leave Minkowski space invariant. But there are of course infinitely many of those. What classes can be found that will lead to distinct spacetimes?
There seem to be at least the translations along spacelike and timelike directions (and null ones, I suppose), the same but with a change of orientation, and some involutions for the already compactified spaces. Are those all the possible ones?
• I think you might want to be more precise/explicit in specifying when you consider compactifications as equivalent (Riemannian structure? smooth structure?) – Danu Jun 11 '17 at 19:26
• Being isometric, I suppose. Maybe up to finitely many continuous parameters (corresponding to the characteristic dimension of that compactification) – Slereah Jun 11 '17 at 19:30
https://asmedigitalcollection.asme.org/FEDSM/proceedings-abstract/FEDSM2002/36150/445/298001 | Numerical simulation of transient cavitating flow in a axisymmetric nozzle was conducted in order to investigate the detailed motion of cavitation bubble clouds which may be dominant to atomization of a liquid jet. Two-way coupled bubble tracking technique was assigned in the present study to predict the unsteady cloud cavitation phenomena. Large Eddy Simulation (LES) was used to predict turbulent flow. Calculated pressure distribution and injection pressure were compared with measured ones. Then, calculated motion of cavitation bubble clouds was carefully investigated to understand the cavitation phenomena in a nozzle. As a result, the following conclusions were obtained: (1) Calculated result of pressure distribution along the wall, the relation between injection pressure vs. flow rate, and bubble distribution agreed with existing experimental result. (2) Cavitation bubble clouds were periodically shed from the tail of vena contracta, which usually formed by the coalescence of a few small bubble clouds. (3) Collapse of cavitation bubbles due to the re-entrant jet was observed in the numerical simulation.
https://planetmath.org/SpecialElementsInARelationAlgebra | special elements in a relation algebra
Let $A$ be a relation algebra with operators $(\vee,\wedge,\ ;,^{\prime},^{-},0,1,i)$ of type $(2,2,2,1,1,0,0,0)$. Then $a\in A$ is called a
• function element if $a^{-}\ ;a\leq i$,
• injective element if it is a function element such that $a\ ;a^{-}\leq i$,
• surjective element if $a^{-}\ ;a=i$,
• reflexive element if $i\leq a$,
• symmetric element if $a^{-}\leq a$,
• transitive element if $a\ ;a\leq a$,
• subidentity if $a\leq i$,
• antisymmetric element if $a\wedge a^{-}$ is a subidentity,
• equivalence element if it is symmetric and transitive (not necessarily reflexive!),
• domain element if $a\ ;1=a$,
• range element if $1\ ;a=a$,
• ideal element if $1\ ;a\ ;1=a$,
• rectangle if $a=b\ ;1\ ;c$ for some $b,c\in A$, and
• square if it is a rectangle where $b=c$ (using the notations above).
These special elements are so named because they correspond to binary relations on a set with the analogous properties. The following table shows the correspondence.
| element in relation algebra $A$ | binary relation on set $S$ |
| --- | --- |
| function element | function (on $S$) |
| injective element | injection |
| surjective element | surjection |
| reflexive element | reflexive relation |
| symmetric element | symmetric relation |
| transitive element | transitive relation |
| subidentity | $I_{T}:=\{(x,x)\mid x\in T\}$ where $T\subseteq S$ |
| antisymmetric element | antisymmetric relation |
| equivalence element | symmetric transitive relation (not necessarily an equivalence relation) |
| domain element | $\operatorname{dom}(R)\times S$ where $R\subseteq S^{2}$ |
| range element | $S\times\operatorname{ran}(R)$ where $R\subseteq S^{2}$ |
| ideal element | |
| rectangle | $U\times V\subseteq S^{2}$ |
| square | $U^{2}$, where $U\subseteq S$ |
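To make the correspondence concrete, here is a small sketch (my addition, not part of the entry) that checks a few of the defining conditions for an explicit binary relation on a finite set, using relational composition for $;$ and converse for $^{-}$:

```python
# Check a few of the abstract conditions for a concrete relation R on S = {1, 2, 3}.
def compose(R, Q):
    """Relational composition R ; Q = {(x, z) : exists y with (x, y) in R and (y, z) in Q}."""
    return {(x, z) for (x, y1) in R for (y2, z) in Q if y1 == y2}

def converse(R):
    """Converse R^- = {(y, x) : (x, y) in R}."""
    return {(y, x) for (x, y) in R}

S = {1, 2, 3}
identity = {(x, x) for x in S}     # the element i
R = {(1, 2), (2, 2), (3, 1)}       # a function on S, but not symmetric or transitive

print(compose(converse(R), R) <= identity)   # True:  R^- ; R <= i  (function element)
print(converse(R) <= R)                      # False: R^- <= R      (symmetric element)
print(compose(R, R) <= R)                    # False: R ; R <= R    (transitive element)
```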
http://psychology.wikia.com/wiki/Spatial_frequency?direction=prev&oldid=105480 | # Spatial frequency
In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often the structure repeats per unit of distance. The SI unit of spatial frequency is cycles per meter. In image processing applications, the spatial frequency is often measured in lines per millimeter; since a millimeter is 1000 times smaller than a meter, 1 line per millimeter corresponds to 1000 cycles per meter.
In wave mechanics, the spatial frequency $\nu \$ is related to the wavelength $\lambda \$ by
$\nu \ = \ { 1 \over \lambda }$
Likewise, the wave number k is related to spatial frequency and wavelength by
$k \ = \ 2 \pi \nu \ = \ { 2 \pi \over \lambda }$
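A tiny numerical illustration of these two relations (my addition; the wavelength value is arbitrary):

```python
import math

wavelength = 0.01            # metres (arbitrary example)
nu = 1 / wavelength          # spatial frequency: 100 cycles per metre
k = 2 * math.pi * nu         # wave number: ~628.3 radians per metre

print(nu, k)
```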
### Visual perception
In the study of visual perception, sinusoidal gratings are frequently used to probe the capabilities of the visual system. In these stimuli, spatial frequency is expressed as the number of cycles per degree of visual angle.
https://physics.stackexchange.com/questions/333856/what-is-the-physics-behind-the-energy-transferred-in-a-rotating-collision | What is the physics behind the energy transferred in a 'rotating collision'?
In this video, we can see that rotating the paper somehow makes it less susceptible to being damaged. I can tell that the paper has some angular momentum. But how do we explain this phenomenon in terms of energy transfer? It sort of reminds me of how, using simple equations (for momentum and energy), we can show that objects moving faster in a linear collision have less energy transferred to them. Is there an analogous description for 'rotating collisions'?
Moreover, would it be correct to say that the paper becomes harder when spinning very fast?
• I think it's less about "the paper takes less damage" and more like the "centrifugal" force makes the paper more rigid. A similar trick is to jam a straw into a potato: the straw buckles. But if you put your thumb over the end of the straw it is more successful because the trapped air in the straw adds rigidity. The limp paper was always sharp enough and strong enough to cut things, it just can't maintain its shape unless it is spinning. May 17 '17 at 20:27
https://dsp.stackexchange.com/questions/44281/will-the-capacity-of-a-channel-becomes-unbounded-if-i-increase-its-signal-to-noi/44284 | # Will the capacity of a channel become unbounded if I increase its signal-to-noise ratio $S/N$ without limit?
According to the Shannon-Hartley theorem, the capacity $C$ of a channel with signal-to-noise ratio $S/N$ and bandwidth $B$ is $C = B \log_2 \left( 1 + \frac{S}{N}\right)$. Here, if $B \to \infty$, the capacity does not become infinite since, with an increase in bandwidth, the noise power also increases. If the noise power spectral density is $N_0/2$, then the total noise power is $N=N_0B$, so the Shannon-Hartley law becomes \begin{align} C &= B \; \log_2 \left( 1 + \frac{S}{N_0B}\right) \\ &=\frac{S}{N_0}\left(\frac{N_0B}{S}\right)\log_2\left( 1 + \frac{S}{N_0B}\right) \\ &=\frac{S}{N_0}\log_2\left( 1 + \frac{S}{N_0B}\right)^{\left(\frac{N_0B}{S}\right)}. \end{align} Now $$\lim_{x\to0}(1+x)^{1/x}=e,$$ so the capacity approaches $$C_\infty=\lim_{B\to\infty} C=\frac{S}{N_0}\log_2e=1.44\frac{S}{N_0},$$ so here the channel capacity does not become infinite; that is, it is bounded.
But what happens if we increase the signal to noise ratio without bound? Will that give unbounded capacity? Is it possible?
After searching I found this, but I don't know how it is possible; can someone justify it?
• Can you explain how you obtain $C_{\infty}$? From your formula for $C$, it looks like, for fixed $S/N$, letting $B \rightarrow \infty$ implies $C \rightarrow \infty$. – MBaz Oct 10 '17 at 13:19
• @MBaz Actually I found the explanation for that on the internet... I have edited my question... please check – Rohit Oct 10 '17 at 13:47
• In addition to the intuitive explanations below, you can see it as a simple consequence of the math: the logarithm is a monotonically increasing function, so for $W>0$, $W\log(x)\to\infty$ as $x\to\infty$. – MBaz Oct 10 '17 at 14:31
• An interesting mathematical result, but in the real world, general relativity (black holes) and QM will limit the S/N range. – hotpaw2 Oct 10 '17 at 15:26
• $$C = B \log_2 \left( 1 + \frac{S}{N}\right)$$ Herein if $B→\infty$, the capacity does not become infinite since, with an increase in bandwidth,the noise power also increases. ----- so what is preventing the signal power $S$ from increasing also as $B$ increases? in fact, in this Shannon Channel capacity formula, the noise power is defined to be fully within the bandwidth $B$. the more general channel capacity formula is: $$C = \int\limits_{0}^{B} \log_2 \left( 1 + \frac{S(f)}{N(f)}\right) \, df$$ so now $S(f)$ and $N(f)$ are the spectral densities of signal and noise at $f$. – robert bristow-johnson Oct 11 '17 at 3:42
There's a problem in the derivation for $C_\infty$.
Even though its mechanics is a quite simple limiting process, the derivation rests on a partially true observation and yields a misleading result. It correctly notes that when the bandwidth $B$ goes to infinity, so does the total noise power $N$; however, it also assumes that the total signal power $S$ stays fixed, which need not be the case: $S$ could grow along with $B$, in which case the SNR $\frac{S}{N}$ would remain fixed. With $S$ held fixed, the SNR goes to zero as $B$ goes to infinity, which leads to the indeterminate form $$C_\infty = B_\infty \times \log_2(1 + \mathrm{SNR}_\infty) = \infty \times \log_2(1) = \infty \times 0$$
From this indeterminate form one then finds the limiting value $$C_\infty = 1.44 \frac{S}{\eta},$$ where $S$ is the finite signal power transmitted through the infinite-bandwidth channel and $\eta$ is the noise power spectral density.
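As a quick numerical sanity check (a minimal sketch added here; the signal power and noise density values are arbitrary assumptions, not taken from the question), one can watch $B\log_2(1+S/(N_0B))$ approach $1.44\,S/N_0$ as $B$ grows:

```python
import numpy as np

S = 1.0      # assumed total signal power in watts (arbitrary)
N0 = 1e-3    # assumed noise power spectral density in W/Hz (arbitrary)

# Shannon-Hartley capacity for a set of increasing bandwidths
for B in [1e2, 1e4, 1e6, 1e8]:
    C = B * np.log2(1.0 + S / (N0 * B))
    print(f"B = {B:>8.0e} Hz  ->  C = {C:10.2f} bit/s")

# Limiting value C_inf = (S/N0) * log2(e) ~ 1.44 * S/N0
print("limit:", (S / N0) * np.log2(np.e), "bit/s")
```

With these assumed values ($S/N_0 = 1000$) the capacities climb toward roughly $1443$ bit/s, matching $\frac{S}{N_0}\log_2 e$.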
You can easily see that if you had infinite bandwidth $B$ (and were not constrained to a finite total signal power), you could simultaneously transmit infinitely many different message signals $m_k(t)$, each with a finite nonzero bitrate $R_k$ (for example by using a simple FDM scheme); these would add up to infinitely many bits per second at once, which makes the information capacity go to infinity.
Another consequence of SNR is the following observation: given any analog channel with zero noise, its information capacity is infinite. Proof sketch: consider a system where you send an analog voltage level and the receiver converts it into a digital bitstream with N bits. If the channel has zero noise, then you can in principle send infinitely precise analog signal values. For example, you can send the exact value of $\pi$ volts over the channel. Since there is neither noise nor distortion in the channel (even one of the smallest possible bandwidth), you would be transmitting the infinite digits of the number $\pi$ to the receiver, which would require infinitely many bits to store. Therefore a single transmitted analog voltage value is equivalent to an infinite number of bits in a digital representation. When there is nonzero noise, however, you can only transmit analog values up to the noise-floor precision, which yields, for example, the SNR-based dynamic-range limits of ADC systems.
• @Rohit what do you think about the answer? Don't you have any comments? – Fat32 Oct 10 '17 at 17:15
• Sorry sir... I had some urgent work at that time, so I didn't read your answer properly... I still have some doubt about the last paragraph of your answer but I will ask it later. :-) – Rohit Oct 11 '17 at 9:08
By the sampling theorem, uniform samples taken at a rate of more than twice the bandwidth of a band-limited signal can be used to reconstruct the signal perfectly by sinc interpolation. For each different set of sample values there is a different band-limited function. In the absence of noise, by increasing the bit depth of the samples one can store an unlimited amount of information per sample into the band-limited signal, and get it back simply by sampling and digitizing it again.
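A minimal numerical sketch of that sinc reconstruction (added for illustration; the tone frequency and sampling rate below are arbitrary assumptions):

```python
import numpy as np

f0 = 3.0                  # assumed band-limited tone frequency, Hz (arbitrary)
fs = 10.0                 # sampling rate > 2*f0, Hz
n = np.arange(-50, 51)    # sample indices
samples = np.sin(2 * np.pi * f0 * n / fs)   # uniform samples of the signal

# Sinc interpolation: x(t) = sum_n x[n] * sinc(fs*t - n), with normalized sinc
t = np.linspace(-1.0, 1.0, 1001)
x_rec = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

err = np.max(np.abs(x_rec - np.sin(2 * np.pi * f0 * t)))
print("max reconstruction error:", err)  # nonzero only because the infinite sum is truncated
```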
http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=2873 | When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.FSTTCS.2010.308
URN: urn:nbn:de:0030-drops-28739
URL: http://drops.dagstuhl.de/opus/volltexte/2010/2873/
### Computing Rational Radical Sums in Uniform TC^0
### Abstract
A fundamental problem in numerical computation and computational geometry is to determine the sign of arithmetic expressions in radicals. Here we consider the simpler problem of deciding whether $\sum_{i=1}^m C_i A_i^{X_i}$ is zero for given rational numbers $A_i$, $C_i$, $X_i$. It has been known for almost twenty years that this can be decided in polynomial time. In this paper we improve this result by showing membership in uniform TC0. This requires several significant departures from Blömer's polynomial-time algorithm as the latter crucially relies on primitives, such as gcd computation and binary search, that are not known to be in TC0.
### BibTeX - Entry
@InProceedings{hunter_et_al:LIPIcs:2010:2873,
author = {Paul Hunter and Patricia Bouyer and Nicolas Markey and Jo{\"e}l Ouaknine and James Worrell},
title = {{Computing Rational Radical Sums in Uniform TC^0}},
booktitle = {IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
pages = {308--316},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-23-1},
ISSN = {1868-8969},
year = {2010},
volume = {8},
editor = {Kamal Lodaya and Meena Mahajan},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
http://en.wikipedia.org/wiki/Cotangent_bundle | # Cotangent bundle
In mathematics, especially differential geometry, the cotangent bundle of a smooth manifold is the vector bundle of all the cotangent spaces at every point in the manifold. It may be described also as the dual bundle to the tangent bundle.
## The cotangent sheaf
Smooth sections of the cotangent bundle are differential one-forms.
### Definition of the cotangent sheaf
Let M be a smooth manifold and let M×M be the Cartesian product of M with itself. The diagonal mapping Δ sends a point p in M to the point (p,p) of M×M. The image of Δ is called the diagonal. Let $\mathcal{I}$ be the sheaf of germs of smooth functions on M×M which vanish on the diagonal. Then the quotient sheaf $\mathcal{I}/\mathcal{I}^2$ consists of equivalence classes of functions which vanish on the diagonal modulo higher order terms. The cotangent sheaf is the pullback of this sheaf to M:
$\Gamma T^*M=\Delta^*(\mathcal{I}/\mathcal{I}^2).$
By Taylor's theorem, this is a locally free sheaf of modules with respect to the sheaf of germs of smooth functions of M. Thus it defines a vector bundle on M: the cotangent bundle.
### Contravariance in manifolds
A smooth morphism $\phi\colon M\to N$ of manifolds induces a pullback sheaf $\phi^*T^*N$ on M, and there is an induced map of vector bundles $\phi^*(T^*N)\to T^*M$.
## The cotangent bundle as phase space
Since the cotangent bundle X=T*M is a vector bundle, it can be regarded as a manifold in its own right. Because of the manner in which the definition of T*M relates to the differential topology of the base space M, X possesses a canonical one-form θ (also tautological one-form or symplectic potential). The exterior derivative of θ is a symplectic 2-form, out of which a non-degenerate volume form can be built for X. For example, as a result X is always an orientable manifold (meaning that the tangent bundle of X is an orientable vector bundle). A special set of coordinates can be defined on the cotangent bundle; these are called the canonical coordinates. Because cotangent bundles can be thought of as symplectic manifolds, any real function on the cotangent bundle can be interpreted to be a Hamiltonian; thus the cotangent bundle can be understood to be a phase space on which Hamiltonian mechanics plays out.
### The tautological one-form
Main article: Tautological one-form
The cotangent bundle carries a tautological one-form θ also known as the Poincaré 1-form or Liouville 1-form. (The form is also known as the canonical one-form, although this can sometimes lead to confusion.) This means that if we regard T*M as a manifold in its own right, there is a canonical section of the vector bundle T*(T*M) over T*M.
This section can be constructed in several ways. The most elementary method is to use local coordinates. Suppose that $x^i$ are local coordinates on the base manifold M. In terms of these base coordinates, there are fibre coordinates $p_i$: a one-form at a particular point of T*M has the form $p_i\,dx^i$ (Einstein summation convention implied). So the manifold T*M itself carries local coordinates $(x^i,p_i)$ where the x are coordinates on the base and the p are coordinates in the fibre. The canonical one-form is given in these coordinates by
$\theta_{(x,p)}=\sum_{i=1}^n p_i\,dx^i.$
Intrinsically, the value of the canonical one-form at each fixed point of T*M is given as a pullback. Specifically, suppose that $\pi : T^*M\to M$ is the projection of the bundle. Taking a point in $T_x^*M$ is the same as choosing a point x in M and a one-form ω at x, and the tautological one-form θ assigns to the point (x, ω) the value
$\theta_{(x,\omega)}=\pi^*\omega.$
That is, for a vector v in the tangent bundle of the cotangent bundle, the application of the tautological one-form θ to v at (x, ω) is computed by projecting v into the tangent bundle at x using $d\pi : TT^*M\to TM$ and applying ω to this projection. Note that the tautological one-form is not a pullback of a one-form on the base M.
### Symplectic form
The cotangent bundle has a canonical symplectic 2-form on it, as an exterior derivative of the canonical one-form, the symplectic potential. Proving that this form is, indeed, symplectic can be done by noting that being symplectic is a local property: since the cotangent bundle is locally trivial, this definition need only be checked on $\mathbb{R}^n \times \mathbb{R}^n$. But there the one-form defined is the sum of $y_i\,dx_i$, and its differential is the canonical symplectic form, the sum of $dy_i\wedge dx_i$.
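For instance (a short supplementary check in these coordinates, not part of the original article), non-degeneracy can be read off from the fact that the top exterior power of this 2-form is a nowhere-vanishing volume form:

$\omega=\sum_{i=1}^n dy_i\wedge dx_i, \qquad \omega^n=\underbrace{\omega\wedge\cdots\wedge\omega}_{n}=n!\; dy_1\wedge dx_1\wedge\cdots\wedge dy_n\wedge dx_n \neq 0.$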
### Phase space
If the manifold $M$ represents the set of possible positions in a dynamical system, then the cotangent bundle $\!\,T^{*}\!M$ can be thought of as the set of possible positions and momenta. For example, this is a way to describe the phase space of a pendulum. The state of the pendulum is determined by its position (an angle) and its momentum (or equivalently, its velocity, since its mass is not changing). The entire state space looks like a cylinder, which is the cotangent bundle of the circle. The above symplectic construction, along with an appropriate energy function, gives a complete determination of the physics of the system. See Hamiltonian mechanics for more information, and the article on geodesic flow for an explicit construction of the Hamiltonian equations of motion.
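As a concrete worked example (added here for illustration; the pendulum mass $m$, length $\ell$, and gravitational acceleration $g$ are the usual assumed symbols), the pendulum's phase space is $T^*S^1$ with angle coordinate $q$ and conjugate momentum $p$:

$\theta = p\,dq,\qquad \omega = d\theta = dp\wedge dq,\qquad H(q,p)=\frac{p^2}{2m\ell^2}-mg\ell\cos q,$

and the usual Hamilton equations $\dot q=\partial H/\partial p = p/(m\ell^2)$, $\dot p=-\partial H/\partial q=-mg\ell\sin q$ reproduce the pendulum's equation of motion $\ddot q = -(g/\ell)\sin q$ (the overall sign convention for $\omega$ varies between authors).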
http://aas.org/archives/BAAS/v37n4/aas207/502.htm | AAS 207th Meeting, 8-12 January 2006
Session 63 From Here to Eternity: The Spitzer Legacy Programs
Poster, Tuesday, 9:20am-6:30pm, January 10, 2006, Exhibit Hall
## [63.14] The Spatially-Resolved Star Formation Law in M51
R.C. Kennicutt (IoA Cambridge), D. Calzetti (STScI), F. Walter (MPIA), M.K. Prescott (U. Arizona), SINGS Team
Most of our current knowledge of the star formation rate (SFR) vs gas density law in galaxies is based on spatially integrated measurements of disks, and thus averaged on scales of order 1 - 50 kpc. Thanks to a new generation of high-resolution multi-wavelength surveys of galaxies, it is now possible to probe the detailed behavior of the star formation law as a function of position within galaxies, with spatial resolutions of order 100 - 1000 pc. Such a program is a core component of the Spitzer Infrared Nearby Galaxies Survey (SINGS).
We present results from a study of the spatially-resolved star formation law in the nearby Sc galaxy M51a (NGC 5194). We have combined archival Pα maps of the center of the galaxy from HST, ground-based Hα images, and 24 μm infrared imaging from the SINGS project to map the extinction-corrected SFR in the disk. We have also used maps from The HI Nearby Galaxies Survey (THINGS) and CO maps from the BIMA SONG survey and other published surveys to map the corresponding distributions of atomic and molecular gas. We have correlated the SFRs and gas densities on scales of 7 - 45 arcsec (~300 - 1850 pc), and find that over all of these scales the relation is well represented by a Schmidt-type power law, with a slope (N ~ 1.5) similar to that found in integrated measurements of galaxies. We also present a new method for deriving extinction-corrected SFRs of HII regions and galaxies from the combination of 24 μm and Hα measurements.
Bulletin of the American Astronomical Society, 37 #4
© 2005. The American Astronomical Society.
https://brilliant.org/problems/tasty-trigo-3/ | Geometry Level 4
$\large\dfrac{a^2 + b^2 +c^2}{d^2}$
If $$a$$, $$b$$, $$c$$ and $$d$$ are side lengths of a quadrilateral, then what is the infimum value of the expression above?
http://mathhelpforum.com/calculus/15921-cauchys-theorem-print.html | # Cauchy's Theorem
• June 13th 2007, 04:45 AM
moolimanj
Cauchy's Theorem
Hi, any help with this question would be greatly appreciated - I just can't get my head around the squared term in the denominator:
(a) Evaluate the integral:
$\oint_C \frac{e^{\pi z}}{(z+i)(z-3i)^2}\,dz$ when
(i) C= {z:|z|=2}
(ii) C = {z:|z-1|=1}
(iii) C = {z:|z-2i|=2}
(b) Use Liouville's theorem to show that there is at least one value $z \in \mathbb{C}$ for which $|\sin z|>2007$
(c) Show that if f is an entire function that satisfies $|2007i+f(z)|>2007$ for all $z \in \mathbb{C}$, then f is constant
(d) Deduce from the result in (b) that if f is an entire function that satisfies Im(f(z))>0 for all $z \in \mathbb{C}$, then f is constant
Many thanks
• June 13th 2007, 06:44 AM
ThePerfectHacker
Quote:
Originally Posted by moolimanj
(b) Use Liouville's theorem to show that there is at least one value $z \in \mathbb{C}$ for which $|\sin z|>2007$
Assume not! Then $|\sin z| \leq 2007$ for all $z\in \mathbb{C}$. Now since $f(z) = \sin z$ is an entire function that is (by assumption) bounded, Liouville's theorem would force $\sin z = K$ for some complex number $K$, which is not true, i.e. $\sin \pi = 0 \mbox{ and } \sin \frac{\pi}{2} = 1$. Thus by contradiction $|\sin z| > 2007$ for some $z\in \mathbb{C}$.
Quote:
(c) Show that if f is an entire function that satisfies |2007i+f(z)|>2007 then f is constant
Hint: $||f(z)|-|2007i|| \leq |f(z) - 2007i| \leq |f(z)|+|2007i|$
$|2007i + f(z)| > 2007$
• June 13th 2007, 08:54 AM
ThePerfectHacker
1 Attachment(s)
Quote:
Originally Posted by moolimanj
(a) Evaluate the integral:
$\oint_C \frac{e^{\pi z}}{(z+i)(z-3i)^2}\,dz$ when
This is done by Cauchy's Integral Formula:
First we write,
$\oint_C \frac{e^{\pi z}}{(z+i)(z-3i)^2} dz = \oint_C e^{\pi z}\cdot \left( \frac{-1/16}{z+i} + \frac{1/16}{z-3i} + \frac{1/(4i)}{(z-3i)^2} \right) dz$
That was partial fractions.
Now, separate the contour integrals over "Western Europe":
$\boxed{-\frac{1}{16}\oint_C \frac{e^{\pi z}}{z+i} dz + \frac{1}{16}\oint_C \frac{e^{\pi z}}{z-3i} dz + \frac{1}{4i} \oint_C \frac{e^{\pi z}}{(z-3i)^2} dz}$
Let me state the two theorems we need to know:
Theorem 1: Let $\gamma$ be a contour and let $f(z)$ be holomorphic in an open set containing the contour and its interior. Then for any point $a$ inside $\gamma$ we have:
$f(a) = \frac{1}{2\pi i} \oint_{\gamma} \frac{f(z)}{z-a} dz$
Theorem 2: Let $\gamma$ be a contour and let $f(z)$ be holomorphic in an open set containing the contour and its interior. Then for any point $a$ inside $\gamma$ we have:
$f'(a) = \frac{1}{2\pi i} \oint_{\gamma} \frac{f(z)}{(z-a)^2} dz$
Now we know enough to do this problem.
Quote:
(i) C= {z:|z|=2}
I will do this one and leave (ii) and (iii) for you to think about. Look below. The circle is shown and the red points are the singularities.
We have three contour integral we need to evaluate.
The second and third ones are simple: they are zero because "all poles are in Eastern Europe".
By that I mean that the functions we are integrating are holomorphic on and inside that circle (their only singularity, $z=3i$, lies outside $|z|=2$), and Cauchy's Theorem (not the integral formula, the one with the contours) tells us that the integrals are zero.
So the problem reduces to finding
$-\frac{1}{16}\oint_C \frac{e^{\pi z}}{z+i} dz$
The function here is clearly $f(z) = e^{\pi z}$ and it indeed satisfies Theorem 1.
So the Contour Integral is equal to:
$- \frac{1}{16} \cdot 2\pi i \cdot f(-i) = -\frac{\pi i e^{-\pi i}}{8} = - \frac{\pi i \cos \pi + \pi \sin \pi }{8} = \frac{\pi i}{8}$
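As a quick numerical cross-check (a sketch added here, not part of the original thread; it simply parameterizes the circle $|z|=2$ and integrates the original integrand directly):

```python
import numpy as np

# Integrand from the problem: e^{pi z} / ((z + i)(z - 3i)^2)
def f(z):
    return np.exp(np.pi * z) / ((z + 1j) * (z - 3j) ** 2)

# Parameterize C = {|z| = 2}: z = 2 e^{it}, dz = 2i e^{it} dt, t in [0, 2*pi)
n = 200000
t = (np.arange(n) + 0.5) * (2.0 * np.pi / n)   # midpoint rule
z = 2.0 * np.exp(1j * t)
dz = 2.0j * np.exp(1j * t)

integral = np.sum(f(z) * dz) * (2.0 * np.pi / n)
print(integral)           # ~ 0 + 0.3927j
print(np.pi * 1j / 8)     # value obtained above via the integral formula
```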
• June 15th 2007, 04:42 AM
moolimanj
Cauchy's
Thanks for your help on this, PerfectHacker.
However, I can't figure out how to do parts (a)(ii) and (iii).
If C={z:|z-1|=1} then 3i is again outside the circle. Does this mean that the answer is the same? -i in this instance is on the circle. I'm confused - if it's on the circle, should the answer be 0, or would the answer be the same as part (i)?
Also for part (iii), if C={z:|z-2i|=2} then -i is excluded but 3i is included in the contour. So how can the answer be 0 by Cauchy's theorem when at least one pole is contained within the circle? Please help
• June 15th 2007, 05:28 AM
ThePerfectHacker
1 Attachment(s)
Problem (ii): The contour integral is still the same:
$
\boxed{-\frac{1}{16}\oint_C \frac{e^{\pi z}}{z+i} dz + \frac{1}{16}\oint_C \frac{e^{\pi z}}{z-3i} dz + \frac{1}{4i} \oint_C \frac{e^{\pi z}}{(z-3i)^2} dz}
$
If a singularity lay on the circle the integral would not even be well defined, but you made a small mistake: $-i$ is not on the circle $|z-1|=1$, since $|-i-1|=\sqrt{2}>1$. Both points lie outside the circle, and hence by Cauchy's theorem the integral is zero.
• June 15th 2007, 05:39 AM
ThePerfectHacker
1 Attachment(s)
Problem (iii): Look at the picture below. As you say $3i$ is inside the contour and $-i$ is outside. That means the first integral in:
$
\boxed{-\frac{1}{16}\oint_C \frac{e^{\pi z}}{z+i} dz + \frac{1}{16}\oint_C \frac{e^{\pi z}}{z-3i} dz + \frac{1}{4i} \oint_C \frac{e^{\pi z}}{(z-3i)^2} dz}
$
Is zero by Cauchy's theorem.
However, the second and third one are not necessarily zero.
If you look at Theorem 1 and Theorem 2 that I posted. You will note that the function that we need here is $f(z)=e^{\pi z}$ to determine the value of each integral.
The first integral has $z-3i$ in the denominator so:
$\frac{1}{2\pi i}\oint_C \frac{e^{\pi z}}{z-3i} dz = f(3i) = e^{3\pi i} = \cos 3\pi + i \sin 3\pi = -1$.
To do the third integral we need the derivative: $f'(z) = \pi e^{\pi z}$ because it has $(z-3i)^2$ in the denominator. Thus:
$\frac{1}{2\pi i} \oint_C \frac{e^{\pi z}}{(z-3i)^2} dz = f'(3i) = \pi e^{3\pi i} = \pi \cos 3\pi + \pi i \sin 3\pi = - \pi$
Now you just have to add and simplify to get the answer, which is trivial.
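Carrying out that final addition explicitly (just completing the arithmetic from the two nonzero terms computed above):

$\oint_C \frac{e^{\pi z}}{(z+i)(z-3i)^2}\,dz = \frac{1}{16}\cdot 2\pi i\cdot(-1) + \frac{1}{4i}\cdot 2\pi i\cdot(-\pi) = -\frac{\pi i}{8} - \frac{\pi^2}{2}.$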
http://www.computer.org/csdl/trans/tc/1983/10/01676139-abs.html | Issue No.10 - October (1983 vol.32)
pp: 947-952
D. Brand , IBM Thomas J. Watson Research Center
ABSTRACT
A signal in a logical network is called redundant if it can be replaced by a constant without changing the function of the network. Detecting redundancy is important for two reasons: guaranteeing coverage in stuck-fault testing, and simplifying multilevel logic without converting to two levels. In particular, removing redundancy allows simplification in the presence of don't cares. The algorithm for redundancy removal described in this paper has been used successfully for both of the above purposes. It achieves savings in computer resources at the expense of possibly failing to discover some redundancies.
INDEX TERMS
testability, Don't cares, logic synthesis, optimization, redundancy
CITATION
D. Brand, "Redundancy and Don't Cares in Logic Synthesis", IEEE Transactions on Computers, vol.32, no. 10, pp. 947-952, October 1983, doi:10.1109/TC.1983.1676139 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926628828048706, "perplexity": 2148.8556486899333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398444974.3/warc/CC-MAIN-20151124205404-00029-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://infoscience.epfl.ch/record/231346?ln=en | ## Deciphering ambiguous control over fluxes through characterization and reduction of uncertainty
The development of kinetic models is still facing the challenges such as large uncertainties in available data. Uncertainty originating from various sources including metabolite concentration levels, flux values, thermodynamic and kinetic data propagates to the kinetic parameter space and prevents an accurate identification of parameters. The uncertainty in parameters propagates further to the outputs of the corresponding kinetic models, such as control coefficients and time evolutions of metabolic fluxes and concentrations, and it can have deleterious effects on metabolic engineering and synthetic biology decisions. We have introduced a framework recently, iSCHRUNK (in Silico Approach to Characterization and Reduction of Uncertainty in the Kinetic Models), which allows us to characterize uncertainty in the parameters of studied kinetic models and integrate this information into a new population of kinetic models with reduced uncertainty. To this end, iSCHRUNK combines the ORACLE framework with parameter classification techniques. With ORACLE, we first construct a set of large-scale and genome-scale kinetic models that are consistent with the integrated datasets and physicochemical laws. We then employ parameter classification techniques to data-mine the integrated datasets together with the outputs of ORACLE. This way, iSCHRUNK allows us to uncover complex relationships between the integrated data and parameters of kinetic models, and to use the obtained information for constructing an improved set of kinetic models with reduced uncertainty in ORACLE. In this work, we analyzed possible improvements of xylose uptake rate (XTR) in a glucose-xylose co-utilizing S. cerevisiae strain. For this purpose, we computed a population of 200’000 large-scale kinetic models, and we determined the key enzymes controlling XTR. However, large uncertainties due to a limited number of measured flux and metabolite concentration values and lack of data about kinetic parameters led to widespread distributions around zero values for control coefficients of some enzymes such as hexokinase (HXK). We used iSCHRUNK to identify the enzymes and their kinetic parameters that determine negative and positive control of HXK over XTR. Our study showed that by engineering only three saturation states related to two enzymes we could enforce either negative or positive control of HXK over XTR. This result implies that for kinetic modeling of metabolism only a small number of kinetic parameters should be accurately determined and that we can disregard the remaining parameters.
Presented at:
Biochemical & Molecular Engineering XX, Newport Beach, California, USA, July 16-20, 2017
Year:
2017
https://kleinschmidt.wordpress.com/tag/fonts/ | ## Posts Tagged ‘fonts’
### XeTeX: modern font support for TeX
If you know me at all you know that I love LaTeX. It lets you specify the content and logical structure of your document and takes care of making it look nice, including tricky mathematical expressions, yadda yadda yadda.
One thing that LaTeX is really bad at, though, is font support. This isn’t a problem for the most part, and I’ve actually come to prefer the look of Computer Modern (the only real font ever created using Knuth’s Metafont language, and instantly recognizable to LaTeX geeks the world over). But if you need to, say, typeset something in Arial (shudder), there isn’t exactly an easy way to do it, and god forbid that you might want to use a font that’s not freely available (like, just an arbitrary example, Times New Roman).
Enter our hero, XeTeX (and its big sibling, XeLaTeX). XeLaTeX extends the low-level typesetting engine of TeX to use modern font/typography technology. This includes support for Unicode, and super-slick typography conventions like OpenType and Apple Advanced Typography (AAT), which allow the typesetting of scripts with complicated rules for combining symbols (like Tibetan) and different writing directions (e.g. right-to-left scripts such as Arabic and Hebrew).
The really great thing about XeTeX, for my current purposes, is that it allows you to typeset almost anything using any font installed natively on your computer. That is, XeTeX essentially adds that drop-down font-selection menu that every other text editor has.
As an example, I will show you how stupid-easy it is to typeset a document in 12 pt. Times New Roman (with 1-inch margins, not that it matters):
```
\documentclass[12pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{fontspec}
\setmainfont[Mapping=tex-text]{Times New Roman}
```
Yes, folks, it really is that easy: just two lines of code (above and beyond the usual documentclass/geometry combo), and nothing whatsoever that needs to be converted using FontForge, etc. The only trick is that, instead of running latex, you run xelatex (which is easy to automate using emacs and AUCTeX).
As you can probably tell from the snippet above, the package you want to use is fontspec, which is the LaTeX interface for XeTeX’s font-specification system, and its documentation has lots of good examples.