url (stringlengths 15–1.13k)
text (stringlengths 100–1.04M)
metadata (stringlengths 1.06k–1.1k)
http://mathoverflow.net/questions/141150/2-class-group-of-a-quadratic-imaginary-extension/141153
# 2-class group of a quadratic imaginary extension

Let $p\equiv 5 \pmod 8$ be a prime number, and consider $K=\mathbb{Q}(\sqrt{-p})$. I would like to check that the $2$-Sylow subgroup of the class group $C_K$ has order $2$ (I'm pretty sure it's true). Apparently, this can be done using genus theory, but I don't know anything about it or class field theory, really. I know that $M=\mathbb{Q}(i,\sqrt{p})$ is the genus field of $K$. Knowing this, and using the few results I found in the literature, I can show that if $K'/K$ is an unramified abelian extension of degree $4$, then $K'/\mathbb{Q}$ is Galois with Galois group the quaternion group $\mathbb{H}_8$, and I'm stuck... If someone knows how to prove this, I would be happy to read a proof. Greg

- Greg, in the edit to my answer I should have said that a generator of one of the 2 ideals is a square root of r(a+bi), while that of the other is a square root of r(a-bi). Anyway, does my answer solve your follow-up question to your satisfaction? I think I've kept the use of class-field theory to a minimum. –  paul Monsky Sep 9 '13 at 13:14

Here's a simple argument using Hilbert's theorem 90. No doubt it's the same as Will's (deleted) argument couched in quadratic form language, and I suppose it's the same as the Fröhlich–Taylor argument as well. First one shows that the 2-torsion in the class group has order 2 and is generated by the prime P of norm 2. For suppose I lies in an ideal class C of order 2. Then the fractional ideal I/I(bar) is principal, generated by some alpha, and the norm of alpha can only be 1. Using Hilbert's theorem 90, one finds an ideal J in the class of C with J=J(bar). Since only 2 and p ramify, and the prime over p is principal, we can assume J is P or (1). Since a^2+pb^2 is never 2, the result follows. It remains to show that P isn't equivalent to the square of any ideal. Suppose on the contrary that we have (beta)=P*(Q^2). Taking norms we get a^2+pb^2 = 2*(non-zero square). We may assume a and b are in Z and are odd. Then the left-hand side is 6 mod 8, a contradiction.

EDIT: Here's an idea for handling your follow-up question in a reasonably elementary way, giving an explicit generator of your ideal lying over p, without heavy class-field-theory artillery. Let K be Q(root(-p)).

Lemma 1: There is no unramified degree 2 extension of K other than K(i). For let K(root(alpha)) be such an extension. Then (alpha) is the square of an ideal. So from what we know, this ideal is principal or P*(principal), and alpha is in the group generated by -1, 2 and the squares. The rest is easy.

Lemma 2: There is no degree 4 cyclic unramified extension of K. For let D(L) and D(K) be the groups of fractional ideals of L and K. The first inequality of class field theory (whose proof, using the Herbrand quotient, is elementary; no ideles required) tells us that the quotient of D(K) by the subgroup generated by the principal fractional ideals and the norms of elements of D(L) has order at least 4. It follows that C/C^4 has order at least 4, where C is the ideal class group of K. But we know that C^4=C^2, and that C/C^2 has order 2.

Corollary: There is no unramified degree 2 extension L of K(i). For in this case, L/K would be of degree 4 and solvable, and the smallest Galois extension of K containing L would contain a degree 4 unramified abelian extension of K, contradicting Lemmas 1 and 2.

Now write p as a^2+b^2 with b even, and let r be the fundamental unit in Z[(1+root(p))/2].
Then it should be possible to show that the extension of K(i) obtained by adjoining a square root of r(a+bi) is unramified. So by the corollary r(a+bi) has a square root in K(i), and this square root is what you're looking for.

- Of course Hilbert's theorem 90 in this situation is a triviality, and in some sense goes back to Gauss. –  paul Monsky Sep 3 '13 at 21:09

Thanks! Very nice. Now, I'd like to ask another thing, which is related. I'd like to prove that the class group of $M=\mathbb{Q}(i,\sqrt{p})$ has no element of order 2, in an elementary way. I know how to prove that $M$ has no unramified 2-extension; properties of the Hilbert class field then imply the result. However, I wonder if there is an elementary way to see this, in the spirit of the previous answer. My ultimate goal is to prove that the ideals lying above $p$ are principal in an elementary way. Since the squares of these ideals are principal, I would be done. –  GreginGre Sep 4 '13 at 11:13

Is this true when p=37? I don't think that either root(37) or (6+root(37))(root(37)) is a norm from the ring of integers in Q(i,root(37)). –  paul Monsky Sep 4 '13 at 13:09

The element $\alpha=(-3+4i)+(1-i)\dfrac{1+\sqrt{37}}{2}$ is an algebraic integer with absolute norm $37$. In fact, its norm over $\mathbb{Q}(i)$ is $-6+i$. –  GreginGre Sep 4 '13 at 14:12

Proof of the fact that $M$ has no unramified $2$-extension: let $G$ be the Galois group of the maximal unramified $2$-extension of $K$. The previous result shows that $G^{ab}$ has order $2$. A classical result of group theory says that $G$ has order $2$ in this case. So the maximal unramified $2$-extension of $K$ has degree $2$ over $K$, hence it is $M$, since $M/K$ is unramified. In particular, $M$ has no unramified $2$-extensions, so $C_M$ has odd order. But the square of an ideal $\mathcal{P}$ above $p$ is principal (generated by a prime divisor of $p$ in $\mathbb{Z}[i]$) and we are done. –  GreginGre Sep 4 '13 at 14:22

The result is contained in Theorem 41 of Fröhlich–Taylor, Algebraic Number Theory. See also Theorem 39 in that book.

- Thanks! I had a look at Fröhlich–Taylor. The proof seems quite involved. I thought this would be easier to show, using elementary arguments. At least, the result is true :-) –  GreginGre Sep 3 '13 at 19:51
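Not part of the original thread: a quick numerical sanity check of the claim that the 2-part of the class group of $\mathbb{Q}(\sqrt{-p})$ has order 2 when $p\equiv 5 \pmod 8$. The sketch below counts reduced positive-definite binary quadratic forms of discriminant $-4p$ (this count equals the class number $h(-4p)$; for $p\equiv 1 \pmod 4$ there are no imprimitive forms to worry about) and verifies $h \equiv 2 \pmod 4$, i.e. that the 2-Sylow subgroup has order exactly 2, for the first few such primes.

```python
# Sanity check (not from the thread): for primes p = 5 (mod 8), the class number
# h(-4p) of Q(sqrt(-p)) should satisfy h = 2 (mod 4), i.e. its 2-Sylow has order 2.

def class_number(D):
    """Count reduced positive-definite forms a*x^2 + b*x*y + c*y^2 of discriminant
    D = b^2 - 4ac < 0 (reduced: |b| <= a <= c, with b >= 0 when |b| = a or a = c)."""
    assert D < 0 and D % 4 in (0, 1)
    h, b = 0, D % 2                      # b must have the same parity as D
    while 3 * b * b <= -D:
        ac = (b * b - D) // 4            # a*c is determined by b
        a = max(b, 1)
        while a * a <= ac:
            if ac % a == 0:
                c = ac // a
                h += 1 if (b == 0 or a == b or a == c) else 2   # count (a,b,c) and (a,-b,c)
            a += 1
        b += 2
    return h

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in [q for q in range(5, 300) if q % 8 == 5 and is_prime(q)]:
    h = class_number(-4 * p)
    print(p, h, h % 4 == 2)              # expect True in every row: h = 2 * (odd)
```

For instance p = 29 gives h = 6 and p = 37 gives h = 2, both congruent to 2 mod 4, consistent with the claim in the question.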
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9251823425292969, "perplexity": 331.9366264036462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463679.20/warc/CC-MAIN-20150226074103-00133-ip-10-28-5-156.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/circularly-polarized-plane-wave-and-electron.248281/
# Circularly-polarized plane wave and electron 1. Aug 3, 2008 ### fizzle Can a classical circularly-polarized plane electromagnetic wave, that's bounded in time, transfer net energy/momentum to an electron? 2. Aug 3, 2008 ### cesiumfrog So you're excluding Compton scattering.. A point charge in a circularly polarized wave will be driven around, by the direction of the local field, in a little circle (so some energy must be transferred at least temporarily). And since it is accelerated into such motion, the point charge must radiate its own EM wave. This is very similar to the way light interacts with a medium; in one direction the little EM wave will slightly cancel (and retard) the driving wave, and in the opposite direction this little EM wave can be thought of as reflection of the driving wave. By global argument, the electron must be pushed forward (tracing a slackening helix) to complement the reflected component of the wave momentum. Last edited: Aug 3, 2008 3. Aug 3, 2008 ### Andy Resnick Are you asking if electromagnetic radiation can transfer (angular) momentum to a free particle? Circularly polarized light incident on a linearly birefringent object will transfer angular momentum to the object. Or are you asking if the electromagnetic field can drive angular momentum transitions for a bound electron? Yes- but circularly polarized light will not transfer angular momentum. Instead, so-called optical vortices are associated with orbital angular momentum- circular polarization states are associated with spin transitions. However, the literature is a little unclear (to me) if an optical vortex is formally identical to the angular momentum operator. 4. Aug 3, 2008 ### fizzle I need to be more specific! I'm talking about a single free electron initially at rest and a single CP plane wave in a completely classical world. The CP wave forces the electron in a helical path in the direction the CP wave is travelling, while under the influence of the wave. I'm interested in the end result where the CP wave's amplitude increases from zero to some max value then decreases back to zero. When I do a simple computer simulation of this the result is no net transfer of energy/momentum to the electron. It always returns to rest when the CP wave's amplitude is a Gaussian. Last edited: Aug 3, 2008 5. Aug 3, 2008 ### cesiumfrog By "bounded" (Gaussian) you must mean a (half-) wave "pulse"? I used global argument because the mechanism is not obvious. There have been published papers on the topic of how a transverse wave on a string can transfer longitudinal momentum (that case can be answered with Pythagorean longitudinal stretching of the string). It is well known that a mirror recoils from a light pulse, but the mechanism is obscure since ideal mirrors have no longitudinal extent and since the EM fields appear perfectly transverse. I suspect your simulation would have to be extremely precise (this is a small effect) and take into account the finite extent of the charge as well as its self-interaction. There is no effect if the driving field is static or if the field of the charge itself is insignificant. There is reasonable basis against treating the electron as a point particle in such context as this. By the way, did you have a specific reference mentioning the effect? Last edited: Aug 3, 2008 6. Aug 3, 2008 ### fizzle This is all standard electromagnetic theory so there's no need to get into self-interaction, etc. 
The E field in the CP wave drives the electron in a circle and that circular motion results in v x B forcing the electron in the direction of the wave's motion. The non-zero amplitude could be hundreds, thousands, millions, etc. of cycles of the wave. Why am I asking this question? Because it would have significant implications in any classical description of Compton Scattering. It means that an electron would never get "ejected" from the target block simply because it interacted with an incident CP wave. The electron must interact with something else (e.g. an atom, another electron, etc.) while interacting with the incident CP wave in order to retain any energy. This would explain one of the problems in a classical description of Compton Scattering => the speed of the electron required to produce the correct angular-dependence of the scattered radiation due to the relativistic Doppler Effect is much less than the speed of an ejected electron. When I calculate the orbital component of the electron's velocity when interacting with a CP wave (when the electron has effectively absorbed a "photon" of energy), I find that the orbital speed is equal to the maximum possible ejection velocity. I think this gives a much more complete classical description of the Compton Effect, including the seemingly random electron ejection direction. Basically, the incident CP wave sets the electron on its helical path where it then has to interact with a third party to retain any energy. 7. Aug 3, 2008 ### cesiumfrog I had forgotten to consider the magnetic field, thank you. (FWIW, self-interaction is relevant to some problems in a completely classical world, such as the inertia of a charged object/"electron".) Why did your simulation fail? 8. Aug 4, 2008 ### Andy Resnick I don't follow you here. Why do you think a circularly polarized plane wave will "force the (free) electron (to move) in a helical path"? Under what conditions? A circularly polarized beam incident onto a conductor should generate synchrotron radiation, if your idea is correct. 9. Aug 4, 2008 ### fizzle Standard em theory. The electric field forces the electron to travel in a circle while v x B forces the electron to move in the direction of travel of the wave. The overall result is a helical path. This is all well known. A CP wave incident on a conductor results in a reflected CP wave. 10. Aug 4, 2008 ### Andy Resnick Not to me.... can you provide a reference? Erm, yes... but if the conduction electrons are induced to move in little circles, as you claim is well-known, they are undergoing an acceleration, which produces a field. Has anyone measured or detected this field? 11. Aug 4, 2008 ### fizzle Reference? This is very old stuff. I guess you could look up Thomson Scattering and then not ignore v x B. More modern references are in papers on laser acceleration. The field radiated by electrons is responsible for the reflection off a conductor. You measure and see it all the time. How do you think a mirror works? The electric field in the incident em wave causes the electrons to accelerate and emit a wave whose electric field is the inverse of that in the incident wave. You typically ignore qv x B in the incident wave because its force is negligible compared to qE. I'm interested in the regime where v x B is not negligible => very high field strengths in the incident wave. 12. Aug 4, 2008 ### Andy Resnick Oh, plasmas. I don't know much about them. 
From what I just read, an interesting experiment would be to use dusty plasmas: the particle size is larger, so the optical wavelength would be larger as well, and one could probably use visible light and track individual particles. I don't know the relevant literature; maybe someone has already done it. 13. Aug 4, 2008 ### cesiumfrog Why are you specifically interested in circularly polarised light? Why are you so interested in describing Compton scattering classically (whereas it seems like evidence for the quantisation of light)? Are you trying to describe bound electrons as classical charged particles orbiting a nucleus? If so, how do you explain that they do not radiate and fall in? Do you agree that a free electron exposed to (circularly polarised) light, with a finite envelope, will indeed experience a net transfer of energy and momentum (rather than being restored to its original velocity)? Do you have any contrary evidence/argument? Why do you say (classically) that an electron would need to interact with some other object in order to "be ejected" by (sufficiently intense) light? Since you did mention the photon, could you clarify what you mean by "classical description"? 14. Aug 5, 2008 ### fizzle I'm interested in CP waves because they're the easiest to analyze. I'm interested in a classical description of Compton Scattering because I want to see how far I can get with it. Does it provide any physical insight into quantum theory? Where does it go wrong? In the abstract of his 1923 paper, "A Quantum Theory of the Scattering of X-Rays by Light Elements", Compton explicitly states that the angular-dependence of the wavelength shift can be explained by classical theory if you assume an electron moving in the direction of propagation (the "drift velocity"). I'm not trying to describe bound electrons, just the simplest electromagnetic interaction possible: a single free electron and a single plane wave. I do not agree that a free electron will receive net energy/momentum from a bounded CP em wave. By "net" I mean that the energy/momentum of the free electron will be the same before and after the CP wave has passed it by. Here's a paper by McDonald with more details and references: http://www.hep.princeton.edu/~mcdonald/accel/dressing.pdf I say that a free electron, initially at rest, must interact with a third party in order to be "ejected" (a.k.a. gain net energy/momentum) from a bounded CP wave ... otherwise it would have returned to rest after the wave had passed by. It will be displaced by the wave but will not net any energy/momentum. By "classical description" I mean classical electromagnetic theory plus special relativity. I've done the full analysis for Compton's original experiment (incident photon of 17.49 KeV) and the results are as follows: 1) An electron reaching Compton's drift velocity, where the angular-dependence of scattered em radiation matches his experiment with an incident wavelength of 7.09 x 10^-11 m, is moving with the wave at 3.31% of the speed of light. 2) The electron's orbital component, remember that it's under the influence of a circularly-polarized incident wave, is 25.3% of the speed of light. Its orbital radius is 2.95 x 10^-12 m. 3) The electric field in the incident CP wave has to be 1.18 x 10^16 V/m. #3 is the killer here. That's an extraordinarily high electric field strength. Furthermore, the electric field energy varies as the cube of the frequency -- a "Compton Catastrophe", if you will. 
Just for laughs, I plugged in some other interesting numbers to see what fell out of the equations: a) For an electron absorbing a photon of energy equal to an electron's rest mass (511 KeV), the drift velocity is exactly 50% of the speed of light. I was surprised by this initially but it makes sense when you look at it from a momentum viewpoint. b) For an electron absorbing a photon of energy required for pair production (1.022 MeV), the drift velocity is equal to 66.67% of the speed of light, as expected after seeing (a). However, an interesting number falls out of the equations: the electron's orbital radius is equal to its reduced Compton wavelength (3.86 x 10^-13 m). A final observation from the analysis is the value of the recoil electron's velocity. In Compton Scattering, the recoil/ejected electron's velocity is a certain value that depends on the recoil angle, which Compton calculated assuming a billiard ball-like interaction between a photon and an electron. For the most part, this ejection velocity is much higher than the drift velocity in the classical analysis. However, there is one distinct match with the classical analysis: the maximum value of the ejected electron's velocity is equal to the orbital velocity in the classical analysis. The bottom line of this exercise is that classical em theory can offer an explanation of Compton Scattering. It requires ultra-high field strengths and adds in the requirement of an interaction with a third party (which, interestingly enough, adds randomness/probability to the process). 15. Aug 5, 2008 ### cesiumfrog We agreed that the free electron will be longitudinally accelerated (in a "slackening helix") for the duration of the external pulse; by what mechanism do you propose it will be decelerated after the pulse? Do you concede that the electron will scatter part of the EM wave? That the EM wave must therefore end up with less momentum in its original direction? Would not the principle of momentum conservation then be violated if the responsible electron finished up with the same momentum as it started with? (I found the paper you cited to be unclear, incomplete and non-refereed, but I'll look into its Kibble reference...) 16. Aug 5, 2008 ### fizzle The electron isn't decelerated after the pulse, it's decelerated as the pulse's amplitude decreases. Anyway, I figured it out. You have to think in terms of unstable versus stable states between the electron's velocity and the incident wave. Initially, the electron is at rest and inertia keeps it from instantly adjusting its velocity to the incident wave. The wave's E and B fields "lead" the electron's velocity. This accelerates the electron orbitally and longitudinally, and is an unstable state. Eventually, the electron's velocity becomes perfectly in phase with the wave's E and B fields, where E is perpendicular to the velocity and B is parallel. With E perpendicular to v, there's no orbital acceleration and since B is parallel to v, there's no more longitudinal acceleration. This is a stable state and I imagine with more analysis you'd find that any small deviations around this state would lead back to it. Finally, when the amplitude of the incident wave begins to decrease, E and B in the wave begin to "lag" the electron's velocity because of inertia. E has a small component in the opposite direction of v and that reduces the orbital speed. Correspondingly, B is no longer perfectly parallel to v so v x B results in longitudinal deceleration. 
What's interesting is that the time derivative of the magnitude of E and B is critical here. Obviously, you think about the spatial and temporal derivatives in em theory all the time but here the time derivative of the magnitude is the most important part. No concession necessary. The scattering is an integral part of the whole process and carries away the energy and momentum removed from the incident plane wave. Be careful when accounting for energy and momentum in classical em fields! Doubt McDonald's analysis at your own peril. Better yet, derive the results yourself (as I did before finding his). He is correct. 17. Aug 6, 2008 ### cesiumfrog That is incorrect, but after reading Kibble I understand what you are trying to articulate. Since the acceleration corresponds with the peaks of E, the simple harmonic motion of the charge must have velocity lagging by a quarter of a cycle (where a=E=0), which coincides with the zero of B. Since the slope of the wave envelope can drive the charge velocity to lag by just slightly more/less than a quarter cycle (i.e., no longer exactly centred upon B=0), the net Lorentz force provides longitudinal acceleration then deceleration. This basic derivation neglects back-reaction of any wave produced (read: back-scattered) by the charge. In this toy-context energy-momentum is conserved, and Kibble writes that the electron will no longer be returned to rest if it "scatters a photon". 18. Aug 6, 2008 ### fizzle No, it is correct. Do you have a link to Kibble's paper? It sounds like you (and he?) are referring to a linearly-polarized incident wave. I'm talking about a circularly-polarized incident wave, where |E| never goes to zero in a cycle. In that case, the electron reaches a stable state where its tangential velocity is perpendicular to E and parallel to B. I call this being perfectly in phase with the wave. If the wave's amplitude decreases a bit, the wave "lags" behind a bit because the electron is moving too fast. This lag results in E having a small component in the opposite direction of the electron's velocity. This is what slows the electron's orbital speed. Simultaneously, the lag causes B to be slightly non-parallel to the electron's velocity and results in a force that decelerates its longitudinal speed. Note that the electron is not "scattering a photon" when it cycles from rest to interacting to rest again. As best as I can tell, that would be equivalent to the electron absorbing then re-emitting a photon unchanged. What's called "scattering a photon" happens when the electron interacts with a third party while interacting with the incident wave and doesn't return to rest. This causes a discontinuity in the electron's radiated em wave. If this paragraph appears to be vague and "hand waving", that's because it is! I reduced it to the simplest case possible. I don't know what happens when the incident wave has different polarizations. Circular is the easiest and produces the strongest effect since the longitudinal acceleration is the greatest. 19. Aug 6, 2008 ### cesiumfrog I linked Kibble's paper in post #15, but it's properly cited inside your own reference. For the stable state I do understand broadly saying "the electron is in phase with the wave" but note that the velocity lags E by 90 degrees (and may be perfectly in phase or exactly out of phase with B depending on polarisation chirality). Sorry if, earlier, my only describing one transverse component (independent of polarisation) caused confusion. 
I agree that, if the electron does not scatter any of the external wave, the electron will be longitudinally accelerated as the wave increases in intensity then decelerated back to its initial momentum as the wave decays past. (I hadn't previously been aware of this mechanism, which applies a net longitudinal force whenever the orbit of the charge is mismatched to the intensity of the wave.) However, consider the neglected (synchrotron) radiation produced by the rapid transverse acceleration of the electron. Obviously (since the position of the charge is 180 degrees out of phase with the applied E field) this radiation will be 180 degrees out of phase from the wave (hence the component propagating in the same direction will destructively interfere with the energy of the original wave) and we can call it a reflection (or continuous classical scattering) of part of the original wave, off the electron. Since part of the wave is scattered, the wave momentum decreases. But the emission of radiation has a back-reaction (e.g., the Abraham-Lorentz force) which will damp the transverse motion of the electron. This will maintain such a mismatch (between the external wave and the transverse orbit of the charge) as to re-invoke the earlier mechanism to also maintain a longitudinal acceleration (even now that the intensity of the incident wave is constant). However, when the incident wave intensity finally decreases away, the amount of associated longitudinal deceleration will be no more than before (despite extra intervening acceleration), therefore the free electron will ultimately retain some momentum from the wave (just as was required by global conservation of momentum to balance the reflected part of the wave). Last edited: Aug 6, 2008
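Not part of the original thread: a minimal non-relativistic sketch of the kind of simulation fizzle describes, integrating the Lorentz force on a free electron in a circularly polarized plane wave with a Gaussian amplitude envelope. The wavelength, peak field and envelope width are arbitrary illustrative values (an assumption, not taken from the thread); radiation reaction and relativistic corrections are ignored. With a weak field and a slowly varying envelope the final speed should come out small compared with the quiver velocity, in line with the discussion above.

```python
import numpy as np

# Constants (SI) and assumed illustrative wave parameters.
q, m, c = -1.602e-19, 9.109e-31, 2.998e8
lam   = 1.0e-6                      # carrier wavelength (assumed)
omega = 2 * np.pi * c / lam
E0    = 1.0e9                       # peak field in V/m (assumed, non-relativistic regime)
T     = 2 * np.pi / omega
tau   = 30 * T                      # Gaussian envelope width (assumed)
t0    = 4 * tau                     # envelope centre

def em_fields(t, r):
    """CP plane wave travelling along +z with a Gaussian amplitude envelope."""
    u = t - r[2] / c                # retarded time at the electron's z position
    amp = E0 * np.exp(-((u - t0) / tau) ** 2)
    E = amp * np.array([np.cos(omega * u), np.sin(omega * u), 0.0])
    B = np.cross([0.0, 0.0, 1.0], E) / c    # for a plane wave: B = (1/c) z_hat x E
    return E, B

def deriv(t, y):
    r, v = y[:3], y[3:]
    E, B = em_fields(t, r)
    a = (q / m) * (E + np.cross(v, B))      # non-relativistic Lorentz force
    return np.concatenate([v, a])

# Classical RK4 integration; electron initially at rest at the origin.
y, dt = np.zeros(6), T / 200
for step in range(int(8 * tau / dt)):
    t  = step * dt
    k1 = deriv(t, y)
    k2 = deriv(t + dt / 2, y + dt / 2 * k1)
    k3 = deriv(t + dt / 2, y + dt / 2 * k2)
    k4 = deriv(t + dt, y + dt * k3)
    y += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

v_quiver = abs(q) * E0 / (m * omega)        # transverse quiver-velocity scale
print("final |v| / quiver velocity:", np.linalg.norm(y[3:]) / v_quiver)
```

The printed ratio is a crude proxy for "does the electron return to rest"; a serious study would also track the longitudinal displacement and check convergence in the time step.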
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8606323599815369, "perplexity": 724.1291462657341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542288.7/warc/CC-MAIN-20161202170902-00407-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-10-counting-methods-and-probability-10-5-find-probabilities-of-independent-and-dependent-events-10-5-exercises-skill-practice-page-722/30
## Algebra 2 (1st Edition) a) In this case the events are independent: $\frac{4}{52}\cdot\frac{4}{52}\cdot\frac{4}{52}=\frac{1}{2197}$ b) In this case the events are not independent: the first one has probability $\frac{4}{52}$; the second one has probability $\frac{4}{51}$; and the third one has probability $\frac{4}{50}$. Thus the probability is: $\frac{4}{52}\cdot\frac{4}{51}\cdot\frac{4}{50}=\frac{8}{16575}$
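A quick check of the arithmetic with exact fractions (not part of the original solution; the three-draw setup is assumed to be as described above):

```python
from fractions import Fraction as F

# a) with replacement: the three draws are independent
p_a = F(4, 52) ** 3
# b) without replacement: the denominator shrinks by one card each draw
p_b = F(4, 52) * F(4, 51) * F(4, 50)

print(p_a, p_b)   # 1/2197 and 8/16575
```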
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9833363890647888, "perplexity": 172.89087443235172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879305.68/warc/CC-MAIN-20210419080654-20210419110654-00288.warc.gz"}
http://proceedings.mlr.press/v75/el-alaoui18a.html
# Detection limits in the high-dimensional spiked rectangular model Ahmed El Alaoui, Michael I. Jordan Proceedings of the 31st Conference On Learning Theory, PMLR 75:410-438, 2018. #### Abstract We study the problem of detecting the presence of a single unknown spike in a rectangular data matrix, in a high-dimensional regime where the spike has fixed strength and the aspect ratio of the matrix converges to a finite limit. This setup includes Johnstone’s spiked covariance model. We analyze the likelihood ratio of the spiked model against an “all noise” null model of reference, and show it has asymptotically Gaussian fluctuations in a region below—but in general not up to—the so-called BBP threshold from random matrix theory. Our result parallels earlier findings of Onatski et al. (2013) and Johnstone-Onatski (2015) for spherical spikes. We present a probabilistic approach capable of treating generic product priors. In particular, sparsity in the spike is allowed. Our approach operates through the principle of the cavity method from spin-glass theory. The question of the maximal parameter region where asymptotic normality is expected to hold is left open. This region, not necessarily given by BBP, is shaped by the prior in a non-trivial way. We conjecture that this is the entire paramagnetic phase of an associated spin-glass model, and is defined by the vanishing of the replica-symmetric solution of Lesieur et al. (2015).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9122977256774902, "perplexity": 642.6085595754047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572908.71/warc/CC-MAIN-20220817122626-20220817152626-00368.warc.gz"}
https://lqp2.org/node/1189
# Applications of local gauge covariance: Anomalies and QED in external potentials

DPG Spring Conference. Jochen Zahn on March 19, 2015. (Slides: TalkDPG.pdf)

The framework of locally covariant field theory has proved extremely fruitful for QFT on curved space-times. It can be straightforwardly generalized to more general non-trivial background fields, in particular gauge connections. I will present two applications of this framework. The first is an elementary computation of the chiral anomalies, directly on Lorentzian space-times. The second is QED in external potentials, where I compare the locally covariant definition of the current to other definitions and consider it in concrete cases.

Related papers:
Locally covariant chiral fermions and anomalies
The current density in quantum electrodynamics in external potentials
The current density in quantum electrodynamics in time-dependent external potentials and the Schwinger effect
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9287330508232117, "perplexity": 910.4360270204373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432521.57/warc/CC-MAIN-20200603081823-20200603111823-00372.warc.gz"}
http://mathoverflow.net/questions/32032/on-a-theorem-of-jacobson/32067
# On a theorem of Jacobson In a comment to an answer to a MO question, in which Bill Dubuque mentioned Jacobson's theorem stating that a ring in which $X^n=X$ is an identity is commutative (theorem which has shown up on MO quite a bit recently, e.g. here), Pierre-Yves Gaillard observed that there is a more general theorem in which $n$ is allowed to be different for each element of the ring, so that in fact we can rephrase the theorem as saying that the set $S=\{X^n-X:n>1\}\subset\mathbb Z[X]$ has the following property: If $A$ is a ring such that for every $a\in A$ there is an $f\in S$ such that $f(a)=0$, then $A$ is commutative. Of course, $S\cup (-S)$ also has this property, and even if we construct $S'$ from $S\cup(-S)$ by closing it under the operation of taking divisors in $\mathbb Z[X]$, it also has the same property. Pierre-Yves then asked: Is $S'$ maximal for this property? So, is it? - @Litt, I guess you are taking minimal in cardinality. But minimal in what sense, such that it implies commutative? x^2-x=0 is enough. Notice that putting more elements in the set doesn't necessarily make things better to get the commutativity because then some a's can satisfy equations from S and some others satisfy the new equations. (or all satisfy the new equations). –  Mlazhinka Shung Gronzalez LeWy Jul 15 '10 at 16:56 Dear Mariano: Thanks for mentioning this question. I think one should ask if $S\cup(-S)$ [and not $S$] is maximal for the property in question. –  Pierre-Yves Gaillard Jul 15 '10 at 16:57 $S$ isn't remotely maximal, as far as I can see. For example any divisor of $X^n-X$ for any $n$ can be added to it, as if $P(X)=0$ for $P$ some divisor of $X^n-X$ then $X^n-X=0$. Moreover, if you have a ring in which every element other than the number 9 satisfies $X^n=X$ for some $n$, then 9 will also satisfy this, because $3^n=3$ implies $9^n=9$. So you can also add $X-9$ to $S$. And so on and so on... –  Kevin Buzzard Jul 15 '10 at 18:17 @Pierre: can you prove that your set works before we start worrying about whether it's maximal? I only added $X-9$, I didn't add all $X-n$ simultaneously. But my gut instinct is that, if your set is OK, it won't be maximal because there are still plenty of other stupid tricks you can try (square of a linear factor etc). Note however that if someone comes up with an enlargement and then someone else says "OK then is this enlarged set maximal" we could be here all year! –  Kevin Buzzard Jul 15 '10 at 19:23 It still might be true though! I certainly don't know a counterexample (as you probably guessed---because if I did I would have played it instantly!). I am not optimistic about finding a "nice" maximal set though. I think the first thing to do is to read the proof and to see what's really going on, and to go from there. –  Kevin Buzzard Jul 15 '10 at 20:38 Herstein proved that $S$ can be enlarged to the set of all $a^2 p(a) - a$ with $p$ a polynomial (with integer coefficients). EDIT. Herstein's set may be maximal. The set can't contain any polynomials whose vanishing would be consistent with the ring containing (nonzero) nilpotent elements, so nothing in $S$ can be divisible by $a^2$. The lower degree terms are also highly constrained by the condition that if there is $p$-torsion then no $p^2$-torsion. I think I found the reference: Herstein, I. N. The structure of a certain class of rings. Amer. J. Math. 75, (1953). 864-871. 
--- Herstein even proves this: If, for every $a$ in a ring $R$, there exists a polynomial $p_a(t)$ (with integral coefficients) such that $a^2p_a(a)-a$ is central, then $R$ is commutative. –  Pierre-Yves Gaillard Jul 15 '10 at 22:56 Yes, I should have explained that the full result only requires central rather than zero, but the latter is enough to answer the question. I never proved the full Jacobson theorem but I did work out the $f(x) = x^n - x$ problem some time ago (using the universal ring generated by the required relations) and if I remember correctly, the key conditions are that $f(x)=x$ mod $p$ and the form of $f$ rules out nilpotent elements. These two conditions easily force a direct sum decomposition into finite fields and this concludes the proof. Herstein apparently showed this is enough in general. –  T.. Jul 15 '10 at 23:07 I hope the following is correct. Let $A$ be the set of $f(X)\in\mathbb Z[X]$ with constant term $\pm1$, let $XA$ be the set of the $Xf(X)$, $f(X)$ in $A$. Let $R$ be a ring and $C$ its center. Herstein's Theorem says that if for any $r$ in $R$ there is an $f(X)$ in $XA$ such that $f(r)$ is in $C$, then $R=C$. It trivially implies the following. Put $B:=\{X-n\ |\ n\in\mathbb Z\}$. If for any $r$ in $R$ there is an $f(X)$ in $XA\cup A$ such that $f(r)=0$, then $R=C$. If for any $r$ in $R$ there is an $f(X)$ in $XA\cup B$ such that $f(r)$ is in $C$, then $R=C$. –  Pierre-Yves Gaillard Jul 16 '10 at 5:41 @T.: Herstein's set isn't maximal because you can add $X-1$. –  Kevin Buzzard Jul 16 '10 at 5:43 If you allow the zero ring as a ring, yes. But the $\pm X-n$ and the ones from $A$ in Pierre-Yves' comment seem to be all you can add to Herstein's set. –  T.. Jul 16 '10 at 6:47
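Not from the thread: a small brute-force illustration of why identities of the form $X^n=X$ exclude nilpotents, which is the point behind the remark above that nothing in the set can be divisible by $a^2$. In the non-commutative ring $M_2(\mathbb{F}_2)$ the only elements that satisfy no relation $x^n=x$ with $n>1$ are the nonzero nilpotents, consistent with Jacobson's theorem; a sketch:

```python
from itertools import product

def mat_mul(A, B):
    """2x2 matrix product over F_2."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def satisfies_power_identity(A, max_n=10):
    """Does A satisfy A^n = A for some 1 < n <= max_n?  (max_n = 10 suffices here,
    since GL_2(F_2) has order 6, so every invertible A satisfies A^7 = A.)"""
    P = A
    for _ in range(2, max_n + 1):
        P = mat_mul(P, A)
        if P == A:
            return True
    return False

matrices = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
failures = [A for A in matrices if not satisfies_power_identity(A)]
# Expected output: the three nonzero nilpotent matrices, e.g. ((0, 1), (0, 0)),
# for which A^2 = 0 != A; every other element of M_2(F_2) satisfies some A^n = A.
print(failures)
```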
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9440668821334839, "perplexity": 204.48909926930187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645241661.64/warc/CC-MAIN-20150827031401-00327-ip-10-171-96-226.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/38480/how-much-does-symbolic-integration-mean-to-mathematics
# How much does symbolic integration mean to mathematics? (Before reading, I apologize for my poor English ability.) I have enjoyed calculating some symbolic integrals as a hobby, and this has been one of the main sources of my interest in the vast world of mathematics. For instance, the integral below $$\int_0^{\frac{\pi}{2}} \arctan (1 - \sin^2 x \; \cos^2 x) \,\mathrm dx = \pi \left( \frac{\pi}{4} - \arctan \sqrt{\frac{\sqrt{2} - 1}{2}} \right).$$ is what I succeeded in calculating today. But recently, as I learn more advanced fields, it seems to me that symbolic integration is of no use in most fields of mathematics. For example, in analysis, where integration originally stems from, people now seem to be interested only in performing numerical integration. One integrates in order to find the evolution of a compact hypersurface governed by mean curvature flow, to calculate a probabilistic outcome described by an Itô integral, or something like that. Then numerical calculation will be quite adequate for those problems. But it seems that few people are interested in finding an exact value for a symbolic integral. So this is my question: Is it true that problems related to symbolic integration have lost their attraction nowadays? Is there no field that seriously deals with symbolic calculation (including integration and summation) anymore? - Manipulationally, an analytic (closed) form would be terribly convenient, in the sense that we know so much about (special) functions and the identities they satisfy that we would like to be able to exploit that whole body of knowledge for the integral at hand. For numerical work, a closed form may or may not be the best thing to have, depending on the circumstances. –  J. M. May 11 '11 at 15:29 If you broaden the question, allowing multiparameter (in)definite integrals and sums, then the question is not about only (transcendental) constants but, more generally, about the utility of closed form functions solving differential / difference equations, i.e. special functions. Do you intend to ask only about special constants or, more generally, special functions? –  Bill Dubuque May 11 '11 at 15:52 @J.M.: Thanks for your comment. But if you won't mind, can you be a little more specific? –  sos440 May 11 '11 at 15:56 @Bill Dubuque: Oh, that's my point. I am also interested in special functions. –  sos440 May 11 '11 at 15:57 Since you find symbolic integration interesting, you may find useful the references I mention in this answer: math.stackexchange.com/questions/37088/integration-doubt/… –  Andres Caicedo May 12 '11 at 16:39 I think it would be appropriate at this point to quote Forman Acton: ...at a more difficult but less pernicious level we have the inefficiencies engendered by exact analytic integrations where a sensible approximation would give a simpler and more effective algorithm. Thus \begin{align*}\int_0^{0.3}\sin^8\theta\,\mathrm d\theta&=\left[\left(-\frac18\cos\,\theta\right)\left(\sin^4\theta+\frac76\sin^2\theta+\frac{35}{24}\right)\sin^3\theta+\frac{105}{384}\left(\theta-\sin\,\theta\cos\,\theta\right)\right]_0^{0.3}\\ &=(-0.119417)(0.007627+0.101887+1.458333)(0.0258085)+0.0048341\\ &=-0.0048320+0.0048341=0.0000021\end{align*} manages to compute a very small result as the difference between two much larger numbers. The crudest approximation for $\sin\,\theta$ will give $$\int_0^{0.3}\theta^8\,\mathrm d\theta=\frac19\left[\theta^9\right]_0^{0.3}=0.00000219$$ with considerably more potential accuracy and much less trouble. 
If several more figures are needed, a second term of the series may be kept. In a similar vein, if not too many figures are required, the quadrature $$\int_{0.45}^{0.55}\frac{\mathrm dx}{1+x^2}=\left[\tan^{-1}x\right]_{0.45}^{0.55}=0.502843-0.422854=0.079989\approx 0.0800$$ causes the computer to spend a lot of time evaluating two arctangents to get a result that would have been more expediently calculated as the product of the range of integration ($0.1$) by the value of the integrand at the midpoint ($0.8$). The expenditure of time for the two calculations is roughly ten to one. For more accurate quadrature, Simpson's rule would still be more efficient than the arctangent evaluations, nor would it lose a significant figure by subtraction. The student that worships at the altars of Classical Mathematics should really be warned that his rites frequently have quite oblique connections with the external world. It may very well be that choosing the closed form approach would still end up with you having to (implicitly) perform a quadrature anyway; for instance, one efficient method for numerically evaluating the zeroth-order Bessel function of the first kind $J_0(x)$ uses the trapezoidal rule! On the other hand, there are also situations where the closed form might be better for computational purposes. The usual examples are the complete elliptic integrals $K(m)$ and $E(m)$; both are more efficiently computed via the arithmetic-geometric mean than by using a numerical quadrature method. But, as I said in the comments, for manipulational work, possessing a closed form for your integral is powerful stuff; there is a whole body of results that are now conveniently at your disposal once you have a closed form at hand. Think of it as "standing on the shoulders of giants". In short, again, "it depends on the situation and the terrain". - +1 for retyping Acton into $\TeX$. –  lhf May 11 '11 at 17:29 For the original problem, things are similar. After realizing that $1-\sin^2 x\, \cos^2 x = [7+\cos(4x)]/8$, i.e., $7/8$ plus something oscillating, one is tempted to replace the integrand by $\arctan(7/8)$. Then one gets $\pi \arctan(7/8)/2 \approx 1.129$ for the integral which coincides with the exact expression $1.126$ up to $3\times 10^{-3}$. I don't know how much time the OP spent on his answer, but Pareto's principle seems to apply. –  Fabian May 11 '11 at 20:36 Thanks to everyone, especially J.M., for giving insightful answers. It's really convincing that the distinction between the numerical and the symbolic approach depends not on a particular classification of research area, but rather on the situation. –  sos440 May 12 '11 at 10:24 @sos440: You're very much welcome. :) Don't let my answer deter you from the fun you seem to have in teasing out analytic solutions; what I'm merely saying is that in real-world applications, one must eventually develop a "feel" for choosing the "right tool for the job". –  J. M. May 12 '11 at 10:28 I don't think your point of view is the right one. To compute an integral analytically and to compute an integral numerically are different things. A numerical analysis professor of mine once said that, in applications (engineering, physics...) it is often more convenient to directly evaluate integrals by numerical means, even if they are integrable analytically! For example, suppose that you need $$\int_{0}^{\frac{\pi}{2}} \arctan (1 - \sin^2 x \; \cos^2 x) \,\mathrm dx$$ meters of conducting wire. You make a phone call to the wire factory and ask for what? 
For $\pi \left( \frac{\pi}{4} - \arctan \sqrt{\frac{\sqrt{2} - 1}{2}} \right)$ meters of wire? More realistically you will ask for something like $1.13$ meters of wire. To obtain this number $1.13$ you performed an approximation over the non-rational quantity $\pi \left( \frac{\pi}{4} - \arctan \sqrt{\frac{\sqrt{2} - 1}{2}} \right)$. In doing so you wasted information. It would have been more convenient (and, maybe, even more accurate) to perform this approximation on the first integral directly, that is, to evaluate it numerically. Of course this does not render analytical methods useless. You could have a family of integrals depending on a parameter, for example. Numerical methods tell you nothing here. You could run across an integral in the middle of a proof, and need its exact value for theoretical purposes. The possibilities are countless. - Also, if you don't need that much accuracy (which is often the case in applications), even the simple-minded methods like trapezoidal or Simpson's might end up being faster to compute with than having to use a special routine for some exotic transcendental. –  J. M. May 11 '11 at 16:04 In short, if I may borrow the usual piece of military wisdom: "it depends on the situation and the terrain". –  J. M. May 11 '11 at 16:12 Surely you mean something like 1.126 instead of 1.49... Also I'm curious if it is sure or just very likely that this number is not rational, as you state. –  Myself May 12 '11 at 2:58 @Myself: Oh yes, sorry about that, I must have made some mistake in typing that number into Maple. Regarding the irrationality of $\pi \left( \frac{\pi}{4} - \arctan \sqrt{\frac{\sqrt{2} - 1}{2}} \right)$, I haven't checked it. Looks like a safe bet, though... Don't you agree? –  Giuseppe Negro May 12 '11 at 17:26 Yes, I completely agree. But for all I know these things are typically hard to prove, so I thought perhaps I had missed something. –  Myself May 12 '11 at 17:33 If you're talking about practical engineering applications, then really only numerical approximations are used (and studied in computer science as 'numerical analysis' or more recently 'scientific computing'). As to an academic mathematical field nowadays that deals with symbolic integration, first some perspective. Newton/Leibniz invented integral calculus in ...hm... the late 1600's, and it was popularized (as much as you can say that) in the 1700's. Some basic symbolic integration even occurred (without that name and system) before then. So let's just say there's been at least 300 years of work there. Also, there's more to inverting derivatives than just integrals. Solving systems of partial differential equations seems to be the big thing (both numerically and symbolically) for almost as long as simple single variable integrals. That said, there is a small academic group of people working in 'symbolic computation' (with their own journals), and one subarea is symbolic integration. There are proofs of impossibility (i.e. proving that given certain restrictions there is no 'closed form' for a particular integral), and there are algorithms for computing integrals given certain other restrictions (the Risch algorithm). The latter are often implemented in computer algebra packages (Mathematica, Maple, etc.). There is surely room for solving particular integrals (though there don't seem to be many integrals in the AMM Problems section) and for finding patterns. I'd look at those journals to see what particular interest there is for integrals. 
- Back in the day, SIAM Review used to maintain a "problems column" where people submitted various requests for simple proofs, and yes, integrals/sums/whatever to evaluate. With the computing environments available now, I suppose there is less reason to submit those sorts of problems. –  J. M. May 11 '11 at 17:25 @J.M.: It's been a longstanding (15 years?) challenge by Doron Zeilberger that he (and his computer) can automatically solve any summation sent in to the AMM Problems section. Surely there are summations (and integrals) that are people-solvable and currently not computer-solvable, but as the technology progresses, these will be harder for people to come by. –  Mitch May 11 '11 at 17:44 I would also note that the symbolic output of current computing environments may well be less than optimal, and some further human massaging may be needed. For instance, Mathematica sucks at producing optimal elliptic integral expressions... –  J. M. May 12 '11 at 1:24
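To tie this back to the integral in the question, here is a small numerical comparison (not part of the original answers) of the closed form against a direct quadrature, using scipy's general-purpose `quad` routine:

```python
import numpy as np
from scipy.integrate import quad

# Direct numerical quadrature of the integral from the question ...
f = lambda x: np.arctan(1 - (np.sin(x) * np.cos(x)) ** 2)
numeric, err = quad(f, 0, np.pi / 2)

# ... versus the closed form given there.
closed = np.pi * (np.pi / 4 - np.arctan(np.sqrt((np.sqrt(2) - 1) / 2)))

print(numeric, closed, abs(numeric - closed))   # both are about 1.1257
```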
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8568170070648193, "perplexity": 769.3314789172158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500833115.44/warc/CC-MAIN-20140820021353-00183-ip-10-180-136-8.ec2.internal.warc.gz"}
https://planetmath.org/boundsfore
# bounds for e If $n$ and $m$ are positive integers and $n>m$, we have the following inequality: $\left(1+{1\over n}\right)^{n}<{n\over n+1}\left(1+{1\over m}\right)^{m+1}$ Taking the limit as $n\to\infty$, we obtain an upper bound for $e$. Combining this with the fact that $(1+1/n)^{n}$ is an increasing sequence, we have the following bounds for $e$: $\left(1+{1\over m}\right)^{m}<e<\left(1+{1\over m}\right)^{m+1}$ This can be used to show that $e$ is not an integer – if we take $m=5$, we obtain $2.48832<e<2.985984$, for instance.

Title: bounds for e | Canonical name: BoundsForE | Date of creation: 2013-03-22 15:48:48 | Last modified on: 2013-03-22 15:48:48 | Owner: rspuzio (6075) | Last modified by: rspuzio (6075) | Numerical id: 7 | Author: rspuzio (6075) | Entry type: Theorem | Classification: msc 33B99
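A quick numerical check of the bounds at $m=5$ (not part of the PlanetMath entry), using exact fractions for the two sides:

```python
from fractions import Fraction as F
import math

m = 5
lower = F(m + 1, m) ** m        # (6/5)^5 = 2.48832
upper = F(m + 1, m) ** (m + 1)  # (6/5)^6 = 2.985984

print(float(lower), math.e, float(upper))
assert float(lower) < math.e < float(upper)   # so 2 < e < 3, hence e is not an integer
```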
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 12, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.997248649597168, "perplexity": 498.11718518846254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737172.50/warc/CC-MAIN-20200807083754-20200807113754-00562.warc.gz"}
https://indico.cern.ch/event/292366/
Workshop on the determination of centrality in pA collisions at the LHC February 14, 2014, CERN The analysis of pPb collisions allows for the study of cold nuclear effects in order to establish a baseline for the interpretation of heavy ion results. Nuclear modification factors in minimum bias pPb collisions with respect to the pp reference have been measured at the LHC for a large variety of probes, ranging from inclusive hadrons to heavy flavors and jets. To measure the centrality dependence of these observables, the binary scaling factors (Ncoll) between pp and pPb have to be determined for each centrality class. Since the pA@LHC workshop organized in June 2012, ahead of the first pPb data taking at the LHC, it has become clear that, due to the looser correlation between centrality estimators and Ncoll, centrality determination in pPb is a delicate task. Kinematic biases on the centrality dependent observables cannot be excluded and need special attention. Two recent conferences, IS2013 and Hard Probes 2013, had discussion sessions on centrality determination in p(d)A collisions as part of their programs, demonstrating the importance of this topic. These discussions have not yet come to a conclusion. The procedures developed by the LHC experiments are quite different, while first preliminary results on centrality dependent nuclear modification factors have already been presented. The aim of this Workshop is to start a more formal discussion, to be carried on by the studies of this inter-experiment Working Group, with the goal of formulating a common approach to the definition, determination and use of centrality in pA events. Working Group conveners: ALICE: Andreas Morsch (chair) ATLAS: Brian Cole and Dennis Perepelitsa CMS: Shengquan Tuo LHCb: Michael Schmelling and Burkhard Schmidt
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9622011184692383, "perplexity": 2246.970208095046}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103624904.34/warc/CC-MAIN-20220629054527-20220629084527-00592.warc.gz"}
http://math.stackexchange.com/questions/411066/typo-in-crandall-and-pomerance-pp-121
Typo in Crandall and Pomerance p. 121

In Crandall and Pomerance, "Prime Numbers: A Computational Perspective", Second Ed., p. 121, just before Eq. (3.1), it says: "The number of steps in the sieve of Eratosthenes is proportional to $\sum_{p\leq N} N/p$, where $p$ runs over primes." However, in the sieve of Eratosthenes one only sieves with primes $p \leq \sqrt{N}$, so I would say the sum should be $\sum_{p\leq \sqrt{N}} N/p$. Now, since $\ln\ln\sqrt{N} = \ln\ln N - \ln 2$, the two sums agree up to a term of order $N$, so for the purposes of the complexity estimate in the book it makes no difference. Nevertheless, I'd say this constitutes a typo.
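As a concrete sanity check (my own sketch, not from the book), the following counts the marking steps the sieve actually performs and compares them with $\sum_{p\le\sqrt N} N/p$; both counts grow at the same rate, of order $N\ln\ln N$:

```
# Count the crossing-off steps of the sieve of Eratosthenes and compare
# with sum of N//p over primes p <= sqrt(N).
def sieve_step_count(N):
    is_composite = [False] * (N + 1)
    steps = 0
    p = 2
    while p * p <= N:
        if not is_composite[p]:
            for multiple in range(p * p, N + 1, p):
                is_composite[multiple] = True
                steps += 1
        p += 1
    primes_le_sqrt = [q for q in range(2, int(N**0.5) + 1) if not is_composite[q]]
    approx = sum(N // q for q in primes_le_sqrt)
    return steps, approx

if __name__ == "__main__":
    print(sieve_step_count(10**6))  # the two numbers are of the same order
```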
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9930490851402283, "perplexity": 195.0584138061298}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
http://mathhelpforum.com/geometry/12988-help-plz.html
# Math Help - help plz

1. ## help plz

Can someone help me solve this problem? In a circle whose diameter is 20 inches, a chord is 6 inches from the center. What is the length of the chord? I do not understand this problem. Thank you for the help.

2. Originally Posted by JayJay1206
Can someone help me solve this problem? In a circle whose diameter is 20 inches, a chord is 6 inches from the center. What is the length of the chord? ...

Hello, JayJay, I've attached an image of the situation: length of chord = 2s. Use the Pythagorean theorem: r² = s² + 6², with r = 10'' because r is half the diameter: 100 = s² + 36 ===> s² = 64 ===> s = 8''. Therefore the length of the chord is 16''.

EB
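A quick numerical check of the same computation (my own addition, not from the thread):

```
# Chord half-length s from r^2 = s^2 + d^2, with r = 10 in and d = 6 in.
import math
r, d = 10.0, 6.0
s = math.sqrt(r**2 - d**2)
print(2 * s)  # 16.0 inches
```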
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8838316798210144, "perplexity": 1034.6911000843272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999651148/warc/CC-MAIN-20140305060731-00050-ip-10-183-142-35.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/28948/positive-permutation-tensor?answertab=oldest
# Positive Permutation Tensor I have seen that one can make an operator with $$L^i=\epsilon^{ijk}x_{j}\partial_{k}$$ But what if I want to make instead items that are sums, instead of differences. For instance $$\mathcal{L}^z=x\partial_y +y\partial_x$$ Is there an object like $|\epsilon_{ijk}|$ that has only 1s (no -1s)? Thanks, - The short answer is No. The difference between the right $\epsilon_{abc}$ that changes the sign under transpositions – the antisymmetric one – and the wrong $\epsilon_{abc}$ that doesn't – yours – is that the right one is invariant under rotations while yours is not. The right epsilon tensor is a privileged invariant tensor; yours is just a random tensor, a random collection of components. One may demonstrate it in this way. The $\epsilon$ symbol is a set of coefficients that may be used to construct a single quantity – scalar $Q$ – out of three vectors $S,T,U$: $$Q = \sum_{a,b,c=1}^3 \epsilon_{abc} S_a T_b U_c$$ If you think about it, $Q=(S\times T)\cdot U$ which is the inner product of a vector and a cross product of two other vectors. Its value is given purely geometrically; it doesn't depend on the coordinates. This $Q$ may also be interpreted as the determinant of a matrix with vectors $S,T,U$ as its rows (or columns). If you replaced the mixed-sign $\epsilon$ tensor by the "purely positive ones", the determinant would change to the sum over permutations without the sign and it wouldn't be rotationally invariant. Consequently, all objects constructed out of your version of $\epsilon$ would depend on the choice of the coordinates. For example, consider $S=e_x$, $T=e_y$, $U=e_z$, unit vectors in the direction of the axes. You get $Q=1$. But you may also rotate $S,T,U$ by 90 degrees around the $z$-axis. Then you will have $S=e_y$, $T=-e_x$, $U=e_z$. The $Q$ constructed out of these vectors will still give $Q=1$ but yours would change to $Q=-1$ so it isn't rotationally symmetric. All tensors that are symmetric under rotations are polynomials in $\epsilon_{abc}$ and $\delta_{ab}$. -
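A small numerical illustration of this argument (my own sketch, not part of the answer): contract three basis vectors with the antisymmetric symbol and with the all-positive symbol $|\epsilon_{ijk}|$, before and after a 90-degree rotation about the $z$-axis. The antisymmetric contraction is unchanged; the all-positive one flips sign.

```
import numpy as np
from itertools import permutations

def perm_sign(p):
    # parity of a permutation of (0, 1, 2)
    s = 1
    q = list(p)
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            if q[i] > q[j]:
                s = -s
    return s

def contract(S, T, U, signed=True):
    # Q = sum_{a,b,c} eps_{abc} S_a T_b U_c, with eps either antisymmetric or "all +1"
    total = 0.0
    for p in permutations(range(3)):
        a, b, c = p
        total += (perm_sign(p) if signed else 1) * S[a] * T[b] * U[c]
    return total

R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90-degree rotation about z
S, T, U = np.eye(3)  # e_x, e_y, e_z
for label, signed in [("antisymmetric", True), ("all +1", False)]:
    print(label, contract(S, T, U, signed), contract(R @ S, R @ T, R @ U, signed))
# antisymmetric: 1.0 and 1.0 (rotation invariant); all +1: 1.0 and -1.0 (not invariant)
```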
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8513155579566956, "perplexity": 117.94377551185401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802771716.117/warc/CC-MAIN-20141217075251-00019-ip-10-231-17-201.ec2.internal.warc.gz"}
http://www.dummies.com/how-to/content/realworld-signals-and-systems-case-solving-the-dac.html
The zero-order-hold (ZOH), which is inherent in many digital-to-analog converters (DACs), holds the analog output constant between samples. The action of the ZOH introduces frequency droop, a roll-off of the effective DAC frequency response on the frequency interval zero to one-half the sampling rate fs, in reconstructing y(t) from y[n]. Two possible responses are to

• Apply an inverse sinc function shaping filter in the continuous-time domain.
• Correct for the droop before the signal emerges from the DAC.

The system block diagram is shown here. Imagine that a senior engineer asks you to investigate the effectiveness of the simple infinite impulse response (IIR) and finite impulse response (FIR) digital filters as a way to mitigate ZOH frequency droop. You need to verify just how well these filters really work.

The filter system functions, as implemented in the code below, are H_FIR(z) = (-1 + 18z^-1 - z^-2)/16 and H_IIR(z) = -(9/8)/(1 + (1/8)z^-1).

To solve this problem, you need to use the frequency-domain relationship from the discrete- to continuous-time domains. The relationship, relative to the notation of the figure, is You can assume that the analog reconstruction filter removes signal spectra beyond fs/2. The frequency response of interest turns out to be the cascade of Follow these steps to justify this outcome:

1. Let From the convolution theorem for frequency spectra in the discrete-time domain, get
2. Use the discrete-to-continuous spectra relationship to discover that the output side of the DAC is
3. Use the convolution theorem for frequency spectra in the continuous-time domain to push the DAC output spectra through the ZOH filter: The cascade result is now established.

To view the equivalent frequency response for this problem in the discrete-time domain, you just need to change variables according to the sampling theory. Rearranging the variables in the cascade result viewed from the discrete-time domain perspective is The ZOH frequency response is Putting the pieces together and considering only the magnitude response reveals this equation:

To verify the performance, evaluate the sinc function and the FIR responses by using the SciPy signal.freqz() function approach of the frequency domain recipe. Check out the results in the following figure.

```
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

w = np.linspace(0, np.pi, 400)
H_ZOH_T = np.sinc(w / (2 * np.pi))                           # ZOH droop (normalized gain)
w, H_FIR = signal.freqz(np.array([-1, 18, -1]) / 16., 1, w)  # FIR droop corrector
w, H_IIR = signal.freqz([-9 / 8.], [1, 1 / 8.], w)           # IIR droop corrector
plt.plot(w / (2 * np.pi), 20 * np.log10(abs(H_ZOH_T)))
# other plot command lines are similar
plt.plot(w / (2 * np.pi), 20 * np.log10(abs(H_FIR) * abs(H_ZOH_T)))
```

These results are quite impressive for such simple correction filters. The goal is to get flatness that's near 0 dB from 0 to π rad/sample (0 to 0.5 normalized). The response is flat to within 0.5 dB out to 0.4 rad/sample for the IIR filter; it's a little worse for the FIR filter.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599238991737366, "perplexity": 1392.4867040520674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461863352151.52/warc/CC-MAIN-20160428170912-00058-ip-10-239-7-51.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/151939/angles-in-chemfig
# Angles in Chemfig I want to draw this formula using chemfig: This is what I've got so far: \chemfig{Cu^+(-[:180]N**6(-**6(------)--(-OOC-)---))(-[1]N)(-[5]N)(-[7]N)} I don't get the angle from N to Cu+ right. I tried to rotate parts of the formula, but there is always at least one angle that does not fit. Any idea? - Welcome to TeX.SX! – strpeter Dec 31 '13 at 15:34 Yes, I have an idea: \documentclass{article} \usepackage{chemfig} \begin{document} \definesubmol\cc{**6(---!\ff-!\ee--)} \definesubmol\dd{**6(-!\ee--!\ff---)} \definesubmol\ee{**6(-----)} \definesubmol\ff[(-^{-}OOC)]{(-COO^{-})} \chemfig{Cu^+(-[1]N([0,.5]!\cc))(-[3]N([:180,.5]!\dd))(-[5]N([:180,.5]!\cc))(-[7]N([:0,.5]!\dd))} \end{document} - The following code is similar to what unbonpetit wrote. The term .5 in [0,.5] and [180,.5] is chosen such that the aromatic cycles do not get too big. \documentclass{standalone} \usepackage{chemfig} \begin{document} \chemfig{Cu^+ (-[:+ 45]N([0,.5]**6(---(-COO^{-})-**6(------)--))) (-[:+135]N([180,.5]**6(-**6(------)--(-^{-}OOC)---))) (-[:+225]N([180,.5]**6(---(-^{-}OOC)-**6(------)--))) (-[:+315]N([0,.5]**6(-**6(------)--(-COO^{-})---)))} \end{document} - @unbonpetit: I don't see the point... – strpeter Dec 31 '13 at 16:28 It is not exactly the same since your inner ring has 6 bonds instead of 5. Moreover, your code is not more or less explicit than mine, even after several edits. – unbonpetit Dec 31 '13 at 16:29 I must admit that the term "explicit" is misleading. – strpeter Dec 31 '13 at 16:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8837473392486572, "perplexity": 1045.9346414761987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160950.71/warc/CC-MAIN-20160205193920-00337-ip-10-236-182-209.ec2.internal.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=2020_AMC_10A_Problems/Problem_6&diff=126660&oldid=119734
# Difference between revisions of "2020 AMC 10A Problems/Problem 6"

The following problem is from both the 2020 AMC 12A #4 and 2020 AMC 10A #6, so both problems redirect to this page.

## Problem

How many 4-digit positive integers (that is, integers between 1000 and 9999, inclusive) having only even digits are divisible by 5?

## Solution 1

The ones digit, for all numbers divisible by 5, must be either 0 or 5. However, from the restriction in the problem, it must be even, giving us exactly one choice (0) for this digit. For the middle two digits, we may choose any even digit from {0, 2, 4, 6, 8}, meaning that we have 5 · 5 = 25 total options. For the first digit, we follow similar intuition but realize that it cannot be 0, hence giving us 4 possibilities. Therefore, using the multiplication rule, we get 4 · 25 · 1 = 100.

~ciceronii swrebby

## Solution 2

The ones digit, for all the numbers that have to be divisible by 5, must be a 0 or a 5. Since the problem states that we can only use even digits, the last digit must be 0. From there, there are no other restrictions, since the divisibility rule for 5 only constrains the last digit. So there are 4 even-digit options for the first digit, then 5 for each of the middle 2. So we have 4 · 5 · 5 · 1 = 100.

~bobthefam ~IceMatrix
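A brute-force enumeration (my own addition, not on the wiki page) confirms the count:

```
# Count 4-digit numbers divisible by 5 whose digits are all even.
count = sum(1 for n in range(1000, 10000)
            if n % 5 == 0 and all(int(d) % 2 == 0 for d in str(n)))
print(count)  # 100
```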
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8386790752410889, "perplexity": 543.1570628752296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571993.68/warc/CC-MAIN-20220814022847-20220814052847-00066.warc.gz"}
http://www.gradesaver.com/aristotles-metaphysics/q-and-a/work-physics-help-135690
# WORK PHYSICS HELP!

A 2.6 kg wagon moves in a straight line on a frictionless horizontal surface with an initial velocity of 3.0 m/s. It is then pushed for 4.0 m by a force of 2.5 N in the same direction as the initial velocity.

a) Use energy techniques to determine the final velocity.
b) Check the answer to part a) using kinematics.

##### Answers 1

Hey sorry, but this is a literature site. If I tried to help you, I'd be wrong.
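For what it's worth, here is a quick worked check of both parts (my own sketch; it is not part of the original thread):

```
# Work-energy theorem, then kinematics, for the wagon problem above.
import math
m, v0, d, F = 2.6, 3.0, 4.0, 2.5
# (a) energy: the work W = F*d adds to the initial kinetic energy
vf_energy = math.sqrt(v0**2 + 2 * F * d / m)
# (b) kinematics: a = F/m and vf^2 = v0^2 + 2*a*d give the same expression
vf_kinematics = math.sqrt(v0**2 + 2 * (F / m) * d)
print(vf_energy, vf_kinematics)  # both ~4.1 m/s
```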
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9623492360115051, "perplexity": 596.3203558923315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00494-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-7-exponential-and-logarithmic-functions-7-2-graph-exponential-decay-functions-7-2-exercises-skill-practice-page-490/28
## Algebra 2 (1st Edition) $f(x)=5(4)^{-x}=5\frac{1}{4^x}=5(\frac{1}{4})^x=5(0.25)^x=g(x)$ Thus they represent the same function.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9918275475502014, "perplexity": 4139.1263169701015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250610004.56/warc/CC-MAIN-20200123101110-20200123130110-00039.warc.gz"}
https://www.physicsforums.com/threads/calcluated-density-value-different-from-literature.465936/
# Calculated density value different from literature

1. ### ana111790

1. The problem statement, all variables and given/known data
A blood pressure cuff is used to measure the gage pressure associated with blood flow in the body. "Normal" systolic blood pressure is commonly reported as 120 mm of mercury. This value represents the vertical displacement of mercury (h) resulting from the gage pressure within the device. (The density of mercury is ρ = 1.38 x 10^4 kg/m^3.)
a. Calculate the gage pressure within the device (in Pa) that corresponds to a vertical displacement of 120 mmHg.
b. The fluid in the device is replaced with a glycerin solution and the gage pressure from part a is applied. The displacement in the column corresponding to this gage pressure is 166 mm of glycerin. What is the density of this glycerin solution?

2. Relevant equations
Pgage = ρ*g*h

3. The attempt at a solution
a) Pgage = ρ*g*h = (1.38 x 10^4 kg/m^3)*(9.8 m/s^2)*(120 mm)*(1 m/1000 mm)
Pgage ≈ 16200 Pa = 16.2 kPa
b) ρ_glycerin = Pgage/(g*h) = 16200 Pa/[(9.8 m/s^2)*(166 mm)*(1 m/1000 mm)]
ρ_glycerin ≈ 9960 kg/m^3,
which is different from the density of glycerin in the literature (1250 kg/m^3). So I am wondering whether these calculations are right or if I am missing something. Thanks!

2. ### SammyS
Staff Emeritus

The height of the column is inversely proportional to the density of the fluid, so your answer appears to be consistent with the data given. I agree with you that the glycerin solution is unrealistically dense.
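A quick numerical check of both steps (my own addition, not part of the thread):

```
# Part (a): gage pressure from the mercury column; part (b): implied glycerin density.
rho_hg, g = 1.38e4, 9.8
p_gage = rho_hg * g * 0.120          # ~1.62e4 Pa
rho_glycerin = p_gage / (g * 0.166)  # ~9.96e3 kg/m^3
print(p_gage, rho_glycerin)
```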
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.892248272895813, "perplexity": 1361.858243874969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096579.89/warc/CC-MAIN-20150627031816-00148-ip-10-179-60-89.ec2.internal.warc.gz"}
http://www.conservapedia.com/Newton's_Laws_of_Motion
Newton's Laws of Motion

Isaac Newton's 3 laws of motion form the basis for classical mechanics. They are:

1) An object in motion will remain in motion in a straight line unless acted upon by an external force. An object at rest will remain at rest unless acted upon by an external force. (Or, alternatively, an object's velocity remains constant unless the object is acted upon by an external force.)

2) The rate of change of an object's momentum is equal to the net force acting on it ($\vec F = d{\vec p}/dt$, sometimes written as $\vec F = m \times \vec a$ when mass can be assumed to be constant).

3) For every action there is an equal and opposite reaction; or, more precisely, the total momentum of any isolated system is always constant.

Explanation

The first law defines an inertial frame of reference as one in which an object acted upon by no outside forces moves with constant velocity. In general, inertial frames are far easier to understand conceptually and deal with mathematically than accelerated frames.

The second law relates force and momentum. Mathematically, $\vec F = d{\vec p}/dt = d(m \times \vec v)/dt = m \times d{\vec v}/dt + \vec v \times dm/dt$. Usually $dm/dt = 0$, so the law is simplified to $\vec F = m \times d{\vec v}/dt = m \times \vec a$, or mass times acceleration. A notable exception is rocket motion, where $dm/dt$ is not 0, and so $\vec F = m \times \vec a$ does not apply. Note that the quantities $\vec F$, $\vec p$, $\vec v$, and $\vec a$ are all vector quantities, that is, they have an associated direction as well as a magnitude. In general, the second law gives a way to predict the motion of an object by summing all the forces acting on that object.

The third law states that momentum is always conserved. If one object imparts a momentum $\vec p_0$ on another, the first object's momentum will change by $-\vec p_0$. This can be viewed as a consequence of Noether's Theorem; the associated symmetry is that the laws of physics do not change under spatial translations (that is, the laws of physics are the same everywhere).
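As a small illustration of the conservation statement in the third law (my own example, not from the original article), here is a one-dimensional elastic collision in which the total momentum before and after is the same:

```
# 1-D elastic collision: total momentum m1*u1 + m2*u2 is conserved.
m1, m2 = 2.0, 3.0      # masses in kg (arbitrary example values)
u1, u2 = 4.0, -1.0     # initial velocities in m/s
v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
print(m1 * u1 + m2 * u2, m1 * v1 + m2 * v2)  # both 5.0 kg*m/s
```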
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641661643981934, "perplexity": 162.11478235114382}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898779.5/warc/CC-MAIN-20141030025818-00176-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/vacuum-impedance.128081/
# Vacuum impedance

1. Aug 5, 2006
### Kolahal Bhattacharya
The vacuum impedance, as our professor writes on the blackboard, equals 377 ohm. What is the physical origin of this impedance?

2. Aug 5, 2006
### J Hann
This value arises from the plane-wave solution of Maxwell's equations. The ratio E/H in a plane wave in free space is equal to the square root of the ratio of the magnetic permeability (mu) to the electric permittivity (epsilon). This quantity has the dimensions of ohms, is called the characteristic impedance of free space, and has a value of approximately 376.7 ohms.
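A one-line numerical check of Z0 = sqrt(mu0/eps0) (my own addition, not from the thread):

```
import math
mu0 = 4 * math.pi * 1e-7        # H/m (pre-2019 defined value)
eps0 = 8.8541878128e-12         # F/m
print(math.sqrt(mu0 / eps0))    # ~376.73 ohms
```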
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9557135105133057, "perplexity": 2119.9460352181745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00025-ip-10-171-10-108.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/390299/what-are-the-advantages-to-the-path-integral-formulation-of-non-relativistic-qua
# What are the advantages of the path integral formulation of non-relativistic quantum mechanics?

When I first learned quantum mechanics, almost everything was in terms of wave functions or matrix mechanics, not path integrals. Not having learned much about path integrals beyond some brief reading, I am struggling to see their benefits or the motivation behind them, beyond a desire for an approach based on the action/Lagrangian. It seems that some problems, like the hydrogen atom, would even be more difficult in that formulation. However, I would expect there to be some, probably significant, advantages to using path integrals in certain situations. What are the advantages (and disadvantages) of the path integral formulation compared to other approaches?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8923920392990112, "perplexity": 346.42516147761967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.6/warc/CC-MAIN-20211201143545-20211201173545-00443.warc.gz"}
http://math.stackexchange.com/questions/107519/location-of-a-root-of-a-cubic-polynomial
# Location of a root of a cubic polynomial

For $\alpha\in(0,\frac12)$, $\beta\in(0,\infty)$, $N\in\mathbb N\backslash\{0\}$ and $n\in\{0,\ldots,N\}$, how can I prove that exactly one zero of the cubic polynomial $$(N+2\beta)x^3-(N+n+3\beta)x^2+(n+\beta+N\alpha-N\alpha^2)x+n\alpha^2-n\alpha$$ lies in $[\alpha,1-\alpha]$?

- I would evaluate the polynomial at $\alpha$ and $1-\alpha$, hoping to find different signs, and then I would compute the discriminant, hoping to find zero. Did you try? – Giovanni De Gaetano Feb 9 '12 at 17:30

Evaluating at $\alpha$, we have: \begin{align*} f(\alpha) &= (N+2\beta)\alpha^3 - (N+n+3\beta)\alpha^2 + (n+\beta+N\alpha-N\alpha^2)\alpha + n\alpha^2 - n\alpha\\ &= (N+2\beta - N)\alpha^3 + (-N-n-3\beta+N+n)\alpha^2 + (n+\beta-n)\alpha\\ &= 2\beta\alpha^3 - 3\beta\alpha^2 + \beta\alpha\\ &= \alpha\beta(2\alpha^2 -3\alpha + 1). \end{align*} Evaluating at $1-\alpha$ gives \begin{align*} f(1-\alpha) &= (N+2\beta)(1-3\alpha + 3\alpha^2-\alpha^3) - (N+n+3\beta)(1-2\alpha+\alpha^2)\\ &\qquad \mathop{+} (n+\beta+N\alpha-N\alpha^2)(1-\alpha) + n\alpha^2 - n\alpha\\ &= (-N-2\beta +N)\alpha^3 + (3N+6\beta - N-n-3\beta -N-N+n)\alpha^2\\ &\qquad \mathop{+}(-3N-6\beta+2N+2n+6\beta-n-\beta+N-n)\alpha\\ &\qquad \mathop{+} (N+2\beta-N-n-3\beta+n+\beta)\\ &= -2\beta\alpha^3 +3\beta\alpha^2-\beta\alpha\\ &= -\alpha\beta(2\alpha^2 - 3\alpha + 1). \end{align*} So, unless $2\alpha^2-3\alpha+1$ is $0$, the two values have opposite signs. But the roots of $2x^2-3x+1$ are $1$ and $\frac{1}{2}$, so $\alpha$ cannot be a root. Thus, there is at least one root for the polynomial in $[\alpha,1-\alpha]$ (in fact, in $(\alpha,1-\alpha)$). Since $f(x)$ has opposite signs on $\alpha$ and on $1-\alpha$, if $f(x)$ has more than one (distinct) root on $[\alpha,1-\alpha]$, then it must have three distinct roots in the interval (why?). Can all three roots be in that interval?

Great! The discriminant is identically zero, so there are at most two distinct roots. But if there were two distinct roots in $(\alpha,1-\alpha)$, it could not change sign. So there is exactly one root in the interval. – Chris Ferrie Feb 9 '12 at 19:19
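As a numerical spot-check of the claim (my own addition, not from the answer; the parameter choices below are arbitrary examples satisfying the stated constraints), each case yields exactly one root in $[\alpha,1-\alpha]$:

```
import numpy as np

def roots_in_interval(alpha, beta, N, n):
    coeffs = [N + 2 * beta,
              -(N + n + 3 * beta),
              n + beta + N * alpha - N * alpha**2,
              n * alpha**2 - n * alpha]
    r = np.roots(coeffs)
    real = r[np.abs(r.imag) < 1e-9].real
    return [x for x in real if alpha <= x <= 1 - alpha]

for alpha, beta, N, n in [(0.1, 2.0, 5, 3), (0.4, 0.5, 7, 0), (0.25, 1.0, 4, 4)]:
    print(alpha, beta, N, n, roots_in_interval(alpha, beta, N, n))  # one root each
```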
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997289776802063, "perplexity": 725.2726727786944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096991.38/warc/CC-MAIN-20150627031816-00094-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/problems-with-integration-contours-and-residues.226590/
# Problems with integration contours and residues

1. Apr 4, 2008
### nullus 1er
1. Hi there, I have problems finding the correct results for these integrals. I have to say I'm not a great expert in residues...
2. Int[-inf;+inf; exp(-x^2)/(z-x)], variable is x, constant is z
   Int[-inf;0; exp(x)/(z-x)] with z<0
3. I think the 1st one gives 2i pi exp(-z^2) and the second one 2i pi exp(-z) + something

2. Apr 4, 2008
### Avodyne
These are not simple integrals that can be done by residues. The first yields an error function and the second an incomplete gamma function.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9600264430046082, "perplexity": 2316.8294686857216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720154.20/warc/CC-MAIN-20161020183840-00350-ip-10-171-6-4.ec2.internal.warc.gz"}
https://topospaces.subwiki.org/w/index.php?title=Homotopy_type_of_connected_sum_depends_on_choice_of_gluing_map&diff=cur&oldid=3903
# Difference between revisions of "Homotopy type of connected sum depends on choice of gluing map" ## Statement It is possible to find an example of compact connected orientable manifolds $M_1$ and $M_2$ such that the homotopy type of the connected sum $M_1 \# M_2$ is not well defined, i.e., we can get connected sums of different homotopy types depending on the choice of the gluing map. ## Facts used 1. Complex projective space has orientation-reversing self-homeomorphism iff it has odd complex dimension ## Proof To construct an example, we need to find a case where both $M_1$ and $M_2$ are orientable but neither of them has an orientation-reversing self-homeomorphism. One simple choice, by Fact (1), is to set both $M_1$ and $M_2$ as homeomorphic to the complex projective plane $\mathbb{P}^2(\mathbb{C})$ which has real dimension 4. There are two possible connected sums: • Connected sum of two complex projective planes with same orientation: This has cohomology ring isomorphic to $\mathbb{Z}[x,y]/(x^2 - y^2,xy,x^3,y^3)$, where $x,y$ are additive generators of the free abelian group $H^2$ and $x^2 = y^2$ is the additive generator for $H^4$. • Connected sum of two complex projective planes with opposite orientation: This has cohomology ring isomorphic to $\mathbb{Z}[x,y]/(x^2 + y^2,xy,x^3,y^3)$, where $x,y$ are additive generators of the free abelian group $H^2$ and $x^2 = -y^2$ is the additive generator for $H^4$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9326461553573608, "perplexity": 176.20761625220996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00606.warc.gz"}
https://data.mendeley.com/research-data/?page=0&type=SLIDES&type=IMAGE&type=OTHER&search=qubit%20oscillator%20frequency
Filter Results 53376 results We see from Eq. ( e62) that for non-interacting qubits, the non-vanishing qubit bias just shifts the frequency position of the liner peaks ( e57) without qualitatively changing their shape. If both the bias and the qubit-qubit interaction are finite, the bias splits each of the linear peaks in two simple Lorentzians bringing the total number of the finite-frequency peaks in the spectrum of the detector output to six as it should be in the generic situation (see, e.g., Fig.  fig3).... Output spectra of the non-linear detector measuring two different unbiased qubits. Solid line is the spectrum in the case of non-interacting qubits. The two larger peaks are the “linear” peaks that correspond to the oscillations in the individual qubits, while smaller peaks are non-linear peaks at the combination frequencies. Dashed line is the spectrum for interacting qubits. Interaction shifts the lower-frequency liner peak down and all other peaks up in frequency. Parameters of the detector-qubit coupling are: δ 1 = 0.12 t 0 , δ 2 = 0.09 t 0 , λ = 0.08 t 0 .... Finite qubit bias should lead to averaging of the two spectra S I ± ( e27) similar to that discussed in the case of non-interacting qubits and illustrated in Fig.  fig4.... The two spectral densities ( e20) correspond to two possible outcomes of measurement: the qubits found in one or the other subspace D ± , the probability of the outcomes being determined by the initial state of the qubits. Each of the spectral densities coincides with the spectral density of the linear detector measuring coherent oscillations in one qubit . Similarly to that case, the maximum of the ratio of the oscillation peak versus noise S 0 for each spectrum S I ± ω is 4. As one can see from Eq. ( e20), this maximum is reached when the measurement is weak: | λ | ≪ | t 0 | , and the detector is “ideal”: arg t 0 λ * =0, and only Γ + or Γ - is non-vanishing. If, however, there is small but finite transition rate between the two subspaces that mixes the two outcomes of measurement, the peak height is reduced by averaging over the two spectral densities ( e20). This situation is illustrated in Fig.  fig4 which shows the output spectra of the purely quadratic detector, when the subspaces D ± are mixed by small qubit bias ε . Since the stationary density matrix ( e14) is equally distributed over all qubit states, the two peaks of the spectral densities ( e20) are mixed with equal probabilities, and the maximum of the ratio of the oscillation peak heights versus noise S 0 for the combined spectrum S I ω is 2. Spectrum shown in Fig.  fig4 for ε = 0.1 Δ 1 (solid line) is close to this limit.... An example of the output spectrum of the non-linear detector measuring unbiased qubits with different tunneling amplitudes is shown in Fig.  fig6. One can see that when the linear and non-linear coefficient of the detector-qubit coupling are roughly similar, the linear peaks are more pronounced than the peaks at combination frequencies. Qubit-qubit interaction shifts all but the lower-frequency linear peak up in frequency and reduces both the amplitudes of the higher-frequency peaks and the distance between them.... Evolution of the output spectrum of the non-linear detector measuring two identical unbiased qubits with the strength ν of the qubit-qubit interaction. The qubit-detector coupling constants δ 1 , 2 are taken to be slightly different to average the spectrum over all qubit states. The three solid curves correspond to ν / Δ = 0.0 , 0.1 , 0.2 . 
In agreement with Eqs. ( e42) – ( e44), the peak at ω ≃ Δ is at first suppressed and then split in two by increasing ν , while the peak at ω ≃ 2 Δ is not changed noticeably by such a weak interaction. Dashed and dotted lines show the regime of relatively strong interaction: ν / Δ = 0.5 and ν / Δ = 1.0 , respectively, that is described by Eqs. ( e46) and ( e47).... Figure fig5 illustrates evolution of the output spectrum of the non-linear detector measuring identical qubits due to changing interaction strength. We see that this evolution agrees with the analytical description developed above. Weak qubit-qubit interaction ν ≃ κ ≪ Δ suppresses and subsequently splits the spectral peak at ω ≃ Ω while not changing the peak ω ≃ 2 Ω . Stronger qubit-qubit interaction ν ≃ Δ ≫ κ shifts the ω ≃ 2 Ω peak to higher frequencies while moving the two peaks around ω ≃ Ω further apart.... Output spectrum of a nonlinear detector measuring two qubits with “the most general” set of parameters. Six peaks in the spectrum at finite frequencies correspond to six different energy intervals in the energy spectrum of the two-qubit system. The zero-frequency peak reflects dynamics of transitions between energy levels. Detector parameters are: δ 1 = 0.1 , δ 2 = 0.07 , λ = 0.09 (all normalized to t 0 ). In this Figure, and in all numerical plots below we take Γ + | t 0 | 2 = Δ 1 , Γ - = 0 , and assume that the detector tunneling amplitudes are real.... Diagram of a mesoscopic detector measuring two qubits. The qubits modulate amplitude t of tunneling of detector particles between the two reservoirs.... Output spectra of a purely quadratic detector measuring two non-interacting qubits. Small qubit bias ε 1 = ε 2 ≡ ε (solid line) creates transitions that lead to averaging of the two main peaks at combination frequencies Δ 1 ± Δ 2 [see Eq. ( e20)]. Further increase of ε (dashed line) makes additional spectral peaks associated with these transitions more pronounced. The strength of quadratic qubit-detector coupling is taken to be λ = 0.15 t 0 . Data Types: • Image (color online) Qubit’s final excited state probability P obtained from the semiclassical calculation as a function of temperature k B T and coupling strength g , both measured relative to the minimum qubit gap Δ . The different panels correspond to different values of the harmonic oscillator frequency: ℏ ω / Δ = 0.2 (top), 1 (middle) and 5 (bottom).... (color online) Energy level diagram of a coupled qubit-oscillator system with the qubit bias conditions varied according to the LZ protocol.... We can also see in Fig.  Fig:ExcitationProbability02 that for g / Δ 1 the temperature dependence is non-monotonic. In particular, for low temperatures we obtain the intuitively expected increase in excitation probability with increasing temperature, but this trend reverses for higher temperatures. In order to investigate this feature further, we calculate the qubit’s final excited-state probability as a function of the number n of excitation quanta present in the initial state of the oscillator (Note that this calculation differs from the ones described above in that here we do not use the Boltzmann distribution for the oscillator’s initial state). The results are plotted in Fig.  Fig:ExcitationProbabilityAsFunctionOfInitialOscillatorExcitationNumber. These results explain the non-monotonic dependence on temperature. For intermediate values of g / Δ (e.g. for g / Δ = 1 ), there is a peak at a small but finite excitation number followed by a steady decrease. 
As the temperature is increased from zero, the qubit’s final excited-state probability samples the probabilities for increasingly high excitation numbers, and a peak at intermediate values of temperature is obtained. Note that for large excitation numbers, the increase in P as a function of n resumes, and this increase will also be reflected in the temperature dependence.... where ω is the characteristic frequency of the harmonic oscillator, â and â † are, respectively, the oscillator’s annihilation and creation operators, and g is the qubit-oscillator coupling strength. The energy level diagram of this problem is illustrated in Fig.  Fig:EnergyLevelDiagram.... Another feature worth noting is the temperature dependence of P close to zero temperature. As can be seen clearly in Figs.  Fig:ExcitationProbability10 and Fig:ExcitationProbability50, the initial increase in P with temperature is very slow, indicating that it probably follows an exponential function that corresponds to the probability of populating the excited states in the harmonic oscillator (and the same dependence is probably present but difficult to see because of the scale of the x axis in Fig.  Fig:ExcitationProbability02). After this initial slow rise, and in particular when k B T ℏ ω , we see a steady rise that in the case of Fig.  Fig:ExcitationProbability02 can be approximated as a linear increase in P with increasing T . Importantly, the slope of this increase can be quite large for intermediate g values. From the results shown in Figs.  Fig:ExcitationProbability02- Fig:ExcitationProbability50, we find that the maximum slope d P / d k B T / Δ m a x = 0.18 × ℏ ω / Δ -0.57 , and results for other parameter values extending up to ℏ ω / Δ = 20 follow this dependence. The implication of this result can be seen clearly in the middle panel of Fig.  Fig:ExcitationProbability02: even when the temperature is substantially smaller than the qubit’s minimum gap Δ , the initial excitation of the low-frequency oscillator (stemming from the finite temperature) can cause a large increase in the qubit’s final excited-state probability. This result is in contrast with the exact result of Ref.  stating that at zero temperature the qubit’s final excited-state probability is given by P L Z regardless of the value of g . The typical temperature scale at which deviations from the LZ formula occur can therefore be much lower than Δ / k B . This result is relevant for adiabatic quantum computing, because it contradicts the expectation that having a minimum gap that is large compared to the temperature might provide automatic protection for the ground state population against thermal excitation. Another point worth noting here is that when ℏ ω qubit and oscillator are resonant with each other, yet the initial thermal excitation of the oscillator can result in exciting the qubit at the final time. The excitations in the oscillator are in some sense up-converted into excitations in the qubit as a result of the sweep through the avoided crossing.... In addition to solving the Schrödinger equation, we have performed semiclassical calculations where we assume that there is no quantum coherence between the different LZ processes. (Note here that when we replace the isolated qubit with the coupled qubit-oscillator system the single avoided crossing is replaced by a complex network of avoided crossings.) 
Under this approximation, we only need to calculate the occupation probabilities of the different states, and these probabilities change (according to the LZ formula) only at the points of avoided crossing. This approach greatly simplifies the numerical calculations because the locations and gaps for the different avoided crossings can be determined easily (see e.g.~Fig.~ Fig:EnergyLevelDiagram). The results are shown in Fig.  Fig:ExcitationProbabilityFromIncoherentCalculation. The results of this calculation agree generally well with those obtained by solving the Schrödinger equation when ℏ ω / Δ = 1 . For ℏ ω / Δ = 5 , the semiclassical calculation consistently underestimates the excited-state probability, but the overall dependence on temperature and coupling strength is remarkably similar to that shown in Fig.  Fig:ExcitationProbability50. We should note that higher values of ℏ ω (not shown) exhibit more pronounced deviations, with side peaks appearing in the dependence of P on g / Δ . The most striking deviation from the results of the fully quantum calculation is seen in the case ℏ ω / Δ = 0.2 (i.e. the case of a low-frequency oscillator). In the semiclassical calculation, there is a rather high peak at a small value of the coupling strength (and sufficiently high temperatures), and the excited-state probability starts decreasing when the coupling strength g becomes larger than ℏ ω . In the fully quantum calculation, however, the peak is located at a much higher value, somewhere between 0.5 and 1 depending on the temperature.... (color online) Top: Qubit’s final excited-state probability P as a function of temperature k B T and coupling strength g , both measured relative to the qubit’s minimum gap Δ . Middle: P as a function of k B T / Δ for four different values of g / Δ : 0.1 (red solid line), 0.3 (green dashed line), 1 (blue dotted line) and 2 (magenta dash-dotted line). Bottom: P as a function of g / Δ for three different values of k B T / Δ : 1 (red solid line), 3 (green dashed line), and 5 (blue dotted line). In all the panels, the harmonic oscillator frequency is ℏ ω / Δ = 0.2 . The sweep rate is chosen such that P L Z = 0.1 , and this value is the baseline for all of the results plotted in this figure.... (color online) The final excited state probability P as a function of the number of excitation quanta n present in the initial state of the oscillator. Here we take ℏ ω / Δ = 0.2 . The different lines correspond to different values of the coupling strength: g / Δ = 0.1 (red solid line), 0.5 (green dashed line), 1 (blue dotted line) and 2 (magenta dash-dotted line).... The probability for the qubit to end up in the excited state at the final time as a function of temperature and coupling strength is plotted in Figs.  Fig:ExcitationProbability02- Fig:ExcitationProbability50. As expected from known results , the final excited-state occupation probability P remains equal to 0.1 whenever the temperature or the coupling strength is equal to zero. Otherwise, the coupling to the oscillator causes this probability to increase. A common, and somewhat surprising, trend for all values of ℏ ω / Δ is the non-monotonic dependence on the coupling strength g . As the coupling strength is increased from zero to finite but small values, P increases. But when the coupling strength is increased further, P starts decreasing. Based on the results that are plotted in Figs.  
Fig:ExcitationProbability02- Fig:ExcitationProbability50, one can expect that in the limit of large g / Δ (and assuming not-very-large values of k B T / Δ ) the excited-state occupation probability will go back to its value in the uncoupled case, i.e.  P = 0.1 . This phenomenon is probably a manifestation of the superradiance-like behaviour in a strongly coupled qubit-oscillator system . In the superradiant regime (i.e. the strong-coupling regime), the ground state is highly entangled exactly at the symmetry point (which corresponds to the bias conditions at t = 0 in the LZ problem), but even small deviations from the symmetry point can lead to an effective decoupling between the qubit and resonator with the exception of some state-dependent mean-field shifts. Indeed the maximum values of P reached in Figs.  Fig:ExcitationProbability10 and Fig:ExcitationProbability50 occur at coupling strength values that are comparable to the expression for the uncorrelated-to-correlated crossover value, namely g ∼ ℏ ω (and we have verified that the near-linear increase in peak location as a function of oscillator frequency continues up to ℏ ω / Δ = 20 ). This relation does not apply in the case ℏ ω / Δ = 0.2 , shown in Fig.  Fig:ExcitationProbability02. In this case, the peak occurs when the coupling strength g is comparable to the minimum gap Δ . It is in fact quite surprising that the excitation peak in the case ℏ ω / Δ = 0.2 occurs at a higher coupling strength than that obtained in the case ℏ ω / Δ = 1 . In order to investigate this point further, we tried values close to ℏ ω / Δ = 1 and found that this value gives a minimum in the peak location (i.e. the peak in P when plotted as a function of g / Δ ). Data Types: • Image The first term has a peak at zero frequency, while the second term has a peak at ω = Ω , with width 3 Γ / 2 , and signal -1 / 3 Γ . Bounding this signal in relation to the noise in the individual twin detectors gives | S 1 , 2 Ω | ≤ 2 / 3 S I . The interesting feature of this correlator is that it changes sign as a function of frequency. The low frequency part describes the incoherent relaxation to the stationary state, while the high frequency part describes the out of phase, coherent oscillations of the z and x degrees of freedom. The measured correlator S z x , as well as S x x , S z z are plotted as a function of frequency in Fig. combo(b,c,d) for different values of ϵ . These correlators all describe different aspects of the time domain destruction of the quantum state by the weak measurement, visualized in Fig. comboa. We note that the cross-correlator changes sign for ϵ = - Δ .... (color online). (a) Time domain destruction of the quantum state by the weak measurement process for ϵ = Δ . The elapsed time is parameterized by color, and (x,y,z) denote coordinates on the Bloch sphere. (b) The measured cross-correlator S z x ω changes sign from positive at low frequency (describing incoherent relaxation) to negative at the qubit oscillation frequency (describing out of phase, coherent oscillations). (c,d) The correlators S x x , S z z have both a peak at zero frequency and at qubit oscillation frequency. We take Γ = Γ x = Γ z = .07 Δ / ℏ . S i j are plotted in units of Γ -1 .... Cross-correlated quantum measurement set-up: Two quantum point contacts are measuring the same double quantum dot qubit. As the quantum measurement is taking place, the current outputs of both detectors can be averaged or cross-correlated with each other. 
Data Types: • Image (Color online) Energy spectrums for lowest eight levels under the situation with three high-frequency qubits: ℏ w 0 / E q = 0.01 . The rescaled energy E k / ℏ w 0 with k = 1 , 2 , 3 , . . . , 8 versus the rescaled coupling strength λ / ℏ w 0 is plotted: (a) θ = 0 ; (b) θ = π / 6 ; (c) θ = π / 3 .... (Color online) Schematic of four displaced oscillators. The horizontal and vertical axises represent the position and displaced oscillator’s eigenenergy E d o , respectively. Four displaced oscillators are shifted to the left or right from the equilibrium position with a specific constant, where the shift direction is determined by the state of three qubits. The eigenstates (plotted with n no more than 2 ) that have the same value of n are degenerate for the states | A ± 1 (or | A ± 3 ), and have the symmetry divided by the origin point in horizontal axis.... adiabatic approximation, three qubits, ultrastrongly coupled, harmonic oscillator... (Color online) Energy spectrums for lowest eight levels under the situation with a high-frequency oscillator: ℏ w 0 / E q = 10 . The rescaled energy E k / ℏ w 0 with k = 1 , 2 , 3 , . . . , 8 versus the rescaled coupling strength λ / ℏ w 0 is plotted: (a) θ = 0 ; (b) θ = π / 6 ; (c) θ = π / 3 .... (Color online) Schematic of the system with three identical qubits coupled to a harmonic oscillator. The j th ( j = 1 , 2 , 3 ) qubit with one ground ( | g j ) and one excited states ( | e j ) is coupled to the oscillator with frequency w 0 , where the qubit-oscillator coupling strength is denoted by g or λ .... (Color online) The Q function (upside) and the Wigner function (underside) of the oscillator’s state with three high-frequency qubits (i.e., ℏ w 0 / Δ = 0.1 and ϵ = 0 ): (a,d) λ / ℏ w 0 = 0.5 , (b,e) λ / ℏ w 0 = 1 , (c,f) λ / ℏ w 0 = 1.25 . Data Types: • Image The systems considered are shown in Fig.  fig:system. To be specific we first analyze the Rabi driven flux qubit coupled to an LC-oscillator (Fig.  fig:systema) with Hamiltonian... Average number of photons in the resonator as function of the driving detuning δ ω and amplitude Ω R 0 . Peaks at δ ω > 0 correspond to lasing, dips at δ ω qubit are Δ / 2 π = 1  GHz, ϵ = 0.01 Δ , and Γ 0 / 2 π = 125  kHz, the frequency and line-width of the resonator are ω T / 2 π = 6 MHz and κ / 2 π = 1.7  kHz, the coupling constant is g / 2 π = 3.3  MHz and the temperature of the resonator T = 10  mK. The inset shows the bistability of the photon number for Ω R 0 / 2 π = 7 MHz. The dashed line represents the unstable solution.... So far we described a flux qubit coupled to an LC oscillator, but our analysis applies equally to a nano-mechanical resonator capacitively coupled to a Josephson charge qubit (see Fig.  fig:systemb). In this case σ z stands for the charge of the qubit, and both the coupling to the oscillator and the driving are capacitive, i.e., involve σ z . To produce capacitive coupling between the qubit and the oscillator, the latter is metal coated and charged by a voltage source . The dc component of the gate voltage V g puts the system near the charge degeneracy point where the dephasing due to the 1 / f charge noise is minimal. Rabi driving is induced by an ac component of V g . Realistic experimental parameters are expected to be very similar to the ones used in the examples discussed above, except that a much higher quality factor of the resonator ( ∼ 10 5 ) and a much higher number of quanta in the oscillator can be reached. 
This number will easily exceed the thermal one, thus a proper lasing state with Poisson statistics, appropriately named SASER , is produced. One should then observe the usual line narrowing with line width given by κ N t h / 4 n ̄ ∼ κ 2 N t h / Γ 1 . Experimental observation of this line-width narrowing would constitute a confirmation of the lasing/sasing.... In Fig.  3dphoton we summarize our main results obtained by solving the Langevin (Fokker-Plank) equations . The number of photons n ̄ is plotted as a function of the detuning δ ω of the driving frequency and driving amplitude Ω R 0 . It exhibits sharp extrema along two curves corresponding to the one- and two-photon resonances, Ω R = ω T - 4 g 3 n ̄ and Ω R = 2 ω T - 4 g 3 n ̄ . Blue detuning, δ ω > 0 , induces a strong population inversion of the qubit levels, which in resonance leads to one-qubit lasing. In experiments the effect can be measured as a strong increase of the photon number in the resonator above the thermal values. On the other hand, red detuning produces a one-qubit cooler with photon numbers substantially below the thermal value. Near the resonances we find regions of bi-stability illustrated in the inset of Fig.  3dphoton. In these regions we expect a telegraph-like noise due to random switching between the two solutions.... Several recent experiments on quantum state engineering with superconducting circuits  realized concepts originally introduced in the field of quantum optics and stimulated substantial theoretical activities  . Josephson qubits play the role of two-level atoms, while oscillators of various kinds replace the quantized light field. Motivated by one such experiment , we investigate a Josephson qubit coupled to a slow LC oscillator (Fig.  fig:system a) with eigenfrequency (in the MHz range) much lower than the qubit’s energy splitting (in the GHz range), ω T ≪ Δ E . The qubit is ac-driven to perform Rabi oscillations, and the Rabi frequency Ω R is tuned close to resonance with the oscillator. For this previously unexplored regime of frequencies we study both one-photon (for Ω R ≈ ω T ) and two-photon (for Ω R ≈ 2 ω T ) qubit-oscillator couplings. The latter is dominant at the “sweet" point of the qubit, where due to symmetry the linear coupling to the noise sources is tuned to zero and dephasing effects are minimized . When the qubit driving frequency is blue detuned, δ ω = ω d - Δ E > 0 , we find that the system exhibits lasing behavior; for red detuning the qubit cools the oscillator. Similar behavior is expected in an accessible range of parameters for a Josephson qubit coupled to a nano-mechanical oscillator (Fig.  fig:systemb), thus providing a realization of a SASER  (Sound Amplifier by Stimulated Emission of Radiation).... The systems. a) In the circuit QED setup of Ref.  an externally driven three-junction flux qubit is coupled inductively to an LC oscillator. b) In an equivalent setup a charge qubit is coupled to a mechanical resonator. Data Types: • Image Average number of photons in the resonator as function of the driving detuning δ ω and amplitude Ω R 0 . Peaks at δ ω > 0 correspond to lasing, while dips at δ ω qubit: Δ / 2 π = 1  GHz, ϵ = 0.01 Δ , Γ 0 / 2 π = 125  kHz, the resonator: ω T / 2 π = 6 MHz, κ / 2 π = 0.34  kHz, and the coupling: g / 2 π = 3.3  MHz. The bath temperature is T = 10  mK.... Dressed states of a driven qubit near resonance. Here m is the number of photons of the driving field, which is assumed to be quantized.... 
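As a rough orientation for where the two resonance ridges in the (δω, Ω_R0) plane sit, the sketch below uses only the textbook generalized Rabi frequency Ω_R = sqrt(Ω_R0² + δω²) and ignores the photon-number-dependent shift 4g₃n̄ quoted above. The 6 MHz resonator frequency is taken from the figure captions; the detuning grid and everything else are assumptions for illustration.

```python
import numpy as np

omega_T = 2 * np.pi * 6e6                               # resonator frequency (rad/s), from the captions
delta_omega = 2 * np.pi * np.linspace(-5e6, 5e6, 11)    # assumed drive-detuning grid (rad/s)

# Generalized Rabi frequency of a detuned drive: Omega_R = sqrt(Omega_R0**2 + delta_omega**2).
# Resonance ridges (neglecting the 4*g3*nbar shift mentioned in the text):
#   one-photon:  Omega_R = omega_T    ->  Omega_R0 = sqrt(omega_T**2 - delta_omega**2)
#   two-photon:  Omega_R = 2*omega_T  ->  Omega_R0 = sqrt((2*omega_T)**2 - delta_omega**2)
for n, label in [(1, "one-photon"), (2, "two-photon")]:
    omega_R0 = np.sqrt(np.maximum((n * omega_T)**2 - delta_omega**2, 0.0))
    print(label, np.round(omega_R0 / (2 * np.pi * 1e6), 2), "MHz")
```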
In experiments with the same setup as shown in Fig.  fig:systema) but in a different parameter regime the mechanisms of Sisyphus cooling and amplification has recently been demonstrated . Due to the resonant high-frequency driving of the qubit, depending on the detuning, the oscillator is either cooled or amplified with a tendency towards lasing. The Sisyphus mechanism is most efficient when the relaxation rate of the qubit is close to the oscillator’s frequency. In contrast, in the present paper we concentrate on the “resolved sub-band" regime where the dissipative transition rates of the qubits are much lower than the oscillator’s frequency.... Average number of photons n ̄ versus the detuning. The blue curves are obtained from the Langevin equations ( dot alpha) and ( dot alpha2). They show the bistability with the solid curve denoting stable solutions, while the dashed curve denotes the unstable solution. The red curve is obtained from a numerical solution of the master equation ( eq:Master_Equation). The driving amplitude is taken as Ω R 0 / 2 π = 5 MHz. The parameters of the qubit: Δ / 2 π = 1  GHz, ϵ = 0.01 Δ , Γ 0 / 2 π = 125  kHz, the resonator: ω T / 2 π = 6 MHz, κ / 2 π = 1.7  kHz, N t h = 5 , and the coupling: g / 2 π = 3.3  MHz.... So far we described an LC oscillator coupled to a flux qubit. But our analysis equally applies for a nano-mechanical resonator coupled capacitively to a Josephson charge qubit (see Fig.  fig:systemb). In this case σ z stands for the charge of the qubit and both the coupling to the oscillator as well as the driving are capacitive, i.e., involve σ z . To produce the capacitive coupling between the qubit and the oscillator, the latter could be metal-coated and charged by the voltage source V x . The dc component of the gate voltage V g puts the system near the charge degeneracy point where the dephasing due to the 1 / f charge noise is minimal. Rabi driving is induced by an a c component of V g . Realistic experimental parameters are expected to be very similar to the ones used in the examples discussed above, except that a much higher quality factor of the resonator ( ∼ 10 5 ) and a much higher number of quanta in the oscillator can be reached. This number will easily exceed the thermal one, thus a proper lasing state with Poisson statistics, appropriately named SASER , is produced. One should then observe the usual line narrowing with line width given by κ N t h / 4 n ̄ ∼ κ 2 N t h / Γ ~ 1 . Experimental observation of this line-width narrowing would constitute a confirmation of the lasing/sasing.... Average number of photons in the resonator as function of the qubit’s relaxation rate, Γ 0 at the one-photon resonance, Ω R = ω T for g 3 = 0 and N t h = 5 . The dark blue line shows the numerical solution of the master equation, the light blue solid line represents the solution of the Langevin equation, Eq. ( dot alpha ). The green and red dashed curves represent respectively the saturation number n 0 and the thermal photon number N t h . The parameters are as in Fig.  fig:compar (except for Γ 0 ).... Also in situations where the qubit, e.g., a Josephson charge qubit, is coupled to a nano-mechanical oscillator (Fig.  fig:systemb) it either cools or amplifies the oscillator. On one hand, this may constitute an important tool on the way to ground state cooling. On the other hand, this setup provides a realization of what is called a SASER .... 
Recent experiments on quantum state engineering with superconducting circuits realized concepts originally introduced in the field of quantum optics, as well as extensions thereof, e.g., to the regime of strong coupling , and prompted substantial theoretical activities . Josephson qubits play the role of two-level atoms while electric or nanomechanical oscillators play the role of the quantized radiation field. In most QED or circuit QED experiments the atom or qubit transition frequency is near resonance with the oscillator. In contrast, in the experiments of Refs. , with setup shown in Fig.  fig:systema), the qubit is coupled to a slow LC oscillator with frequency ( ω T / 2 π ∼ MHz) much lower than the qubit’s level splitting ( Δ E / 2 π ℏ ∼ 10 GHz). The idea of this experiment is to drive the qubit to perform Rabi oscillations with Rabi frequency in resonance with the oscillator, Ω R ≈ ω T . In this situation the qubit should drive the oscillator and increase its oscillation amplitude. When the qubit driving frequency is blue detuned, the driving creates a population inversion of the qubit, and the system exhibits lasing behavior (“single-atom laser"); for red detuning the qubit cools the oscillator . A similar strategy for cooling of a nanomechanical resonator via a Cooper pair box qubit has been recently suggested in Ref. . The analysis of the driven circuit QED system shows that these properties depend strongly on relaxation and decoherence effects in the qubit.... a) In the setup of Ref.  an externally driven three-junction flux qubit is coupled inductively to an LC oscillator. b) A charge qubit is coupled to a mechanical resonator.... The systems to be considered are shown in Fig.  fig:system. A qubit is coupled to an oscillator and driven to perform Rabi oscillations. To be specific we first analyze the flux qubit coupled to an electric oscillator (Fig.  fig:systema) with Hamiltonian Data Types: • Image In the preceeding analysis we neglected the effect of the local environment by setting Y i n t ω = 0 . As a result, the low-frequency value of T 1 is substantially larger than obtained in experiment . By modeling the local environment with R 0 = 5000  ohms and L 0 = 0 we obtain the T 1 versus ω 01 plot shown in Fig.  fig:three. Notice that this value of R 0 brings T 1 to values close to 20 ns at T = 0 . The message to extract from Figs.  fig:two and  fig:three is that increasing R 0 as much as possible and increasing the qubit frequency ω 01 from 0.1 Ω to 2 Ω at fixed low temperature can produce a large increase in T 1 .... Schematic drawing of the phase qubit with an RLC isolation circuit.... The circuit used to describe intrinsic decoherence and self-induced Rabi oscillations in phase qubits is shown in Fig.  fig:one, which correponds to an asymmetric dc SQUID . The circuit elements inside the dashed box form an isolation network which serves two purposes: a) it prevents current noise from reaching the qubit junction; b) it is used as a measurement tool.... In the limit of T = 0 , we can solve for c 1 t exactly and obtain the closed form c 1 t = L -1 s + Γ - i ω 01 2 + Ω 2 - Γ 2 s s + Γ - i ω 01 2 + Ω 2 - Γ 2 - κ Ω 4 π i / Γ where L -1 F s is the inverse Laplace transform of F s , and κ = α / M ω 01 × Φ 0 / 2 π 2 ≈ 1 / ω 01 T 1 , 0 . The element ρ 11 = | c 1 t | 2 of the density matrix is plotted in Fig.  fig:four for three different values of resistance, assuming that the qubit is in its excited state such that ρ 11 0 = 1 . 
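The resonance-induced, non-exponential decay of ρ_11 discussed here can be mimicked with a minimal Wigner-Weisskopf-style toy model: one excited-qubit amplitude coupled to a single lossy mode that stands in for the RLC resonance. This is only a sketch under that assumption, not the paper's circuit model; the values of g, δ and Γ below are illustrative, not the quoted circuit parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: excited-qubit amplitude c1 coupled to a lossy mode amplitude b
#   dc1/dt = -i g b
#   db/dt  = -(Gamma + i*delta) b - i g c1
# Small Gamma (large R in the circuit language): energy sloshes back and forth and
# rho_11 = |c1|^2 oscillates; large Gamma: plain, nearly exponential decay.
g, delta = 1.0, 0.0          # coupling and qubit-mode detuning (illustrative units)

def rhs(t, y, Gamma):
    c1, b = y[0] + 1j * y[1], y[2] + 1j * y[3]
    dc1 = -1j * g * b
    db = -(Gamma + 1j * delta) * b - 1j * g * c1
    return [dc1.real, dc1.imag, db.real, db.imag]

t_eval = np.linspace(0, 10, 200)
for Gamma in (0.2, 2.0, 10.0):
    sol = solve_ivp(rhs, (0, 10), [1, 0, 0, 0], args=(Gamma,), t_eval=t_eval, rtol=1e-8)
    rho11 = sol.y[0]**2 + sol.y[1]**2
    print(f"Gamma={Gamma}: rho11 at t=3 is {np.interp(3, t_eval, rho11):.3f}")
```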
We consider the experimentally relevant limit of Γ ≪ ω 01 ≈ Ω , which corresponds to the weak dissipation limit. Since Γ = 1 / 2 C R the width of the resonance in the spectral density shown in Eq. ( eqn:sd-poles) is smaller for larger values of R . Thus, for large R , the RLC environment transfers energy resonantly back and forth to the qubit and induces Rabi-oscillations with an effective time dependent decay rate γ t = - 2 ℜ c ̇ 1 t / c 1 t .... fig:three T 1 (in nanoseconds) as a function of qubit frequency ω 01 . The solid (red) curves describes an RLC isolation network with parameters R = 50  ohms, L 1 = 3.9 nH, L = 2.25 pH, C = 2.22 pF, and qubit parameters C 0 = 4.44 pF, R 0 = 5000  ohms and L 0 = 0 . The dashed curves correspond to an RL isolation network with the same parameters, except that C = 0 . Main figure ( T = 0 ), inset ( T = 50 mK) with Ω = 141 GHz.... fig:four Population of the excited state of the qubit as a function of time ρ 11 t , with ρ 11 t = 0 = 1 for R = 50  ohms (solid curve), 350  ohms (dotted curve), and R = 550  ohms (dashed curve), and L 1 = 3.9 nH, L = 2.25 pH, C = 2.22 pF, C 0 = 4.44 pF, R 0 = ∞ and L 0 = 0 .... fig:two T 1 (in seconds) as a function of qubit frequency ω 01 . The solid (red) curves describes an RLC isolation network with parameters R = 50  ohms, L 1 = 3.9 nH, L = 2.25 pH, C = 2.22 pF, and qubit parameters C 0 = 4.44 pF, R 0 = ∞ and L 0 = 0 . The dashed curves correspond to an RL isolation network with the same parameters, except that C = 0 . Main figure ( T = 0 ), inset ( T = 50 mK) with Ω = 141 GHz.... In Fig.  fig:two, T 1 is plotted versus qubit frequency ω 01 for spectral densities describing an RLC (Eq.  eqn:spectral-density-isolation) or Drude (Eq.  eqn:sd-drude) isolation network at fixed temperatures T = 0 (main figure) and T = 50 mK (inset), for J i n t ω = 0 corresponding to R 0 ∞ . In the limit of low temperatures k B T / ℏ ω 01 ≪ 1 , the relaxation time becomes T 1 ω 01 = M ω 01 / J ω 01 . From Fig.  fig:two (main plot) several important points can be extracted. First, in the low frequency regime ( ω 01 ≪ Ω ) the RL (Drude) and RLC environments produce essentially the same relaxation time T 1 , R L C 0 = T 1 , R L 0 = T 1 , 0 ≈ L 1 / L 2 R C 0 , because both systems are ohmic. Second, near resonance ( ω 01 ≈ Ω ), T 1 , R L C is substantially reduced because the qubit is resonantly coupled to its environment producing a distinct non-ohmic behavior. Third, for ( ω 01 > Ω ), T 1 grows very rapidly in the RLC case. Notice that for ω 01 > 2 Ω , the RLC relaxation time T 1 , R L C is always larger than T 1 , R L . Furthermore, in the limit of ω 01 ≫ m a x Ω , 2 Γ , T 1 , R L C grows with the fourth power of ω 01 behaving as T 1 , R L C ≈ T 1 , 0 ω 01 4 / Ω 4 , while for ω 01 ≫ Ω 2 / 2 Γ , T 1 , R L grows only with second power of ω 01 behaving as T 1 , R L ≈ 4 T 1 , 0 Γ 2 ω 01 2 / Ω 4 . Thus, T 1 , R L C is always much larger than T 1 , R L for sufficiently large ω 01 . Notice, however, that for parameters in the experimental range such as those used in Fig  fig:two, T 1 , R L C is two orders of magnitude larger than T 1 , R L , indicating a clear advantage of the RLC environment shown in Fig  fig:one over the standard ohmic RL environment. Thermal effects are illustrated in the inset of Fig.  fig:two where T = 50 mK is a characteristic temperature where experiments are performed . 
The typical values of T 1 at low frequencies vary from 10 -5 s at T = 0 to 10 -6 s at T = 50 mK, while the high frequency values remain essentially unchanged as the thermal effects are not important for ℏ ω 01 ≫ k B T .... These environmentally-induced Rabi oscillations are a clear signature of the non-Markovian behavior produced by the RLC environment, and are completely absent in the RL environment because the energy from the qubits is quickly dissipated without being temporarily stored. These environmentally-induced Rabi oscillations are generic features of circuits with resonances in the real part of the admittance. The frequency of the Rabi oscillations Ω R a = π κ Ω 3 / 2 Γ is independent of the resistance since Ω R a ≈ Ω π L 2 C / L 1 2 C 0 , and has the value of Ω R a = 2 π f R a ≈ 360 × 10 6  rad/sec for Fig.  fig:four. Data Types: • Image In the example figure (Fig.  fig:qubosc1d2d), the control bias is varied from left to right for a low frequency oscillator circuit (1.36GHz). For each bias point the simulation is reinitialised, the stochastic time evolution of the system density matrix is simulated over 1500 oscillator cycles. Then the oscillator and qubit charge expectation values are extracted to obtain the power spectrum for each component, with a frequency resolution of 4.01MHz. The power spectra for each time series are collated as an image such that the power axis is now represented as a colour, and the individual power spectra are vertical ‘slices’ through the image. The dominant frequency peaks become line traces, therefore illustrating the various avoided crossings, mergeings and intersections. The example figure shows the PSD ‘slice’ at Bias = 0.5187 , the broadband noise is readily apparent and is due to the discontinuous quantum jumps in the qubit. The bias oscillator peak (1.36GHz) is most prominent in the oscillator PSD, as would be expected, but it is also present in the qubit PSD. It should also be noted that most features are present in both the qubit and oscillator, including the noise which is generated by the quantum jumps and the quantum state diffusion processes. Interestingly, the qubit PSD is significantly stronger than the oscillator PSD, however, a larger voltage is generated by the smaller charge due to the extremely small island capacitance, V q = q / C q .... fig:mwRamp (Color online) Oscillator PSD as a function of the applied microwave drive frequency f m w , for microwave amplitudes A m w = 0.0050 (A) and A m w = 0.0100 (B). It is important to notice that there are now two frequency axes per plot, a drive (H) and a response (V). Of particular interest is the magnified section which shows clearly the distinct secondary splitting in the sub-GHz regime. This occurs due to a high frequency interaction seen in the upper plots, where the lower Rabi sideband of the microwave drive passes through the high frequency oscillator signal. The maximum splitting occurs when the Rabi amplitude is a maximum, hence this is observed for a very particular combination of bias and drive, which is beneficial for charactering the qubit. Most importantly, this would not be observed with a conventional low frequency oscillator configuration as the f m w - f o s c separation would be too large for the Rabi frequency. ( κ = 5 × 10 -5 ).... Fig.  fig:mwRamp is presented in a similar manner as Fig.  fig:BiasRamp. 
However there are now two frequency axes: the horizontal axis represents the frequency of the applied microwave drive field, and the vertical axis is the frequency response. It should be remembered that the microwave frequency axis is focused near the qubit transition frequency ( f q u b i t ≈ 3.49GHz) and the diagonally increasing line is now the microwave frequency.... Autler Townes effect, charge qubit, characterisation, frequency spectrum... fig:QubitOscEnergy A two level qubit is coupled to a many level harmonic oscillator, investigated for two different oscillator energies. Firstly, the oscillator resonant frequency is set to 1.36GHz, this more resembles the conventional configuration such that the fundamental component of the oscillator does not drive the qubit. However, we also investigate the use of a high frequency oscillator of 3.06GHz which can excite this qubit. In addition, qubit is constantly driven by a microwave field at 3.49GHz to generate Rabi oscillations and in this paper we examine the relation between these three fields.... fig:qubosc1d2d (Color online) Oscillator and Qubit power spectra slices for Bias = 0.5187, using the low frequency oscillator circuit f o s c = 1.36 GHz. The solid lines overlay the energy level separations found in Fig.  fig:EnergyLevel. ( κ = 5 × 10 -5 ). As one would expect, the bias oscillator peak at 1.36GHz is clearly observed in the oscillator PSD, but only weakly in the qubit PSD. Likewise the qubit Rabi frequency is found to be stronger in the qubit PSD. However it is important to note that the qubit dynamics such as the Rabi oscillations are indeed coupled to the bias oscillator circuit and so can be extracted. In addition, it is recommended to compare the layout of the most prominent features with Fig.  fig:BiasRamp.... fig:BiasRamp (Color online) Oscillator PSD as a function of bias, for microwave amplitudes A m w = 0.0025 (A) and A m w = 0.0050 (B). The red lines track the positions (in frequency) of significant power spectrum peaks (+10dB to +15dB above background), the overlaid black and blue lines are the qubit energy and microwave transition (Fig.  fig:EnergyLevel). Unlike Fig.  fig:qubosc1d2d, in these figures the 3.06GHz oscillator circuit can now drive the qubit (Fig.  fig:EnergyLevel) and so creates excitations which mix with the microwave driven excitations creating a secondary splitting centred on f m w - f o s c (430MHz). This feature contains the Rabi frequency information in the sidebands of the splitting, but now in a different and controllable frequency regime. In addition, the intersection of the two differently driven excitations (illustrated in the magnified sections), opens the possibility of calibrating the biased qubit against a fixed engineered oscillator circuit, using a single point feature. ( κ = 5 × 10 -5 ).... In a previous paper , a method was proposed by which the energy level structure of a charge qubit can be obtained from measurements of the peak noise in the bias/control oscillator, without the need of extra readout devices. This was based on a technique originally proposed for superconducting flux qubits but there are many similarities between the two technologies. The oscillator noise peak is the result of broadband noise caused by quantum jumps in the qubit being coupled back to the oscillator circuit. 
This increase in the jump rate becomes a maximum when the Rabi oscillations are at peak amplitude, this should only occur when the qubit is correctly biased and the microwave drive is driving at the transition frequency. Therefore by monitoring this peak as a function of bias, we can associate a bias position with a microwave frequency equal to that of the energy gap, hence constructing the energy diagram (Fig.  fig:EnergyLevel).... fig:Jumps (Color online) (A) Oscillator power spectra when the coupled qubit is driven at f m w = 5.00 GHz. An increase in bias noise power ( f o s c = 1.36 GHz) can be observed when Rabi oscillations occur, the more frequent quantum jump noise couples back to the oscillator. (B) Bias noise power peak position changes as a function of f m w , the microwave drive frequency. Therefore, it is possible to probe the qubit energy level structure by using the power increase in the oscillator which is already in place, eliminating the need for additional measurement devices. However, it should be noted that the surrounding oscillator harmonics may mask the microwave driven peak. ( κ = 1 × 10 -3 ). Data Types: • Image • Tabular Data where we have defined the total spin operators J ̂ α = ∑ σ ̂ α / 2 . In the limit ℏ ω 0 / Δ → 0 , all the results concerning the low-energy spectrum of the resonator remain unchanged; one could say that the reduction of the coupling strength by the factor N is compensated by the strengthening of the spin raising and lowering operators by the same factor because of the collective behaviour of the qubits. In particular, the transition occurs at the critical coupling strength given by Eq. ( Eq:CriticalCouplingStrength). Because the qubits now have a larger total spin (when compared to the single-qubit case), spin states that are separated by small angles can be drastically different (i.e. have a small overlap). In particular, the overlap for N qubits is given by cos 2 N θ / 2 . By expanding this function to second order around θ = 0 , one can see that for small values of θ the relevant overlap is lower than unity by an amount that is proportional to N . This dependence translates into the dependence of the qubit-oscillator entanglement on the coupling strength just above the critical point. The entanglement therefore rises more sharply in the multi-qubit case (with the increase being by a factor N ), as demonstrated in Fig.  Fig:EntropyLogLog.... (Color online) The logarithm of the von Neumann entropy S as a function of the logarithm of the quantity λ / λ c - 1 , which measures the distance of the coupling strength from the critical value. The red solid line corresponds to the single-qubit case, whereas the other lines correspond to the multi-qubit case: N = 2 (green dashed line), 3 (blue short-dashed line), 5 (purple dotted line) and 10 (dash-dotted cyan line). All the lines correspond to ℏ ω 0 / Δ = 10 -7 . The slope of all lines is approximately 0.92 when λ / λ c - 1 = 10 -4 . The ratio of the entropy in the multi-qubit case to that in the single-qubit case approaches N for all the lines as we approach the critical point.... The energy level structure in the single-qubit case is simple in principle. In the limit ℏ ω 0 / Δ → 0 , one can say that the energy levels form two sets, one corresponding to each qubit state. Each one of these sets has a structure that is similar to that of a harmonic oscillator with some modifications that are not central in the present context. 
In particular the density of states has a weak dependence on energy, a situation that cannot support a thermal phase transition. If the temperature is increased while all other system parameters are kept fixed, qubit-oscillator correlations (which are finite only above the critical point) gradually decrease and vanish asymptotically in the high-temperature limit. No singular point is encountered along the way. This result implies that the transition point is independent of temperature. In other words, it remains at the value given by Eq. ( Eq:CriticalCouplingStrength) for all temperatures. If, for example, one is investigating the dependence of the correlation function C on the coupling strength (as plotted in Fig.  Fig:SpinFieldSignCorrelationFunction), the only change that occurs as we increase the temperature is that the qubit-oscillator correlations change more slowly when the coupling strength is varied.... where p ̂ is the oscillator’s momentum operator, which is proportional to i â † - â in our definition of the operators. The squeezing parameter mirrors the behaviour of the low-lying energy levels. In particular we can see from Fig.  Fig:SqueezingParameter that only when ℏ ω 0 / Δ reaches the value 10 -5 does the squeezing become almost singular at the critical point.... (Color online) The von Neumann entropy S as a function of the oscillator frequency ℏ ω 0 and the coupling strength λ , both measured in comparison to the qubit frequency Δ . One can see clearly that moving in the vertical direction the rise in entropy is sharp in the regime ℏ ω 0 / Δ ≪ 1 , whereas it is smooth when ℏ ω 0 / Δ is comparable to or larger than 0.1.... The tendency towards singular behaviour (in the dependence of various physical quantities on λ ) in the limit ℏ ω 0 / Δ → 0 is illustrated in Figs.  Fig:ColorPlot- Fig:SqueezingParameter. In these figures, the entanglement, spin-field correlation function, low-lying energy levels (measured from the ground state) and the oscillator’s squeezing parameter are plotted as functions of the coupling strength. It is clear from Figs.  Fig:EntropyLinear and Fig:SpinFieldSignCorrelationFunction that when ℏ ω 0 / Δ ≤ 10 -3 both the entanglement (which is quantified through the von Neumann entropy S = T r ρ q log 2 ρ q with ρ q being the qubit’s reduced density matrix) and the correlation function C = σ z s i g n a + a † rise sharply upon crossing the critical point . The low-lying energy levels, shown in Fig.  Fig:EnergyLevels, approach each other to form a large group of almost degenerate energy levels at the critical point before they separate again into pairs of asymptotically degenerate energy levels. This approach is not complete, however, even when ℏ ω 0 / Δ = 10 -3 ; for this value the energy level spacing in the closest-approach region is roughly ten times smaller than the energy level spacing at λ = 0 . The squeezing parameter is defined by the width of the momentum distribution relative to that in the case of an isolated oscillator. For consistency with Ref. , we define it as Data Types: • Image We performed a spectroscopy measurement of the qubit with long (50 ns) single-frequency microwave pulses. We observed multi-photon resonant peaks ( Φ q b 1.5 Φ 0 ) in the dependence of P s w on f M W 1 at a fixed magnetic flux Φ q b . We obtained the qubit energy diagram by plotting their positions as a function of Φ q b / Φ 0 (Fig.  Fig2(a)). 
We took the data around the degeneracy point Φ q b ≈ 1.5 Φ 0 by applying an additional dc pulse to the microwave line to shift Φ q b away from 1.5 Φ 0 just before the readout, because the dc-SQUID could not distinguish the qubit states around the degeneracy point. The top solid curve in Fig.  Fig2(a) represents a numerical fit to the resonant frequencies of one-photon absorption. From this fit, we obtain the qubit parameters E J / h = 213 GHz, Δ / 2 π = 1.73 GHz, and α = 0.8. The other curves in Fig.  Fig2(a) are drawn by using these parameters for n 1 = 2, 3, and 4.... Next, we used short single-frequency microwave pulses with a frequency of 10.25 GHz to observe the coherent quantum dynamics of the qubit. Figures  Fig2(b) and (c) show one- and four-photon Rabi oscillations observed at the operating points indicated by arrows in Fig.  Fig2(a) with various microwave amplitudes V M W 1 . These data can be fitted by damped oscillations ∝ exp - t p / T d cos Ω R a b i t p , except for the upper two curves in Fig.  Fig2(b). Here, t p and T d are the microwave pulse length and qubit decay time, respectively. To obtain Ω R a b i , we performed a fast Fourier transform (FFT) on the curves that we could not fit by damped oscillations. Although we controlled the qubit environment, there were some unexpected resonators coupled to the qubit, which could be excited by the strong microwave driving or by the Rabi oscillations of the qubit. We consider that these resonators degraded the Rabi oscillations in the higher V M W 1 range of Fig.  Fig2(b). Figure  Fig2(d) shows the V M W 1 dependences of Ω R a b i / 2 π up to four-photon Rabi oscillations, which are well reproduced by Eq. ( eq2). Here, we used only one scaling parameter a (10.25 GHz) = 0.013 defined as a f M W 1 ≡ 4 g 1 α 1 / ω M W 1 V M W 1 , because it is hard to measure the real amplitude of the microwave applied to the qubit at the sample position. The scaling parameter a f M W 1 reflects the way in which the applied microwave is attenuated during its transmission to the qubit and the efficiency of the coupling between the qubit and the on-chip microwave line. In this way, we can estimate the real microwave amplitude and the interaction energy between the qubit and the microwave 2 ℏ g 1 α 1 by fitting the dependence of Ω R a b i / 2 π on V M W 1 . These results show that we can reach a driving regime that is so strong that the interaction energy 2 ℏ g 1 α 1 is larger than the qubit transition energy ℏ ω q b .... Experimental results with single-frequency microwave pulses. (a) Spectroscopic data of the qubit. Each set of the dots represents the resonant frequencies f r e s caused by the one to four-photon absorption processes. The solid curves are numerical fits. The dashed line shows a microwave frequency f M W 1 of 10.25 GHz. (b) One-photon Rabi oscillations of P s w with exponentially damped oscillation fits. Both the qubit Larmor frequency f q b and the microwave frequency f M W 1 are 10.25 GHz. The external flux is Φ q b / Φ 0 = 1.4944. (c) Four-photon Rabi oscillations when f q b = 41.0 GHz, f M W 1 = 10.25 GHz, and Φ q b / Φ 0 = 1.4769. (d) The microwave amplitude dependence of the Rabi frequencies Ω R a b i / 2 π up to four-photon Rabi oscillations. The solid curves represent theoretical fits. Fig2... The measurements were carried out in a dilution refrigerator. 
The sample was mounted in a gold plated copper box that was thermalized to the base temperature of 20 mK ( k B T frequency microwave pulses, we added two microwaves MW1 and MW2 with frequencies of f M W 1 and f M W 2 , respectively by using a splitter SP (Fig.  Fig1(b)). Then we shaped them into microwave pulses through two mixers. We measured the amplitude of MW k V M W k at the point between the attenuator and the mixer with an oscilloscope. We confirmed that unwanted higher-order frequency components in the pulses, for example | f M W 1 ± f M W 2 | , 2 f M W 1 , and 2 f M W 2 are negligibly small under our experimental conditions. First, we choose the operating point by setting Φ q b around 1.5 Φ 0 , which fixes the qubit Larmor frequency f q b . The qubit is thermally initialized to be in | g by waiting for 300 μ s, which is much longer than the qubit energy relaxation time (for example 3.8 μ s at f q b = 11.1 GHz). Then a qubit operation is performed by applying a microwave pulse to the qubit. The pulse, with an appropriate length t p , amplitudes V M W k , and frequencies f M W k , prepares a qubit in the superposition state of | g and | e . After the operation, we immediately apply a dc readout pulse to the dc-SQUID. This dc pulse consists of a short (70 ns) initial pulse followed by a long (1.5 μ s) trailing plateau that has 0.6 times the amplitude of the initial part. For Φ q b qubit is detected as being in | e , the SQUID switches to a voltage state and an output voltage pulse should be observed; otherwise there should be no output voltage pulse. By repeating the measurement 8000 times, we obtain the SQUID switching probability P s w , which is directly related to P e t p for the dc readout pulse with a proper amplitude. For Φ q b > 1.5 Φ 0 , P s w is directly related to 1 - P e t p .... We next investigated the coherent oscillations of the qubit through the parametric processes by using short two-frequency microwave pulses. Figure  Fig3(a) [(b)] shows the Rabi oscillations of P s w when the qubit Larmor frequency f q b = 26.45 [7.4] GHz corresponds to the sum of the two microwave frequencies f M W 1 = 16.2 GHz, f M W 2 = 10.25 GHz [the difference between f M W 1 = 11.1 GHz and f M W 2 = 18.5 GHz] and the microwave amplitude of MW2 V M W 2 was fixed at 33.0 [50.1] mV. They are well fitted by exponentially damped oscillations ∝ exp - t p / T d cos Ω R a b i t p . The Rabi frequencies obtained from the data in Fig. 3(a) [(b)] are well reproduced by Eq. ( eq3) without any fitting parameters (Fig.  Fig3(c) [(d)]). Here, we used Δ , which was obtained from the spectroscopy measurement (Fig.  Fig2(a)) and used a (10.25 GHz) = 0.013 and a (16.2 GHz) = 0.0074 [ a (11.1 GHz) = 0.013 and a (18.5 GHz) = 0.0082], which had been obtained from Rabi oscillations by using single-frequency microwave pulses with each frequency. Those results provide strong evidence that we can achieve parametric control of the qubit with two-frequency microwave pulses.... (a) Scanning electron micrograph of a flux qubit (inner loop) and a dc-SQUID (outer loop). The loop sizes of the qubit and SQUID are 10.2 × 10.4  μ m 2 and 12.6 × 13.5  μ m 2 , respectively. They are magnetically coupled by the mutual inductance M ≈ 13 pH. (b) A circuit diagram of the flux qubit measurement system. On-chip components are shown in the dashed box. L ≈ 140 pH, C ≈  9.7 pF, R I 1 = 0.9 k Ω , R V 1 =  5 k Ω . Surface mount resistors R I 2 = 1 k Ω and R V 2 = 3 k Ω are set in the sample holder. 
We put adequate copper powder filters CP, LC filters F, and attenuators A on each line. Fig1... Experimental results with two-frequency microwave pulses. (a) [(b)] Two-photon Rabi oscillations due to a parametric process when f q b = f M W 2 ± f M W 1 . The solid curves are fits by exponentially damped oscillations. (c) [(d)] Rabi frequencies as a function of V M W 1 , obtained from the data in Fig.  Fig3(a) [(b)]. The dots represent experimental data for V M W 2 = 16.9, 23.5, 33.0, and 52.0 [50.1, 62.9, 79.1, and 124.7] mV from the bottom set of dots to the top one. The solid curves represent Eq. ( eq3). The inset is a schematic of the parametric process that causes two-photon Rabi oscillations when f q b = f M W 2 ± f M W 1 . Fig3 Data Types: • Image
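Extracting Ω_Rabi/2π and the decay time T_d from traces like those in Fig. Fig2(b,c) and Fig. Fig3(a,b) amounts to fitting P_sw(t_p) to an exponentially damped cosine, with an FFT as a fallback when the trace is too distorted to fit. The sketch below does this on synthetic data with made-up parameter values; it is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cos(tp, A, Td, f_rabi, offset):
    """Exponentially damped Rabi oscillation: A*exp(-tp/Td)*cos(2*pi*f_rabi*tp) + offset."""
    return A * np.exp(-tp / Td) * np.cos(2 * np.pi * f_rabi * tp) + offset

# Synthetic "data" with assumed values: f_Rabi = 80 MHz, Td = 50 ns
tp = np.linspace(0, 200e-9, 201)
rng = np.random.default_rng(0)
data = damped_cos(tp, 0.3, 50e-9, 80e6, 0.5) + 0.02 * rng.standard_normal(tp.size)

popt, pcov = curve_fit(damped_cos, tp, data, p0=[0.3, 40e-9, 70e6, 0.5])
print("fitted Rabi frequency: %.1f MHz, decay time: %.0f ns" % (popt[2] / 1e6, popt[1] * 1e9))

# When the trace cannot be fitted, an FFT still gives the dominant frequency
spectrum = np.abs(np.fft.rfft(data - data.mean()))
freqs = np.fft.rfftfreq(tp.size, d=tp[1] - tp[0])
print("FFT peak at %.1f MHz" % (freqs[spectrum.argmax()] / 1e6))
```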
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9582261443138123, "perplexity": 1143.3073840629265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371810617.95/warc/CC-MAIN-20200408041431-20200408071931-00021.warc.gz"}
http://physics.stackexchange.com/questions/70963/about-dirac-cones
This nice image of Dirac cones (from this article), in a ($E,\vec k$ graph) will be an introduction for several questions, in the realm of topological insulators. 1) Does the Dirac cone appears only at the surface ? 2) Is the shape (the cone) important ? 3) The Dirac cone is gapless, so is it only stable by symmetry-protection ? 4) Suppose a Dirac cone is opened, then closed, then re-opened. In the open situation, there is a energy gap, so there is a possible non-trivial topology. So it is possible to change the topology in the open->close->open process ? - I hope somebody will provide more details, but let me just give some quick answers. 1) Yes, since topological insulators (TI) are per definition gapped in the bulk. The existence of gapless boundary modes can be heuristically argued for as follows: phase transition between to different TI's can only happen if the bulk gap closes. If one put two different TI's next to each other, the gap must close on the boundary between them such that there can be a transition. –  Heidar Jul 13 '13 at 11:34 2) Depends on with respect to what it should be important. As far as stability of the edge modes are concerned, that's not important. However the shape will be important for questions about the detailed dynamics for example. If one takes higher energy/momentum contributions into account in the boundary low-energy effective theory, then the Dirac equation will get non-relativistic corrections in general. This is also clear from your picture. See for example arxiv.org/abs/0908.1418. Eq. (4) contain the first correction to the dispersion and thus the change of shape of the Dirac cone. –  Heidar Jul 13 '13 at 11:35 3) Yes, the gaplessness of the boundary mode is protected by a symmetry (as is the case for all TI's). 4) I am not sure I understand this question. –  Heidar Jul 13 '13 at 11:36 Forewords: As Heidar remarked in the associated comments, my answers were not dedicated to the topological insulator situation. I'll try to correct myself in some edits I'll write [-> into brackets <-] and into answer-bis, but I let my answers about topological superconductors, since they may be helpful. 1) Dirac cone's on surface: Some emergent Dirac cones appears in the bulk of the $p$-wave chiral superconductor, see the book by Volovik for more details, available freely on his homepage at Aalto University. I'm not at ease with the notion of band structure on the surface. I have no idea what it means... That's just the closure of the gap which happens on the surface/edge for me. [-> Please see the Heidar comments for a clever discussion <-]. 1-bis: the topological insulator situation. The topological insulator case is easier to discuss, since a bulk insulator has no closure of the gap by definition. Then, the Dirac-linear-closure can only happens at the edge. See also point 4 below, and the Heidar's comments about the Jackiw-Rebbi model below. 2) Shape of the cone: The shape, per se is not important. What you need is a linear dispersion relation with a crossing point. (NB: Without crossing, the dispersion corresponds to the Weyl fermion particles.) The cone structure is the simplest structure like this. 3) Symmetry protected topology: I don't know the full answer to this question. I would say no, not for the emergent Dirac cones in superconducting/superfluid phase: the cone can there be topologically protected as well. 
But the topology depends strongly on the symmetry for quadratic Hamiltonians, especially the three discrete ones of particle-hole $P$ such that $\left\{ P,H\right\} =0$ with $P^{2}=\pm1$, time-reversal $T$ such that $\left[T,H\right]=0$ with $T^{2}=\pm1$ (both $P$ and $T$ have anti-unitary representation, and $H$ is a representation of the Hamiltonian), and the chiral $C\equiv PT$ ones (a situation exists when $C$ is present without neither $P$ nor $T$). This is still troubling for me. I think it's essentially a matter of convention whether you want to call these discrete symmetries some kind of topology (whatever it means) or not. Topology for me means you've got a Chern number $\nu\neq0$, and you will keep it until you change one of the discrete symmetries I mentioned. But some Chern numbers are protected by symmetry as well, so it is a mess to disentangle all these notions at the end. 3-bis: the topological insulator situation. For the topological insulator once again, the situation is easier, since the topological classification is crystal clear: the topological characteristic are provided by symmetry. These symmetries are just the three discrete symmetries I discussed in point 3. 4) Opening <--> closure of the gap I think the answer to this question has been answered long ago by Volkov, and Pankratov, Two-dimensional massless electrons in an inverted contact JETP, 42 178 (1985) (article for free) or I misunderstood it. The answer is yes, and you get an instanton solution at the boundary, as in the Jackiw-Rebbi. Volkov and Pankratov discuss the Dirac dispersion relation, not a relativistic model. - 1) Since the question was in the context of topological insulators (TI), you cannot have gapless modes in the bulk per definition. If you have, then you are not in the phase of a TI. Its easy to get a Dirac cone in the bulk. Write down a simple model for a TI, say the one in physics.stackexchange.com/questions/3282/… . The low-energy theory will be a massive Dirac equation in the bulk. When the mass is zero and you have a Dirac cone in the bulk. That's the point of phase-transition, and therefore not a TI phase. –  Heidar Jul 13 '13 at 12:16 Band structure on the surface actually do make sense. In the above mentioned model, I assumed translational symmetry and thus no edge and therefore $\mathbf k = (k_x,k_y)$ is a good quantum number. The eigenvalues of $H(\mathbf k)$ are the bulk band structure. Now assume that there is an edge at $x = 0$ and $x=L$. Now $k_x$ is not a good quantum number anymore but $k_y$ still is. Fourier transform $k_x$ to real space: $H(k_x,k_y)\rightarrow H_e(k_y)$. Our $2\times 2$ matrix is now turned into a $2L\times 2L$ matrix depending only on $k_y$. –  Heidar Jul 13 '13 at 12:24 The eigenvalues of $H_e(k_y)$ (there will be $2L$ of them parametrized by $k_y$) is what you can call the edge band structure (although it also contain the bulk part). There one will see gapless bands, inside many gapped bands. Finding the eigenvector corresponding to the gapless modes, one will find that they are localized at the boundaries. Alternatively one a take the low-energy effective theory of the bulk and do the same, solving the diff equations one will get the boundary modes (similar to the Jakiw-Rebbi analysis). These are sometimes called Kaplan fermions in lattice gauge theory. –  Heidar Jul 13 '13 at 12:29 @Heidar Thanks for your comments. I've tried to correct correspondingly. Please tell me if there are still mistakes. 
Thanks a lot for the bulk-edge-band-structure discussion, too. –  FraSchelle Jul 13 '13 at 15:24 @Oaoa : +1 for the detailed answer –  Trimok Jul 15 '13 at 8:39
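Heidar's recipe (Fourier-transform $k_x$ back to real space and diagonalize the resulting $2L\times 2L$ matrix as a function of $k_y$) is easy to try numerically. The sketch below does it for the Qi-Wu-Zhang two-band Chern-insulator model, which I substitute here purely to keep a 2×2 block structure; it is not a time-reversal-invariant TI, but the edge-band-structure procedure is identical and the gapless boundary modes crossing the bulk gap show up in the same way.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

u, L = -1.0, 40   # mass parameter (topological for -2 < u < 0) and ribbon width

def ribbon_hamiltonian(ky):
    """2L x 2L Hamiltonian of a ribbon: open in x, periodic in y.
    Bulk model: H(k) = sin(kx) sx + sin(ky) sy + (u + cos(kx) + cos(ky)) sz."""
    onsite = np.sin(ky) * sy + (u + np.cos(ky)) * sz   # kx-independent part
    hop = 0.5 * sz - 0.5j * sx                          # from cos(kx) sz + sin(kx) sx
    H = np.zeros((2 * L, 2 * L), dtype=complex)
    for x in range(L):
        H[2*x:2*x+2, 2*x:2*x+2] = onsite
        if x + 1 < L:
            H[2*x:2*x+2, 2*x+2:2*x+4] = hop
            H[2*x+2:2*x+4, 2*x:2*x+2] = hop.conj().T
    return H

kys = np.linspace(-np.pi, np.pi, 101)
bands = np.array([np.linalg.eigvalsh(ribbon_hamiltonian(ky)) for ky in kys])
# The bulk bands are gapped; the eigenvalues closest to E = 0 belong to
# boundary modes that cross the gap, i.e. the "edge band structure".
print("smallest |E| in the ribbon spectrum:", np.abs(bands).min())
```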
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8888364434242249, "perplexity": 391.4077341386808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299236.74/warc/CC-MAIN-20150323172139-00229-ip-10-168-14-71.ec2.internal.warc.gz"}
http://clay6.com/qa/15722/a-small-square-loop-of-wire-of-side-i-is-placed-inside-a-large-square-loop-
# A small square loop of wire of side $l$ is placed inside a large square loop of side $L$ ($L \gg l$). If the loops are coplanar and their centers coincide, the mutual inductance of the system is directly proportional to: $\begin {array} {1 1} (a)\;\frac{L}{l} & \quad (b)\;\frac{l}{L} \\ (c)\;\frac{L^2}{l} & \quad (d)\;\frac{l^2}{L} \end {array}$ $(d)\;\frac{l^2}{L}$ answered Nov 7, 2013 by
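A quick sketch of the standard argument, added here for completeness (it is not part of the original answer). The field at the centre of the large loop carrying current $I$ is $B = \frac{2\sqrt{2}\,\mu_0 I}{\pi L}$. Because $L \gg l$, this field is nearly uniform over the small loop, so the flux through it is $\Phi \approx B\,l^2 = \frac{2\sqrt{2}\,\mu_0 I\, l^2}{\pi L}$, and therefore $M = \frac{\Phi}{I} = \frac{2\sqrt{2}\,\mu_0\, l^2}{\pi L} \propto \frac{l^2}{L}$, which is option (d).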
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8991680145263672, "perplexity": 748.1868762091913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947654.26/warc/CC-MAIN-20180425001823-20180425021823-00503.warc.gz"}
http://mathhelpforum.com/calculus/120239-sequence-type-r-n-print.html
# is this sequence of the type r^n? • December 13th 2009, 11:20 AM swatpup32 is this sequence of the type r^n? Would this sequence [(1+3/n)^(4n)] fall under the rule for r^n, -1 < r <= 1? If so, would it be convergent? If you take the limit it eventually becomes {(1+0)^(4n)}. • December 13th 2009, 11:25 AM skeeter Quote: Originally Posted by swatpup32 Would this sequence [(1+3/n)^(4n)] fall under the rule for r^n, -1 < r <= 1? If so, would it be convergent? If you take the limit it eventually becomes {(1+0)^(4n)}. you should be familiar with this limit ... $\lim_{n \to \infty} \left(1 + \frac{k}{n}\right)^n = e^k $
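Spelling out the hint (my addition, not part of the original thread): the sequence is not of the form r^n because the base depends on n, so the geometric-sequence rule does not apply. Instead, $(1+3/n)^{4n} = \left[(1+3/n)^{n}\right]^4 \to (e^3)^4 = e^{12}$, so the sequence converges, but to $e^{12}$ rather than to 1. A quick numerical check in Python:

```python
import math

# the terms approach e**12, not (1+0)**(4n) = 1
for n in (10**2, 10**4, 10**6):
    print(n, (1 + 3 / n) ** (4 * n))
print("e**12 =", math.e ** 12)
```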
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9803428649902344, "perplexity": 986.2161928548497}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982293692.32/warc/CC-MAIN-20160823195813-00263-ip-10-153-172-175.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/223511-trig-question-pre-calc.html
# Math Help - Trig question pre calc! 1. ## Trig question pre calc! Hi there, the question states Prove algebraically: $\frac{\cos\theta}{1-\sin\theta} = \sec\theta +(\sec\theta)(\csc\theta)-\cot\theta$ Here are the steps I took to solve this. I worked on the right side, making it equal the left: 1) change all sec/csc/cot to their respective forms ( $1/\sin\theta$ and so forth), and when I get to this point I'm lost: $\frac{\sin\theta +1 - \cos^2\theta} {\cos\theta \sin\theta}$ After this point I'm lost, please help. I apologize if I have asked too many questions but I am trying to study for my final. Thanks! 2. ## Re: Trig question pre calc! Hey Gurp925. Is it meant to be cos(theta) / [1 - sin(theta)] or cos(theta) - sin(theta)? 3. ## Re: Trig question pre calc! $\frac{\sin\theta+1-\cos^2\theta}{\cos\theta \sin\theta}$ $=\frac{\sin\theta+\sin^2\theta}{\cos\theta \sin\theta}$ $=\frac{\sin\theta(1+\sin\theta)}{\cos\theta \sin\theta}$ $=\frac{1+\sin\theta}{\cos\theta}$ $=\frac{(1+\sin\theta)(1-\sin\theta)}{\cos\theta(1-\sin\theta)}$ $=\frac{1-\sin^2\theta}{\cos\theta(1-\sin\theta)}$ $=\frac{\cos^2\theta}{\cos\theta(1-\sin\theta)}$ ... Hope this helps 4. ## Re: Trig question pre calc! Hey Acc100jt, I almost understood the question, just wondering how you multiplied the denominator by (1-sin), thus multiplying the top? If I understand that then the question is solved. Thanks everyone for their help. 5. ## Re: Trig question pre calc! Originally Posted by Gurp925 Hey Acc100jt, I almost understood the question, just wondering how you multiplied the denominator by (1-sin), thus multiplying the top? If I understand that then the question is solved. Thanks everyone for their help. Multiplying top and bottom by the bottom's conjugate.
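A quick machine check of the identity (added here; not part of the original thread), using sympy. Both sides should reduce to $(1+\sin\theta)/\cos\theta$, which is exactly the intermediate line in post #3.

```python
import sympy as sp

t = sp.symbols('theta')
lhs = sp.cos(t) / (1 - sp.sin(t))
rhs = sp.sec(t) + sp.sec(t) * sp.csc(t) - sp.cot(t)

# simplify(lhs - rhs) should reduce to 0, confirming the identity
print(sp.simplify(lhs - rhs))
# and the right-hand side agrees with the intermediate form (1 + sin)/cos
print(sp.simplify(rhs - (1 + sp.sin(t)) / sp.cos(t)))
```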
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.985832154750824, "perplexity": 2224.725891070796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900397.29/warc/CC-MAIN-20141030025820-00169-ip-10-16-133-185.ec2.internal.warc.gz"}
http://ronaldocoisanossa.com.br/american-pie-wltq/symbolab-definite-integral-406534
# Symbolab definite integral

Advanced Math Solutions – Integral Calculator, common functions. The Integral Calculator supports definite and indefinite integrals (antiderivatives) as well as integration of functions with many variables. Partial fraction decomposition is the opposite of adding fractions: a rational expression is broken into simpler pieces before integrating. Polynomial long division, covered in the High School Math Solutions series, works much like numerical long division and is often the first step when the degree of the numerator is too high.

A definite integral of a function f(x) on an interval [a, b] is the limit of integral sums (Riemann sums) as the diameter of the partitioning tends to zero, provided this limit exists independently of the partition and of the choice of points inside the elementary segments. Geometrically, it is the signed area in the XY-plane bounded by the graph of f between x = a and x = b. Definite and indefinite integrals are tied together by the Fundamental Theorem of Calculus: if F is a continuous antiderivative of f on [a, b], then

$$\int_a^b f(x)\,dx = F(b) - F(a) = \lim_{x\to b^-}F(x) - \lim_{x\to a^+}F(x).$$

For an odd function the symmetric integral vanishes: if $$f(x) = -f(-x)$$, then $$\int_{-a}^{a} f(x)\,dx = 0.$$

Typical worked examples include a partial-fractions integral and a substitution integral:

$$\int_{0}^{1} \frac{32}{x^{2}-64}\,dx, \qquad \int \frac{e^{x}}{e^{x}+e^{-x}}\,dx,\quad u = e^{x}.$$

U-substitution in definite integrals works just like substitution in indefinite integrals, except that, since the variable is changed, the limits of integration must be changed as well; with $$u = \sin x$$, for instance, the limits are rewritten in terms of u. Alternatively, one can keep the original limits and back-substitute for the original variable at the end. Integration by parts, which is essentially the reverse of the product rule, transforms the integral of a product of functions into an integral that is easier to compute. Improper integrals, in which a bound is infinite or the integrand is singular, are evaluated as limits of ordinary definite integrals.
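As a concrete illustration of the two worked examples above, here is a minimal SymPy sketch of our own (it is not part of the Symbolab calculator); the exact printed form of the results may differ between SymPy versions, so the comments give the values rather than the literal output.

```python
# Minimal SymPy sketch evaluating the two examples quoted above.
import sympy as sp

x = sp.symbols('x')

# Partial fractions: integral from 0 to 1 of 32/(x^2 - 64) dx
f = 32 / (x**2 - 64)
print(sp.apart(f, x))               # 2/(x - 8) - 2/(x + 8)
print(sp.integrate(f, (x, 0, 1)))   # value 2*log(7/9), approximately -0.5025

# Substitution u = e^x: integral of e^x/(e^x + e^-x) dx
g = sp.exp(x) / (sp.exp(x) + sp.exp(-x))
print(sp.integrate(g, x))           # an antiderivative, equivalent to log(exp(2*x) + 1)/2
```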
Two basic sign properties follow directly from the definition. The definite integral of a non-negative function is always greater than or equal to zero,

$$\int\limits_a^b f(x)\,dx \ge 0 \quad \text{if } f(x) \ge 0 \text{ in } [a,b],$$

and, likewise, the definite integral of a non-positive function is always less than or equal to zero.

Each part of the symbol $$\int_a^b f(x)\,dx$$ makes sense. The integral sign can be read as a summation: "add up an infinite number of infinitely skinny rectangles, from x = a to x = b, with height f(x) times width dx." The differential dx indicates that x is the variable of integration, and a and b are the start and end values. The first part of the Fundamental Theorem concerns the function defined by

$$F(x) = \int_a^x f(t)\,dt$$

(note the upper limit x, with the integration carried out with respect to the dummy variable t): if f is continuous, then F is an antiderivative of f.

When a closed-form antiderivative is not available, or when only a table of values is known, the integral can be approximated numerically; the trapezoidal rule is the standard first choice. When evaluating definite integrals for practice, a calculator may be used to inspect the answers. A typical practice problem is to evaluate the definite integral from -1 to -2 of (16 - x^3)/x^3 dx; it only requires rewriting the integrand as 16x^(-3) - 1 before applying the power rule. A few common antiderivatives come up constantly:

$$\int \frac{dx}{x} = \ln|x| + C, \qquad \int \sin x\,dx = -\cos x + C, \qquad \int \cos x\,dx = \sin x + C.$$
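Below is a short numerical sketch of the trapezoidal rule mentioned above, applied to the practice integral from -1 to -2. This is our own illustration, not Symbolab's implementation; the function name `trapezoid` and the choice of 1000 subintervals are ours, and the exact value follows from the antiderivative F(x) = -8/x^2 - x.

```python
# Composite trapezoidal rule, checked against the exact antiderivative
# F(x) = -8/x^2 - x of f(x) = (16 - x^3)/x^3.
import numpy as np

def trapezoid(f, a, b, n=1000):
    """Approximate the integral of f from a to b with n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

f = lambda x: (16.0 - x**3) / x**3
F = lambda x: -8.0 / x**2 - x

approx = trapezoid(f, -1.0, -2.0)   # integral "from -1 to -2", as in the example
exact = F(-2.0) - F(-1.0)           # = 0 - (-7) = 7
print(approx, exact)                # both close to 7.0
```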
# Tell whether the function represents exponential growth or decay: z(x) = 47(0.55)^x

Question (exponential growth and decay): Tell whether the function represents exponential growth or decay.

$$z(x) = 47(0.55)^{x}$$

Answer (2021-01-07): The given function $$z(x)=47(0.55)^{x}$$ is of the form $$y=a(b)^{x}$$ with a = 47 and b = 0.55. Since 0 < b < 1, the function represents exponential decay. (For b > 1 it would represent exponential growth; the percent rate of change is |b - 1| times 100%, here a 45% decay per unit step.)

### Relevant Questions

- Determine whether each function represents exponential growth or exponential decay, and identify the percent rate of change: $$g(t)=2\left(\tfrac{5}{4}\right)^{t}$$
- Tell whether the function represents exponential growth or exponential decay, then graph it: $$f(x)=(1.5)^{x}$$
- Write an exponential growth or decay function to model the situation, then find its value after the given amount of time: a new car is worth \$25,000 and its value decreases by 15% each year; 6 years.
- Determine whether the equation represents exponential growth or exponential decay, find the rate of increase or decrease, and graph it: $$y=5^{x}$$
- Tell whether the function represents exponential growth or decay: $$k(x)=22(0.15)^{x}$$
- Tell whether the function represents exponential growth or decay: $$y=80(0.25)^{x}$$
# Improved Analysis of J/ψ Decays into a Vector Meson and Two Pseudoscalars

Timo A. Lähde, Helmholtz-Institut für Strahlen- und Kernphysik (HISKP), Bonn University, Nußallee 14-16, D-53115 Bonn, Germany

Ulf-G. Meißner, Helmholtz-Institut für Strahlen- und Kernphysik (HISKP), Bonn University, Nußallee 14-16, D-53115 Bonn, Germany, and Forschungszentrum Jülich, Institut für Kernphysik (Th), D-52425 Jülich, Germany

###### Abstract

Recently, the BES collaboration has published an extensive partial wave analysis of experimental data on J/ψ → φπ⁺π⁻, φK⁺K⁻ and ωπ⁺π⁻. These new results are analyzed here, with full account of detection efficiencies, in the framework of a chiral unitary description with coupled-channel final state interactions between ππ and K̄K pairs. The emission of a dimeson pair is described in terms of the strange and nonstrange scalar form factors of the pion and the kaon, which include the final state interaction and are constrained by unitarity and by matching to the next-to-leading-order chiral expressions. This procedure allows for a calculation of the S-wave component of the dimeson spectrum including the f₀(980) resonance, and for an estimation of the low-energy constants of Chiral Perturbation Theory, in particular the large-N_c suppressed constants L_4^r and L_6^r. The decays in question are also sensitive to physics associated with OZI violation in the scalar channel. It is found that the S-wave contributions to J/ψ → φπ⁺π⁻, φK⁺K⁻ and ωπ⁺π⁻ given by the BES partial-wave analysis may be very well fitted up to a dimeson center-of-mass energy of  GeV, for a large and positive value of one of these low-energy constants and a value of the other compatible with zero. An accurate determination of the amount of OZI violation in these decays is achieved, and the S-wave contribution to J/ψ → ωK̄K near threshold is predicted.

J/ψ decays; unitarity; chiral perturbation theory; OZI violation

###### pacs: 13.20.Gd, 12.39.Fe

preprint: HISKP-TH-06/16

## I Introduction

The decays of the J/ψ into a vector meson such as the ω or φ, via emission of a pair of light pseudoscalar mesons, may yield insight into the dynamics of the pseudo-Goldstone bosons of QCD Au ; Isgur1 ; Speth , and in particular into the final state interaction (FSI) between ππ and K̄K pairs, which is an essential component in a realistic description of the scalar form factors (FFs) of pions and kaons. Additionally, such decays can yield insight into violation of the Okubo-Zweig-Iizuka (OZI) rule OZI ; NC ; Isg in the scalar sector of QCD, since the leading order contributions to such decays are OZI suppressed. Furthermore, as shown in Fig. 1, a doubly OZI-violating component may contribute to these decays, which was demonstrated already in Refs. UGM1 ; Roca , although the data from the DM2 DM2 , MARK-III MK3 and BES BES0 collaborations available at that time had rather low statistics. However, since then the BES collaboration has published far superior data on φπ⁺π⁻ and φK⁺K⁻ BES1 , as well as for ωπ⁺π⁻ and ωK⁺K⁻ BES2 ; BES3 . Additionally, a comprehensive partial-wave analysis (PWA) has been performed for those data, which is particularly significant since an explicit determination of the S-wave ππ and K̄K event distributions is thus available. Thus, a much more precise analysis of the issues first touched upon in Ref. UGM1 is clearly called for. A key ingredient in such an analysis is a realistic treatment of the final state interaction (FSI). It has been demonstrated in Ref. OO1 that the FSI in the coupled ππ-K̄K system can be well described by a coupled-channel Bethe-Salpeter approach using the lowest order CHPT amplitudes for meson-meson scattering Wein1 ; GL1 ; GL2 .
In such an approach, the lowest resonances in the sector are of dynamical origin, i.e. they arise due to the strong rescattering effects in the or system. Such dynamically generated states include the and mesons, which are prominent in the BES data on dimeson emission from the  BES1 ; BES2 ; BES3 . It is useful, in view of the controversial nature of the  Au ; Isgur1 ; Speth ; UGM2 ; Tqvist ; Rijken ; Jaffe , to recall Ref. OO2 , which generalizes the work of Ref. OO1 . There, explicit -wave resonance exchanges were included together with the lowest order CHPT contributions in a study of the partial wave amplitudes for the whole scalar sector with and . It was noted that the results of Ref. OO1 could be recovered when the explicit tree-level resonance contributions were dropped. The conclusion of Ref. OO2 was that the lowest nonet of scalar resonances, which includes the and states, is of dynamical origin, while a preexisting octet of scalar resonances is present at  GeV. It was also noted that the physical state obtains a strong contribution of dynamical origin, and may also receive one from a preexisting singlet state. This analysis uses the formalism introduced in Ref. UGM1 , where the expressions for the scalar FFs of the pions and kaons were obtained using the results of Ref. OO1 . This allows for a description of the scalar FFs which takes into account the FSI between pions and kaons up to  GeV. At higher energies, a number of preexisting scalar resonances such as the have to be accounted for, as well as the effects of multiparticle intermediate states, most importantly the state. These scalar FFs may then be constrained by matching to the next-to-leading-order (NLO) chiral expressions. This allows for a fit of the large suppressed Low-Energy Constants (LECs) and of CHPT to the dimeson spectra of the and decays, using the Lagrangian model of Ref. UGM1 . The amount of direct OZI violation present in these decays may also be accurately estimated. Finally, it should be noted that the present treatment of the FSI has been proven successful in describing, in the same spirit, a wide variety of processes, such as the photon fusion reactions and  phot , the decays and  gammadec , and more recently the hadronic decays and  Ddec . This paper is organized in the following manner: In Section II the description of the decays is briefly reviewed, along with the FSI in the system, as applied to the scalar FFs of the pseudo-Goldstone bosons. Some minor corrections to the NLO CHPT expressions in Ref. UGM1 are also pointed out. Section III describes the analysis of the experimental BES data, along with a discussion of the fitted parameter values, with emphasis on the LECs of CHPT and the evidence for OZI violation. In Section IV, the results are summarized along with a concluding discussion. ## Ii Theoretical Framework The theoretical tools required for the calculation of the scalar FFs of the pseudo-Goldstone bosons using CHPT and unitarity constraints have, as discussed in the Introduction, already been extensively treated in the existing literature, and therefore only the parts directly relevant to the present analysis are repeated here. For convenience, the NLO expressions for the scalar FFs of the pseudo-Goldstone bosons are explicitly given here. Also, the expressions for the scalar FFs of the pion in Ref. UGM1 require a minor correction111The scalar FFs of the pion in Ref. UGM1 require a minor correction in the values at . The authors thank J. Bijnens and J.A. 
Oller for their assistance in pinpointing this. The effect on the numerics of Ref. UGM1 is negligible., such that an updated version is called for. A derivation of the scalar FFs using unitarity and the methods of Ref. OO1 is given in Ref. UGM1 , and introductions to CHPT can e.g. be found in Ref. ChPT . ### ii.1 Amplitude for ππ and K¯K emission This work makes use of the SU(3) and Lorentz invariant Lagrangian of Ref. UGM1 to describe the decay of a into a pair of pseudoscalar mesons and a light vector meson. This Lagrangian can be written as L = gΨμ(⟨Vμ8Σ8⟩+νVμ1Σ1), (1) where the and denote the lowest octet and singlet of vector meson resonances. Similarly, the and refer to the corresponding sets of scalar sources, as defined in Ref. UGM1 . In the above equation, denotes a coupling constant, the precise value of which is not required in the present analysis, while the real parameter will be shown to play the role of an OZI violation parameter in the channel. Furthermore, the angled brackets in Eq. (1) denote the trace with respect to the SU(3) indices of the matrices and . Evaluation of that trace yields L = gΨμ(Vμ8S8+νVμ1Σ1+⋯), (2) where only the terms relevant for the present considerations have been written out. Here denotes the state of the octet of vector meson resonances, while again refers to the corresponding operator in the matrix of scalar sources. The and fields, along with the associated scalar sources and are then introduced according to V8=ω√3−√23ϕ,S8=Sω√3−√23Sϕ, (3) V1=√23ω+ϕ√3,Σ1=√23Sω+Sϕ√3, (4) which corresponds to the ansatz of ideal mixing between and . The departure from this situation is reviewed, using different models, in Ref. phimix . The amount of deviation from ideal mixing in the system has been estimated UGM1 to influence, in an analysis of the present kind, the determination of the magnitude of the OZI violation at the level. This should be compared with the expected departure from unity UGM1 of the parameter in Eq. (2). In view of this, the relations given in Eqs. (3) and (4) will be considered adequate for the present analysis. The scalar sources and may, in terms of a quark model description, be expressed as and . By means of these relations and Eqs. (3) and (4), the Lagrangian of Eq. (2) may be written in the form L = ΨμϕμCϕ(ν)[¯ss+λϕ(ν)¯nn] (5) + ΨμωμCω(ν)[¯ss+λω(ν)¯nn], where the coupling constant is taken, as further elaborated in Sect. II.2, to be absorbed into and . These and the in Eq. (5) are given in terms of the parameter according to λϕ=√2(ν−1)2+ν,Cϕ=2+ν3, (6) λω=1+2ν√2(ν−1),Cω=√2(ν−1)3, (7) which shows that the parameters for the decay operator can be expressed in terms of those which control the decay. The explicit relations are given by Cω=λϕCϕ,λω=λϕ+√2√2λϕ. (8) From now on, the dependence of the and on will be suppressed. The quantities to be determined from fits to the experimental dimeson spectra of Refs. BES1 ; BES2 ; BES3 are taken to be and . It is worth noting that the limit corresponds to the value , and in that case the dimeson spectra for and decays are driven entirely by the strange scalar source and the nonstrange scalar source , respectively. From the Lagrangian in Eq. (5), the matrix elements for and decay of the are given by Mππϕ =√23Cϕ⟨0|(¯ss+λϕ¯nn)|ππ⟩∗I=0, (9) MKKϕ =√12Cϕ⟨0|(¯ss+λϕ¯nn)|K¯K⟩∗I=0, (10) in terms of the and states with , which are related to the physical and states by the Clebsch-Gordan (CG) coefficients in front of the above expressions. It should be noted that in Ref. 
UGM1 , the CG-coefficient for decay was incorrectly written as . The full transition amplitudes also contain the polarization vectors of the and mesons, which have not been included in the above definitions. They introduce an additional, weakly energy dependent factor which is given explicitly in Sect. II.2. The matrix elements for and , may be obtained by replacement of the labels in Eqs. (9) and (10) according to . The matrix elements of the scalar sources are given in terms of the scalar FFs which are discussed in Sect. II.3. ### ii.2 Decay rates and dimeson event distributions The differential decay rate of a into a vector meson and a pair of pseudoscalar mesons is, for the case of decay, given by dΓdWππ=Wππ|→pϕ||→pπ|4M3J/ψ(2π)3Fpol|Mππϕ|2, (11) where . The decay rates for the other combinations of and mesons and , final states can be obtained by appropriate replacement of the indices in Eq. (11). The moduli of the and momenta in Eq. (11) are given by |→pϕ| = √E2ϕ−m2ϕ,Eϕ=M2J/ψ−W2ππ−m2ϕ2Wππ, (12) |→pπ| = √E2π−m2π,Eπ=Wππ/2, (13) in the rest frame of the system. The factor in Eq. (11) depends on the dipion energy and is generated by properly averaging and summing over the polarizations of the and mesons, respectively. It may be expressed as Fpol ≡ 13∑ρ,ρ′εμ(ρ)εμ(ρ′)ε∗ν(ρ)εν∗(ρ′) (14) = 23⎡⎢ ⎢ ⎢⎣1+(M2J/ψ+M2ϕ−W2ππ)28M2J/ψM2ϕ⎤⎥ ⎥ ⎥⎦. Again, it should be noted that the corresponding expressions for for the other decay channels considered can be obtained straightforwardly from Eq. (14) by the substitutions and . The results for decay into a vector meson and a dimeson pair published by the BES collaboration are given in terms of event distributions as a function of the dimeson center-of-mass energy . The relation between the differential event distribution and the decay rate given by Eq. (11) is defined to be dNdWππ ∼ η(Wππ)dΓdWππ, (15) where the function represents the detection efficiency, shown in Fig. 3, which also takes into account the effects of the various cuts imposed on the data in order to reduce the background to an acceptable level. It should be noted that the detection efficiencies cannot be neglected, since it is evident from Fig. 3 that a sizeable difference exists between the efficiencies for all four decays considered. Furthermore, the detection efficiencies exhibit significant nonlinear behavior. The overall constant of proportionality in Eq. (15) is not relevant to the present analysis, since it may be absorbed in the definitions of and , along with the coupling constant of Eq. (2) and a factor from the scalar FFs in Eq. (19). Thus, the quantities with and the proportionality factor absorbed are denoted by and . While and are dimensionless, has dimension []. ### ii.3 Scalar Form Factors from CHPT The nonstrange and strange scalar FFs of the pseudo-Goldstone bosons of CHPT are defined in terms of the -wave states with , |ππ⟩I=0 =1√3∣∣π+π−⟩+1√6∣∣π0π0⟩, (16) |K¯K⟩I=0 =1√2∣∣K+K−⟩+1√2∣∣K0¯K0⟩, (17) |ηη⟩I=0 =1√2|ηη⟩, (18) where denotes the symmetrized combination of and . Following the conventions of Refs. Pel1 ; Pel2 ; Pel3 , an extra factor of has been included for the states composed of members of the same isospin multiplet. This takes conveniently into account the fact that the pions behave as identical particles in the isospin basis. In terms of the above states, the scalar FFs for the , and mesons are defined as √2B0Γn1(s) = ⟨0|¯nn|ππ⟩I=0, (19) √2B0Γn2(s) = ⟨0|¯nn|K¯K⟩I=0, √2B0Γn3(s) = ⟨0|¯nn|ηη⟩I=0, where the notation ( = 1, = 2, = 3) has been introduced for simplicity. 
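The phase-space and polarization factors of Eqs. (11)-(14) in Sect. II.2 above can be evaluated numerically. The following is a small sketch of our own, not code from the paper: the masses are approximate PDG values inserted by us, and the matrix element and the overall normalization are deliberately left out, so only the purely kinematic weight multiplying |M|² is shown.

```python
# Kinematic weight W |p_V| |p_pi| F_pol / (4 M^3 (2 pi)^3) for J/psi -> V (pi pi),
# with all momenta and energies evaluated in the dimeson rest frame (Eqs. 11-14).
import math

M_JPSI, M_PHI, M_PI = 3.097, 1.0195, 0.13957   # GeV, approximate PDG values

def kinematic_weight(w, m_v=M_PHI, m_p=M_PI, m_jpsi=M_JPSI):
    """Phase-space factor as a function of the dimeson energy w (GeV)."""
    if not (2 * m_p < w < m_jpsi - m_v):
        return 0.0
    e_v = (m_jpsi**2 - w**2 - m_v**2) / (2.0 * w)        # Eq. (12)
    p_v = math.sqrt(e_v**2 - m_v**2)
    e_p = 0.5 * w                                        # Eq. (13)
    p_p = math.sqrt(e_p**2 - m_p**2)
    f_pol = (2.0 / 3.0) * (1.0 + (m_jpsi**2 + m_v**2 - w**2)**2
                           / (8.0 * m_jpsi**2 * m_v**2))  # Eq. (14)
    return w * p_v * p_p * f_pol / (4.0 * m_jpsi**3 * (2.0 * math.pi)**3)

for w in (0.4, 0.7, 0.98, 1.2):
    print(w, kinematic_weight(w))
```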
The expressions for the strange scalar FFs may be obtained by the substitutions and . As discussed above, the expressions given in Ref. UGM1 are updated here with minor corrections to and . With these definitions, the scalar FFs may be expressed in terms of the meson loop function  GL1 and the tadpole factor , given in Eqs. (31) and (33) respectively. The expressions so obtained up to NLO in CHPT are, in agreement with Refs. GL1 ; HB1a ; HB1b , Γn1(s)= √32[1+μπ−μη3+16m2πf2(2Lr8−Lr5)+8(2Lr6−Lr4)2m2K+3m2πf2+8sf2Lr4+4sf2Lr5 (20) +(2s−m2π2f2)Jrππ(s)+s4f2JrKK(s)+m2π18f2Jrηη(s)], Γs1(s)= √32[16m2πf2(2Lr6−Lr4)+8sf2Lr4+s2f2JrKK(s)+29m2πf2Jrηη(s)], (21) for the pion, and Γn2(s)= 1√2[1+8Lr4f2(2s−m2π−6m2K)+4Lr5f2(s−4m2K)+16Lr6f2(6m2K+m2π)+32Lr8f2m2K+23μη (23) +(9s−8m2K36f2)Jrηη(s)+3s4f2JrKK(s)+3s4f2Jrππ(s)], Γs2(s)= 1+8Lr4f2(s−m2π−4m2K)+4Lr5f2(s−4m2K)+16Lr6f2(4m2K+m2π)+32Lr8f2m2K+23μη +(9s−8m2K18f2)Jrηη(s)+3s4f2JrKK(s), for the kaon. Finally, for the one finds Γn3(s)= 12√3[1+24Lr4f2(s+13m2π−103m2K)+4Lr5f2(s+43m2π−163m2K)+16Lr6f2(10m2K−m2π) (25) +128Lr7f2(m2π−m2K)+32Lr8f2m2π−μη3+4μK−3μπ+(16m2K−7m2π18f2)Jrηη(s) +(9s−8m2K4f2)JrKK(s)+32m2πf2Jrππ(s)], Γs3(s)= 23[1+6Lr4f2(s−23m2π−163m2K)+4Lr5f2(s+43m2π−163m2K)+8Lr6f2(8m2K+m2π) +(9s−8m2K8f2)JrKK(s)]. ### ii.4 Matching of FSI to NLO CHPT The constraints imposed by unitarity on the pion and kaon scalar FFs, the inclusion of the FSI via resummation in terms of the Bethe-Salpeter (BS) equation, the channel coupling between the and systems, and the matching of the scalar FFs to the NLO CHPT expressions have all been elaborated in great detail in Refs. UGM1 ; OO1 , and will thus be only briefly touched upon here. Within that framework, consideration of the unitarity constraints yields a scalar FF in terms of the algebraic coupled-channel equation Γ(s) = [I+K(s)g(s)]−1R(s) = [I−K(s)g(s)]R(s)+O(p6), where in the second line, the equation has been expanded up to NLO, the NNLO contribution defined as being of in the chiral expansion. This expansion is instructive since it allows for the integrals from the NLO scalar FF expressions to be absorbed into the above equation. Here denotes the kernel of -wave projected meson-meson scattering amplitudes from the leading order chiral Lagrangian. Using the notation defined in Sect. II.3, they are given by K11=2s−m2π2f2,K12=K21=√3s4f2, (27) K22=3s4f2, where the constant is taken to equal the pion decay constant , with the convention  GeV. The components given above are sufficient for the two-channel formalism of Ref. UGM1 used in this paper, where only the FSI in the and channels is considered. The chiral logarithms associated with the channel can thus not be reproduced by Eq. (II.4) and are therefore removed from the chiral expressions given in Sect. II.3, while the contribution of that channel to the values of the form factors at is retained. For completeness, it should be noted that if the channel is also included, then the matrix should be augmented by the elements K13=K31=m2π2√3f2,K33=16m2K−7m2π18f2, K23=K32=9s−8m2K12f2. (28) The elements of the diagonal matrix in Eq. (II.4) are given by the cutoff-regularized loop integral gi(s) = 116π2⎧⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪⎩σi(s)log⎛⎜ ⎜ ⎜ ⎜⎝σi(s)√1+m2iq2max+1σi(s)√1+m2iq2max−1⎞⎟ ⎟ ⎟ ⎟⎠ (29) − 2log⎡⎢⎣qmaxmi⎛⎜⎝1+ ⎷1+m2iq2max⎞⎟⎠⎤⎥⎦⎫⎪⎬⎪⎭, where σi(s)=√1−4m2is, (30) and denotes a three-momentum cutoff, which has to be treated as an a priori unknown model parameter, which is however expected to be of the order of  GeV. 
Since the above expressions are calculated in a cutoff-regularization scheme, it is useful to note that within the modified subtraction scheme commonly employed in CHPT, the meson loop function is given by Jrii(s) ≡ 116π2[1−log(m2iμ2)−σi(s)log(σi(s)+1σi(s)−1)] (31) = −gi(s), for which it has been shown in App. 2 of Ref. Pel1 that an optimal matching between the two renormalization schemes requires that μ=2qmax√e, (32) in which case the differences between the two forms are of . Furthermore, the expressions for the logarithms generated by the chiral tadpoles in the NLO scalar FFs are given by μi=m2i32π2f2log(m2iμ2). (33) As demonstrated in Ref. UGM1 , the quantity in Eq. (II.4) is a vector of functions free of any cuts or singularities, since the right-hand or unitarity cut has been removed by construction. The information provided by CHPT can then be built into the formalism by fixing to the NLO CHPT expressions for the scalar FFs. Consideration of Eq. (II.4) yields the defining relations Γni(s)=Rni(s)−2∑j=1Kij(s)gj(s)Rnj(s)+O(p6), (34) where it is understood that only contributions up to are to be retained in the product . The analogous expressions for the vectors associated with the strange scalar FFs can be obtained from the above relations by the substitutions and . The above procedure is equivalent to the intuitive result obtained by dropping, in the expressions for the , all occurrences of the loop integrals and , and keeping only the parts of the which do not depend on . Nevertheless, the explicit evaluation of Eq. (34) provides a useful check on the consistency of the normalization used for the NLO scalar FFs and the LO meson-meson interaction kernel . It should also be noted that the expressions for and correspond to the corrected scalar FFs, as explained in the beginning of Sect. II. The expressions for the so obtained are Rn1(s)= √32{1+μπ−μη3+16m2πf2(2Lr8−Lr5)+8(2Lr6−Lr4)2m2K+3m2πf2+8sf2Lr4+4sf2Lr5 (35) −m2π288π2f2[1+log(m2ημ2)]}, Rs1(s)= √32{16m2πf2(2Lr6−Lr4)+8sf2Lr4−m2π72π2f2[1+log(m2ημ2)]}, (36) for the pion, and for the kaon one finds Rn2(s)= 1√2{1+8Lr4f2(2s−6m2K−m2π)+4Lr5f2(s−4m2K)+16Lr6f2(6m2K+m2π)+32Lr8f2m2K+23μη (38) +m2K72π2f2[1+log(m2ημ2)]}, Rs2(s)= 1+8Lr4f2(s−4m2K−m2π)+4Lr5f2(s−4m2K)+16Lr6f2(4m2K+m2π)+32Lr8f2m2K+23μη +m2K36π2f2[1+log(m2ημ2)]. The expressions for the and are valid when only the and channels are considered in the FSI. On the other hand, if the full three-channel interaction kernel of Eq. (27) is used, then the above equations should be modified such that the terms in square brackets are dropped. With respect to the omission of the channel, it was noted in Ref. OO2 that reproduction of the data on the inelastic cross section requires the addition of a preexisting contribution to the if the channel is included. On the other hand, no such contribution was found to be necessary if the channel is dropped. Furthermore, the effect of this channel is known OO2 to be very small for energies less than  GeV. It should be stressed here, with respect to the above mentioned issues, that the main concern in the present analysis is the use of a meson-meson interaction kernel which is known to give a realistic description of the phase shift close to the threshold. Since none of the adjustable model parameters have any influence on the behavior of the phase shift, any model which has the proper chiral behavior for low energies and faithfully reproduces the should therefore give similar results. 
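To make the structure of the unitarization in Eqs. (26), (27) and (34) concrete, here is a small numerical sketch of our own. The loop-function values g and the vector R below are placeholders chosen only for illustration, not the cutoff-regularized integrals of Eq. (29) or the polynomial parts of Eqs. (35)-(38) used in the actual fits; only the matrix algebra is meant to be faithful.

```python
# Coupled-channel algebra: Gamma(s) = [1 + K(s) g(s)]^{-1} R(s), and its
# one-loop expansion Gamma ~ (1 - K g) R, for the pi pi / K Kbar system.
import numpy as np

F_PI, M_PI, M_K = 0.0924, 0.13957, 0.4937   # GeV (our choice of numerical values)

def kernel(s):
    """Lowest-order S-wave, I=0 meson-meson kernel of Eq. (27)."""
    f2 = F_PI**2
    return np.array([[(2*s - M_PI**2) / (2*f2), np.sqrt(3)*s / (4*f2)],
                     [np.sqrt(3)*s / (4*f2),    3*s / (4*f2)]])

def unitarized(s, g, R):
    """Gamma(s) = [1 + K(s) g(s)]^{-1} R(s), Eq. (26)."""
    K = kernel(s)
    return np.linalg.solve(np.eye(2) + K @ np.diag(g), R)

# Placeholder inputs, for illustration only:
s = 0.6**2                                   # (0.6 GeV)^2
g = np.array([-0.02 - 0.03j, -0.025 + 0j])   # stand-in for the loop functions of Eq. (29)
R = np.array([np.sqrt(1.5), 0.1])            # stand-in for the polynomial parts R(s)

full = unitarized(s, g, R)
nlo = (np.eye(2) - kernel(s) @ np.diag(g)) @ R   # one-loop expansion, Eq. (34)
print(full, nlo)
```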
In view of this, the inclusion or omission of the channel, or the question of a preexisting contribution to the resonance, are issues of secondary importance. Nevertheless, the uncertainties introduced by these issues into the determination of the are discussed in Sect. III. Finally, it should be noted that in order to minimize the dependence on , the Gell-Mann - Okubo (GO) relation has been applied throughout in the polynomial terms of the and the . ## Iii Fits to BES Data The event distributions given by Eq. (15) can be simultaneously fitted to the dimeson spectra in the , and channels. The parameters to be determined via a fit are the LECs and which influence the scalar FFs, as well as the model parameters and . Due to the accuracy of the BES data BES1 ; BES2 ; BES3 , all of the model parameters can be well constrained, which is especially true for and , while the sensitivity of the fit to and turns out to be somewhat less. Once all the model parameters are determined by a fit to the three decay channels mentioned above, the event distribution in the remaining channel in essentially fixed. This channel is thus not included in the fit and is treated instead as a prediction. To a large extent, this also turns out to be true for the shape of the fitted distribution. In spite of the above mentioned positive issues, the fitting of the predicted event distribution to the -wave contribution from the PWA of the BES collaboration is complicated by several issues: Firstly, the detection efficiencies, determined by BES via Monte Carlo simulation and shown in Fig. 3, are different for each decay channel, and furthermore vary appreciably over the range of dimeson energies considered. Secondly, the -wave contribution in the BES PWA cannot be considered as strict experimental information, since it is inevitably biased to some extent by the parameterizations chosen there for the pole. Thirdly, an unambiguous fit requires a highly precise description of the resonance generated by the FSI, which to a large extent cannot be adjusted in the present model. All of these issues, as well as the fitted parameter values and the associated error analysis are elaborated in detail below. ### iii.1 Definitions In order for the fit results to be well reproducible, the various constant parameters which enter the expressions for the decay amplitudes should be accurately defined. These parameters include the masses of the light pseudoscalar mesons, for which the current experimental values PDG have been used. These are  GeV,  GeV and  GeV, while the value  GeV has been adopted for the pion decay constant. The physical masses of the charged pions and kaons have been used in order for the and thresholds to coincide with the physical ones. Also, the physical meson mass has been used rather than the one given by the GO relation. The meson mass appears in relatively few places in the expressions, and checks on the fits have indicated that replacement of the physical mass with that given by the GO relation has a minimal effect. Further parameters are the masses of the vector mesons and . The mass is used in the evolution of the , and has been taken as  GeV. The other vector meson masses appear in various phase-space factors, and have been given the values  GeV,  GeV and  GeV PDG . It is not a priori obvious how the individual deviations, required for the fit, are to be treated for the BES data. The individual deviations for each bin , are given by the BES collaboration as the square root of the number of events . 
These numbers represent the statistical errors of the raw data, uncorrected for detection efficiency and background. Furthermore, what is fitted in the present analysis is not the total signal detected by the BES experiment, but rather the -wave contribution from the accompanying PWA. In view of these considerations, the deviations for each bin used in the fitting procedure have been taken to be of the form Δi≡√Niwi, (39) where the represent weighting factors which have been chosen in a physically motivated way. In principle, individual could be introduced in all the decay channels studied and would then represent a “quality factor” for each bin. In practice, to avoid excessive fine-tuning of the fit, a constant value for has been applied for each decay channel, according to the following principles: Since the -wave contribution in the spectrum is likely to have the largest uncertainty, a value of has been adopted for that decay channel, whereas the values of for the
# High Energy Scattering In Quantum Chromodynamics

Lectures given at the Xth Hadron Physics Workshop, March 2007, Florianopolis, Brazil.

FRANCOIS GELIS, Theory Division, PH-TH, Case C01600, CERN, CH-1211 Geneva 23, Switzerland

TUOMAS LAPPI and RAJU VENUGOPALAN, Brookhaven National Laboratory, Physics Department, Upton, NY-11973, USA

###### Abstract

In this series of three lectures, we discuss several aspects of high energy scattering among hadrons in Quantum Chromodynamics. The first lecture is devoted to a description of the parton model, Bjorken scaling and the scaling violations due to the evolution of parton distributions with the transverse resolution scale. The second lecture describes parton evolution at small momentum fraction x, the phenomenon of gluon saturation and the Color Glass Condensate (CGC). In the third lecture, we present the application of the CGC to the study of high energy hadronic collisions, with emphasis on nucleus-nucleus collisions. In particular, we provide the outline of a proof of high energy factorization for inclusive gluon production.

Preprint CERN-PH-TH/2007-131

## 1 Introduction

Quantum Chromodynamics (QCD) is very successful at describing hadronic scatterings involving very large momentum transfers. A crucial element in these successes is the asymptotic freedom of QCD [1], which renders the coupling weaker as the momentum transfer scale increases, thereby making perturbation theory more and more accurate. The other important property of QCD when comparing key theoretical predictions to experimental measurements is the factorization of the short distance physics, which can be computed reliably in perturbation theory, from the long distance strong coupling physics related to confinement. The latter is organized into non-perturbative parton distributions, which depend on the scales of time and transverse space at which the hadron is resolved in the process under consideration. In fact, QCD not only enables one to compute the perturbative hard cross-section, but also predicts the scale dependence of the parton distributions.

A generic issue in the application of perturbative QCD to the study of hadronic scatterings is the occurrence of logarithmic corrections in higher orders of the perturbative expansion. These logarithms can be large enough to compensate the extra coupling constant they come accompanied with, thus voiding the naive, fixed order, application of perturbation theory. Consider for instance a generic gluon-gluon fusion process, as illustrated on the left of figure 1, producing a final state of momentum P. The two gluons have longitudinal momentum fractions given by

$$x_{1,2} = \frac{M_\perp}{\sqrt{s}}\, e^{\pm Y}, \qquad (1)$$

where M_⊥ is the transverse mass of the final state (M is its invariant mass) and Y its rapidity. On the right of figure 1 is represented a radiative correction to this process, where a gluon is emitted from one of the incoming lines. Roughly speaking, such a correction is accompanied by a factor

$$\alpha_s \int_{x_1} \frac{dz}{z} \int^{M_\perp} \frac{d^2 k_\perp}{k_\perp^2}, \qquad (2)$$

where z is the momentum fraction of the gluon before the splitting, and k_⊥ its transverse momentum. Such corrections produce logarithms, of 1/x and of M_⊥, that respectively become large when x is small or when M_⊥ is large compared to typical hadronic mass scales. These logarithms tell us that parton distributions must depend on the momentum fraction x and on a transverse resolution scale, both of which are set by the process under consideration.
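As a quick numerical illustration of Eq. (1), entirely our own and with representative numbers rather than values taken from the lectures, the momentum fractions probed by a system of transverse mass 2 GeV at RHIC energies (√s = 200 GeV) are:

```python
# Momentum fractions x_1, x_2 = (M_T / sqrt(s)) * exp(+-Y) probed in a
# 2 -> 1 gluon-gluon fusion process, Eq. (1).
import math

def x1_x2(m_t, sqrt_s, y):
    return (m_t / sqrt_s) * math.exp(+y), (m_t / sqrt_s) * math.exp(-y)

print(x1_x2(2.0, 200.0, 0.0))   # (0.01, 0.01) at central rapidity
print(x1_x2(2.0, 200.0, 3.0))   # (~0.2, ~5e-4) at forward rapidity
```

Forward production thus probes one parton at moderate x and the other at very small x, which is where the small-x logarithms discussed above become important.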
In the linear regime222We use the denomination “linear” here to distinguish it from the saturation regime discussed later that is characterized by non-linear evolution equations., there are “factorization theorems” – -factorization [2] in the first case and collinear factorization [3] in the second case -- that tell us that the logarithms are universal and can be systematically absorbed in the definition of parton distributions 333The latter is currently more rigorously established than the former.. The dependence that results from resumming the logarithms of is taken into account by the BFKL equation [4]. Similarly, the dependence on the transverse resolution scale is accounted for by the DGLAP equation [5]. The application of QCD is a lot less straightforward for scattering at very large center of mass energy, and moderate momentum transfers. This kinematics in fact dominates the bulk of the cross-section at collider energies. A striking example of this kinematics is encountered in Heavy Ion Collisions (HIC), when one attempts to calculate the multiplicity of produced particles. There, despite the very large center of mass energy444At RHIC, center of mass energies range up to  GeV/nucleon; the LHC will collide nuclei at  TeV/nucleon., typical momentum transfers are small555For instance, in a collision at  GeV between gold nuclei at RHIC, 99% of the multiplicity comes from hadrons whose is below 2 GeV., of the order of a few GeVs at most. In this kinematics, two phenomena that become dominant are • Gluon saturation : the linear evolution equations (DGLAP or BFKL) for the parton distributions implicitly assume that the parton densities in the hadron are small and that the only important processes are splittings. However, at low values of , the gluon density may become so large that gluon recombinations are an important effect. • Multiple scatterings : processes involving more than one parton from a given projectile become sizeable. It is highly non trivial that this dominant regime of hadronic interactions is amenable to a controlled perturbative treatment within QCD, and the realization of this possibility is a major theoretical advance in the last decade. The goal of these three lectures is to present the framework in which such calculations can be carried out. In the first lecture, we will review key aspects of the parton model. Our recurring example will be the Deep Inelastic Scattering (DIS) process of scattering a high energy electron at high momentum transfers off a proton. Beginning with the inclusive DIS cross-section, we will arrive at the parton model (firstly in its most naive incarnation, and then within QCD), and subsequently at the DGLAP evolution equations that control the scaling violations measured experimentally. In the second lecture, we will address the evolution of the parton model to small values of the momentum fraction and the saturation of the gluon distribution. After illustrating the tremendous simplification of high energy scattering in the eikonal limit, we will derive the BFKL equation and its non-linear extension, the BK equation. We then discuss how these evolution equations arise in the Color Glass Condensate effective theory. We conclude the lecture with a discussion of the close analogy between the energy dependence of scattering amplitudes in QCD and the temporal evolution of reaction-diffusion processes in statistical mechanics. The third lecture is devoted to the study of nucleus-nucleus collisions at high energy. 
Our main focus is the study of bulk particle production in these reactions within the CGC framework. After an exposition of the power counting rules in the saturated regime, we explain how to keep track of the infinite sets of diagrams that contribute to the inclusive gluon spectrum. Specifically, we demonstrate how these can be resummed at leading and next-to-leading order by solving classical equations of motion for the gauge fields The inclusive quark spectrum is discussed as well. We conclude the lecture with a discussion of the inclusive gluon spectrum at next-to-leading order and outline a proof of high energy factorization in this context. Understanding this factorization may hold the key to understanding early thermalization in heavy ion collisions. Some recent progress in this direction is briefly discussed. ## 2 Lecture I : Parton model, Bjorken scaling, scaling violations In this lecture, we will begin with the simple parton model and develop the conventional Operator Product Expansion (OPE) approach and the associated DGLAP evolution equations. To keep things as simple as possible, we will use Deep Inelastic Scattering to illustrate the ideas in this lecture. ### 2.1 Kinematics of DIS The basic idea of Deep Inelastic Scattering (DIS) is to use a well understood lepton probe (that does not involve strong interactions) to study a hadron. The interaction is via the exchange of a virtual photon666If the virtuality of the photon is small (in photo-production reactions for instance), the assertion that the photon is a “well known probe that does not involve strong interactions” is not valid anymore. Indeed, the photon may fluctuate, for instance, into a meson.. Variants of this reaction involve the exchange of a or boson which become increasingly important at large momentum transfers. The kinematics of DIS is characterized by a few Lorentz invariants (see figure 2 for the notations), traditionally defined as ν ≡ P⋅q s ≡ (P+k)2 M2X ≡ (P+q)2=m2N+2ν+q2, (3) where is the nucleon mass (assuming that the target is a proton) and the invariant mass of the hadronic final state. Because the exchanged photon is space-like, one usually introduces , and also . Note that since , we must have – the value being reached only in the case where the proton is scattered elastically. The simplest cross-section one can measure in a DIS experiment is the total inclusive electron+proton cross-section, where one sums over all possible hadronic final states : E′dσe−Nd3k′=∑states XE′dσe−N→e−Xd3k′. (4) The partial cross-section associated to a given final state can be written as E′dσe−N→e−Xd3k′=∫[dΦX]32π3(s−m2N)(2π)4δ(P+k−k′−PX)⟨∣∣MX∣∣2⟩spin, (5) where denotes the invariant phase-space element for the final state and is the corresponding transition amplitude. The “spin” symbol denotes an average over all spin polarizations of the initial state and a sum over those in the final state. The transition amplitude is decomposed into an electromagnetic part and a hadronic matrix element as MX=ieq2[¯¯¯u(k′)γμu(k)]⟨X∣∣Jμ(0)∣∣N(P)⟩. (6) In this equation is the hadron electromagnetic current that couples to the photon, and denotes a state containing a nucleon of momentum . Squaring this amplitude and collecting all the factors, the inclusive DIS cross-section can be expressed as E′dσe−Nd3k′=132π3(s−m2N)e2q44πLμνWμν, (7) where the leptonic tensor (neglecting the electron mass) is Lμν ≡ ⟨¯¯¯u(k′)γμu(k)¯¯¯u(k)γνu(k′)⟩spin (8) = 2(kμk′ν+kνk′μ−gμνk⋅k′). 
and – the hadronic tensor – is defined as 4πWμν ≡ ∑states X∫[dΦX](2π)4δ(P+q−PX) (9) ×⟨⟨N(P)∣∣J†ν(0)∣∣X⟩⟨X∣∣Jμ(0)∣∣N(P)⟩⟩spin = ∫d4yeiq⋅y⟨⟨N(P)∣∣J†ν(y)Jμ(0)∣∣N(P)⟩⟩spin. The second equality is obtained using the complete basis of hadronic states . Thus, the hadronic tensor is the Fourier transform of the expectation value of the product of two currents in the nucleon state. An important point is that this object cannot be calculated by perturbative methods. This rank-2 tensor can be expressed simply in terms of two independent structure functions as a consequence of • Conservation of the electromagnetic current : • Parity and time-reversal symmetry : • Electromagnetic currents conserve parity : the Levi-Civita tensor cannot appear777This property is not true in DIS reactions involving the exchange of a weak current; an additional structure function is needed in this case. in the tensorial decomposition of When one works out these constraints, the most general tensor one can construct from and reads : Wμν=−F1(gμν−qμqνq2)+F2P⋅q(Pμ−qμP⋅qq2)(Pν−qνP⋅qq2), (10) where are the two structure functions888The structure function differs slightly from the defined in [6] : .. As scalars, they only depend on Lorentz invariants, namely, the variables and . The inclusive DIS cross-section in the rest frame of the proton can be expressed in terms of as dσe−NdE′dΩ=α2em4mNE2sin4(θ/2)[2F1sin2θ2+m2NνF2cos2θ2], (11) where represents the solid angle of the scattered electron and its energy. ### 2.2 Experimental facts Two major experimental results from SLAC [7] in the late 1960’s played a crucial role in the development of the parton model. The left plot of figure 3 shows the measured values of as a function of . Even though the data covers a significant range in , all the data points seem to line up on a single curve, indicating that depends very little on in this regime. This property is now known as Bjorken scaling [8]. In the right plot of figure 3, one sees a comparison of with the combination999, the longitudinal structure function, describes the inclusive cross-section between the proton and a longitudinally polarized proton. . Although there are few data points for , one can see that it is significantly lower than and close to zero 101010From current algebra, it was predicted that ; this relation is known as the Callan-Gross relation [9].. As we shall see shortly, these two experimental facts already tell us a lot about the internal structure of the proton. ### 2.3 Naive parton model In order to get a first insight into the inner structure of the proton, it is interesting to compare the DIS cross-section in eq. (11) and the cross-section (also expressed in the rest frame of the muon), dσe−μ−dE′dΩ=α2emδ(1−x)4mμE2sin4θ2[sin2θ2+m2μνcos2θ2]. (12) Note that, since this reaction is elastic, the corresponding variable is equal to , hence the delta function in the prefactor. The comparison of this formula with eq. (11), and in particular its angular dependence, is suggestive of the proton being composed of point like fermions – named partons by Feynman – off which the virtual photon scatters. If the constituent struck by the photon carries the momentum , this comparison suggests that 2F1∼F2∼δ(1−xc)withxc≡Q22q⋅pc. (13) Assuming that this parton carries the fraction of the momentum of the proton, i.e. , the relation between the variables and is . Therefore, we get : 2F1∼F2∼xFδ(x−xF). 
(14) In other words, the kinematical variable measured from the scattering angle of the electron would be equal to the fraction of momentum carried by the struck constituent. Note that Bjorken scaling appears quite naturally in this picture. Having gained intuition into what may constitute a proton, we shall now compute the hadronic tensor for the DIS reaction on a free fermion carrying the fraction of the proton momentum. Because we ignore interactions for the time being, this calculation (in contrast to that for a proton target) can be done in closed form. We obtain, 4πWμνi ≡ ∫d4p′(2π)42πδ(p′2)(2π)4δ(xFP+q−p′) ×⟨⟨xFP∣∣Jμ†(0)∣∣p′⟩⟨p′∣∣Jν(0)∣∣xFP⟩⟩spin = 2πxFδ(x−xF)e2i[−(gμν−qμqνq2)+2xFP⋅q(Pμ−qμP⋅qq2)(Pν−qνP⋅qq2)], where is the electric charge of the parton under consideration. Let us now assume that in a proton there are partons of type with a momentum fraction between and , and that the photon scatters incoherently off each of them. We would thus have Wμν=∑i∫10dxFxFfi(xF)Wμνi. (15) (The factor in the denominator is a “flux factor”.) At this point, we can simply read the values of , F1=12∑ie2ifi(x),F2=2xF1. (16) We thus see that the two experimental observations of i) Bjorken scaling and ii) the Callan-Gross relation are automatically realized in this naive picture of the proton111111In particular, in this model is intimately related to the spin structure of the scattered partons. Scalar partons, for instance, would give , at variance with experimental results.. Despite its success, this model is quite puzzling, because it assumes that partons are free inside the proton – while the rather large mass of the proton suggests a strong binding of these constituents inside the proton. Our task for the rest of this lecture is to study DIS in a quantum field theory of strong interactions, thereby turning the naive parton model into a systematic description of hadronic reactions. Before we proceed further, let us describe in qualitative terms (see [10] for instance) what a proton constituted of fermionic constituents bound by interactions involving the exchange of gauge bosons may look like. In the left panel of figure 4 are represented the three valence partons (quarks) of the proton. These quarks interact by gluon exchanges, and can also fluctuate into states that contain additional gluons (and also quark-antiquark pairs). These fluctuations can exist at any space-time scale smaller than the proton size ( 1 fermi). (In this picture, one should think of the horizontal axis as the time axis.) When one probes the proton in a scattering experiment, the probe (e.g. the virtual photon in DIS) is characterized by certain resolutions in time and in transverse coordinate. The shaded area in the picture is meant to represent the time resolution of the probe : any fluctuation which is shorter lived than this resolution cannot be seen by the probe, because it appears and dies out too quickly. In the right panel of figure 4, the same proton is represented after a boost, while the probe has not changed. The main difference is that all the internal time scales are Lorentz dilated. As a consequence, the interactions among the quarks now take place over times much larger than the resolution of the probe. The probe therefore sees only free constituents. 
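Before moving on, eqs. (15)-(16) are easy to play with numerically. The sketch below evaluates $F_1$ and $F_2$ from two invented valence-like quark distributions; the functional forms are ours and carry no phenomenological weight, they only display the bookkeeping.

```python
# Toy illustration of the naive parton model relations, eqs. (15)-(16):
#   F1(x) = (1/2) * sum_i e_i^2 f_i(x),    F2(x) = 2 x F1(x).
# The quark distributions below are invented shapes, not fits to data.

E2 = {"u": (2.0 / 3.0) ** 2, "d": (1.0 / 3.0) ** 2}   # squared electric charges

def toy_pdf(x, flavor):
    """Hypothetical valence-like densities f_i(x); the 35/32 factor makes each
    x**-0.5 * (1-x)**3 shape integrate to one, so 'u' counts ~2 quarks and 'd' ~1."""
    n_valence = {"u": 2.0, "d": 1.0}[flavor]
    return n_valence * (35.0 / 32.0) * x ** -0.5 * (1.0 - x) ** 3

def F1(x):
    return 0.5 * sum(E2[q] * toy_pdf(x, q) for q in E2)

def F2(x):
    return 2.0 * x * F1(x)    # the Callan-Gross relation falls out automatically

for x in (0.1, 0.3, 0.5):
    print(f"x = {x}:  F1 = {F1(x):.3f}   F2 = {F2(x):.3f}")
```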
Moreover, this time dilation allows more fluctuations to be resolved by the probe; thus, a high energy proton appears to contain more gluons than a proton at low energy121212Equivalently, if the energy of the proton is fixed, there are more gluons at lower values of the momentum fraction .. ### 2.4 Bjorken scaling from free field theory We will now derive Bjorken scaling and the Callan-Gross relation from quantum field theory. We will consider a theory involving fermions (quarks) and bosons (gluons), but shall at first consider the free field theory limit by neglecting all their interactions. We will consider a kinematical regime in DIS that involves a large value of the momentum transfer and of the center of mass energy of the collision, while the value of is kept constant. This limit is known as the Bjorken limit. To appreciate strong interaction physics in the Bjorken limit, consider a frame in which the 4-momentum of the photon can be written as qμ=1mN(ν,0,0,√ν2+m2NQ2). (17) From the combinations of the components of q+≡q0+q3√2∼νmN→+∞ q−≡q0−q3√2∼mNx→constant, (18) and because , the integration over in is dominated by y−∼mNν→0,y+∼(mNx)−1. (19) Therefore, the invariant separation between the points at which the two currents are evaluated is . Noting that in eq. (9) the product of the two currents can be replaced by their commutator, and recalling that expectation values of commutators vanish for space-like separations, we also see that . Thus, the Bjorken limit corresponds to a time-like separation between the two currents, with the invariant separation going to zero, as illustrated in figure 5. It is important to note that in this limit, although the invariant goes to zero, the components of do not necessarily become small. This will have important ramifications when we apply the Operator Product Expansion to . For our forthcoming discussion, consider the forward Compton amplitude 4πTμν≡i∫d4yeiq⋅y⟨⟨N(P)∣∣T(J†μ(y)Jν(0))∣∣N(P)⟩⟩spin. (20) It differs from by the fact that the two currents are time-ordered, and as illustrated in figure 6, one can recover from its imaginary part, Wμν=2ImTμν. (21) At fixed , is analytic in the variable , except for two cuts on the real axis that start at . The cut at positive corresponds to the threshold above which the DIS reaction becomes possible, and the cut at negative can be inferred from the fact that is unchanged under the exchange . It is also possible to decompose the tensor in terms of two structure functions  : Tμν=−T1(gμν−qμqνq2)+T2P⋅q(Pμ−qμP⋅qq2)(Pν−qνP⋅qq2), (22) and the DIS structure functions can be expressed in terms of the discontinuity of across the cuts. We now remind the reader of some basic results about the Operator Product Expansion (OPE) [11, 12]. Consider a correlator , where and are two local operators (possibly composite) and the ’s are unspecified field operators. In the limit , this object is usually singular, because products of operators evaluated at the same point are ill-defined. The OPE states that the nature of these singularities is a property of the operators and , and is not influenced by the nature and localization of the ’s. This singular behavior can be expressed as A(0)B(y)=yμ→0∑iCi(y)Oi(0), (23) where the are numbers (known as the Wilson coefficients) that contain the singular dependence and the are local operators that have the same quantum numbers as the product . This expansion – known as the OPE – can then be used to obtain the limit of any correlator containing the product . 
If ,and are the respective mass dimensions of the operators and , a simple dimensional argument tells us that Ci(y)∼yμ→0|y|d(Oi)−d(A)−d(B)(up\ to\ % logarithms). (24) (Here .) From this relation, we see that the operators having the lowest dimension lead to the most singular behavior in the limit . Thus, only a small number of operators are relevant in the analysis of this limit and one can ignore the higher dimensional operators. Things are however a bit more complicated in the case of DIS, because only the invariant goes to zero, while the components do not go to zero. The local operators that may appear in the OPE of can be classified according to the representation of the Lorentz group to which they belong. Let us denote them , where is the “spin” of the operator (the number of Lorentz indices it carries), and the index labels the various operators having the same Lorentz structure. The OPE can be written as : ∑s,iCs,iμ1⋯μs(y)Oμ1⋯μss,i(0). (25) Because they depend only on the 4-vector , the Wilson coefficients must be of the form131313There could also be terms where one or more pairs are replaced by , but such terms are less singulars in the Bjorken limit. Cs,iμ1⋯μs(y)≡yμ1⋯yμsCs,i(y2), (26) where depends only on the invariant . Similarly, the expectation value of the operators in the proton state can only depend on the proton momentum , and the leading part in the Bjorken limit is141414Here also, there could be terms where a pair is replaced by , but they too lead to subleading contributions in the Bjorken limit. ⟨⟨N(P)∣∣Oμ1⋯μss,i(0)∣∣N(P)⟩⟩spin=Pμ1⋯Pμs⟨Os,i⟩, (27) where the are some non-perturbative matrix elements. Let us now denote by the mass dimension of the operator . Then, the dimension of is , which means that it scales like Cs,i(y2)∼y2→0(y2)(ds,i−s−6)/2. (28) Because the individual components of do not go to zero, it is this scaling alone that determines the behavior of the hadronic tensor in the Bjorken limit. Contrary to the standard OPE, the scaling depends on the difference between the dimension of the operator and its spin, called its twist , rather than its dimension alone. The Bjorken limit of DIS is dominated by the operators that have the lowest possible twist. As we shall see, there is an infinity of these lowest twist operators, because the dimension can be compensated by the spin of the operator. If we go back to the structure functions , we can write Tr(x,Q2)=∑sxar−s∑i⟨Os,i⟩Dr;s,i(Q2)(r=1,2), (29) where and . The difference by one power of (at fixed ) between and comes from their respective definitions from that differ by one power of the proton momentum . Eq. (29) gives the structure functions as a series of terms, each of which has factorized and dependences. (The functions () are related to the Fourier transform of , and thus can only depend on the invariant ). Moreover, for dimensional reasons, the functions must scale like . Therefore, it follows that Bjorken scaling arises from twist 2 operators. It is important to keep in mind that in eq. (29), the functions are in principle calculable in perturbation theory and do not depend on the nature of the target, while the ’s are non perturbative matrix elements that depend on the target. Thus, the OPE approach in our present implementation cannot provide quantitative results beyond simple scaling laws. It is easy to check that is even in while is odd; this means that only even values of the spin can appear in the sum in eq. (29). 
We shall now rewrite this equation in a more compact form to see what it tells us about the structure functions . Writing Tr=∑even str(s,Q2)xar−s=∑even str(s,Q2)(2Q2)s−arνs−ar, (30) we get (for even) tr(s,Q2)=12πi(Q22)s−ar∫Cdνννar−sTr(ν,Q2), (31) where is a small circle around the origin in the complex plane (see figure 7). This contour can then be deformed and wrapped around the cuts along the real axis, as illustrated in the figure 7. Because the structure function is the discontinuity of across the cut, we can write tr(s,Q2)=2π∫10dxxxs−arFr(x,Q2). (32) Therefore, we see that the OPE gives the -moments of the DIS structure functions. In order to go further and calculate the perturbative Wilson coefficients , we must now identify the twist 2 operators that may contribute to DIS. In a theory of fermions and gauge bosons, we can construct two kinds of twist 2 operators : Oμ1⋯μss,f≡¯¯¯¯ψfγ{μ1∂μ2⋯∂μs}ψf Oμ1⋯μss,g≡Fα{μ1∂μ2⋯∂μs−1Fμs}α, (33) where the brackets denote a symmetrization of the indices and a subtraction of the traced terms on those indices. To compute the Wilson coefficients, the simplest method is to exploit the fact that they are independent of the target. Therefore, we can take as the “target” an elementary object, like a quark or a gluon, for which everything can be computed in closed form (including the ). Consider first a quark state as the target, of a given flavor and spin . At lowest order, one has ⟨f,σ∣∣Oμ1⋯μss,f′∣∣f,σ⟩=δff′¯¯¯uσ(P)γ{μ1uσ(P)Pμ2⋯Pμs} ⟨f,σ∣∣Oμ1⋯μss,g∣∣f,σ⟩=0. (34) Averaging over the spin, and comparing with , we get ⟨Os,f′⟩f=δff′,⟨Os,g⟩f=0. (35) On the other hand, we have already calculated directly the hadronic tensor for a single quark. By computing the moments of the corresponding , we get the for even : t1(s,Q2)=1πe2f,t2(s,Q2)=2πe2f. (36) From this, the bare Wilson coefficients for the operators involving quarks are D1;s,f(Q2)=1πe2f,D2;s,f(Q2)=2πe2f. (37) By repeating the same steps with a vector boson state, those involving only gluons are D1;s,g(Q2)=D2;s,g(Q2)=0, (38) if the vector bosons are assumed to be electrically neutral. Going back to a nucleon target, we cannot compute the . However, we can hide momentarily our ignorance by defining functions and (respectively the quark and antiquark distributions) such that151515DIS with exchange of a photon cannot disentangle the quarks from the antiquarks. In order to do that, one could scatter a neutrino off the target, so that the interaction proceeds via a weak charged current. ∫10dxxxs[ff(x)+f¯f(x)]=⟨Os,f⟩. (39) (The sum is known as the singlet quark distribution of flavor .) Thus, the OPE formulas for and on a nucleon in terms of these quark distributions are (40) We see that these formulas have the required properties: (i) Bjorken scaling and (ii) the Callan-Gross relation. Despite the fact that the OPE in a free theory of quarks and gluons leads to a result which is embarrassingly similar to the much simpler calculation we performed in the naive parton model, this exercise has taught us several important things : • We can derive an operator definition of the parton distributions (albeit it is not calculable perturbatively) • Bjorken scaling can be derived from first principles in a field theory of free quarks and gluons. This was a puzzle pre-QCD because clearly these partons are constituents of a strongly bound state. • The puzzle could be resolved if the field theory of strong interactions became a free theory in the limit , a property known as asymptotic freedom. 
As shown by Gross, Politzer and Wilczek in 1973, non-Abelian gauge theories with a reasonable number of fermionic fields (e.g. QCD with 6 flavors of quarks) are asymptotically free[1] and were therefore a natural candidate for being the right theory of the strong interactions. ### 2.5 Scaling violations Although it was interesting to see that a free quantum field theory reproduces the Bjorken scaling, this fact alone does not tell much about the detailed nature of the strong interactions at the level of quarks and gluons. Much more interesting are the violations of this scaling that arise from these interactions and it is the detailed comparison of these to experiments that played a crucial role in establishing QCD as the theory of the strong interactions. The effect of interactions can be evaluated perturbatively in the framework of the OPE, thanks to renormalization group equations. In the previous discussion, we implicitly assumed that there is no scale dependence in the moments of the quark distribution functions. But this is not entirely true; when interactions are taken into account, they depend on a renormalization scale . The parton distributions become scale dependent as well. However, since are observable quantities that can be extracted from a cross-section, they cannot depend on any renormalization scale. Thus, there must also be a dependence in the Wilson coefficients, that exactly compensates the dependence originating from the . By dimensional analysis, the Wilson coefficients have an overall power of set by their dimension (see the discussion following eq. (29)), multiplied by a dimensionless function that can only depend on the ratio . By comparing the Callan-Symanzik equations[12] for with those for the expectation values , the renormalization group equation[12] obeyed by the Wilson coefficients is 161616We have used the fact that the electromagnetic currents are conserved and therefore have a vanishing anomalous dimension. Note also that we have exploited the fact that for twist 2 operators depends only on , so that we can replace by . [(−Q∂Q+β(g)∂g)δij−γs,ji(g)]Dr;s,j(Q/μ,g)=0, (41) where is the beta function, and is the matrix of anomalous dimensions for the operators of spin (it is not diagonal because operators with identical quantum numbers can mix through renormalization). In order to solve these equations, let us first introduce the running coupling such that ln(Q/Q0)=∫¯g(Q,g)gdg′β(g′). (42) Note that this is equivalent to and ; in other words, is the value at the scale of the coupling whose value at the scale is . The usefulness of the running coupling stems from the fact that any function that depends on and only through the combination obeys the equation [−Q∂Q+β(g)∂g]F(¯¯¯g(Q,g))=0. (43) It is convenient to express the Wilson coefficients at the scale from those at the scale as (44) In QCD, which is asymptotically free, we can approximate the anomalous dimensions and running coupling at one loop by γs,ij(¯¯¯g)=¯¯¯g2Aij(s),¯¯¯g2(Q,g)=8π2β0ln(Q/ΛQCD). (45) (The are obtained from a 1-loop perturbative calculation.) In this case, the scale dependence of the Wilson coefficients can be expressed in closed form as (46) From this formula, we can write the moments of the structure functions, ∫10dxxxsF1(x,Q2)=∑i,fe2f2⎡⎢ ⎢⎣(ln(Q/ΛQCD)ln(Q0/ΛQCD))−8π2β0A(s)⎤⎥ ⎥⎦fi⟨Os,i⟩Q0, (47) (and a similar formula for ). We see that we can preserve the relationship between and the quark distributions, eq. 
(40), provided that we let the quark distributions become scale dependent in such a way that their moments read ∫10dxxxs[ff(x,Q2)+f¯f(x,Q2)]≡∑i⎡⎢ ⎢⎣(ln(Q/ΛQCD)ln(Q0/ΛQCD))−8π2β0A(s)⎤⎥ ⎥⎦fi⟨Os,i⟩Q0. (48) By also calculating the scale dependence of , one could verify that the Callan-Gross relation is preserved at the 1-loop order. It is crucial to note that, although we do not know how to compute the expectation values at the starting scale , QCD predicts how the quark distribution varies when one changes the scale . We also see that, in addition to a dependence on , the singlet quark distribution now depends on the expectation value of operators that involve only gluons (when the index in the previous formula). The scale dependence of the parton distributions can also be reformulated in the more familiar form of the DGLAP equations. In order to do this, one should also introduce a gluon distribution , also defined by its moments, ∫10dxxxsfg(x,Q2)≡∑i⎡⎢ ⎢⎣(ln(Q/ΛQCD)ln(Q0/ΛQCD))−8π2β0A(s)⎤⎥ ⎥⎦gi⟨Os,i⟩Q0. (49) Then one can check that the derivatives of the moments of the parton distributions with respect to the scale are given by Q2∂fi(s,Q2)∂Q2=−¯¯¯g2(Q,g)2Aji(s)fj(s,Q2), (50) where we have used the shorthands . In order to turn this equation into an equation for the parton distributions themselves, one can use A(s)f(s)=∫10dxxxs∫1xdyyA(x/y)f(y), (51) that relates the product of the moments of two functions to the moment of a particular convolution of these functions. Using this result, and defining splitting function from their moments, ∫10dxxxsPij(x)≡−4π2Aij(s), (52) it is easy to derive the DGLAP equation[5], Q2∂fi(x,Q2)∂Q2=¯¯¯g2(Q,g)8π2∫1xdyyPji(x/y)fj(y,Q2), (53) that resums powers of . This equation for the parton distributions has a probabilistic interpretation : the splitting function can be seen as the probability that a parton splits into two partons separated by at least (so that a process with a transverse scale will see two partons), one of them being a parton that carries the fraction of the momentum of the original parton. At 1-loop, the coefficients in the anomalous dimensions are Agg(s)=12π2{3[112−1s(s−1)−1(s+1)(s+2)+s∑j=21j]+Nf6} Agf(s)=−14π2{1s+2+2s(s+1)(s+2)} Afg(s)=−13π2{1s+1+2s(s−1)} Aff′(s)=16π2{1−2s(s+1)+4s∑j=21j}δff′, (54) where is the number of flavors of quarks. On can note that, since is flavor independent, the non-singlet171717Here, the word “singlet” refers to the flavor of the quarks. linear combinations ( with ) are eigenvectors of the matrix of anomalous dimensions, with an eigenvalue . These linear combinations do not mix with the remaining two operators,     and , through renormalization. By examining these anomalous dimensions for , we can see that the eigenvalue for the non-singlet quark operators is vanishing : . Going back to the eq. (50), this implies that ∂∂Q2⎧⎨⎩∫10dx∑faf[ff(x,Q2)+f¯f(x,Q2)]⎫⎬⎭=0 (55) for any linear combination such that . This relation implies for instance that the number of
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.968519926071167, "perplexity": 570.1425504116979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738944.95/warc/CC-MAIN-20200812200445-20200812230445-00004.warc.gz"}
https://www.physicsforums.com/threads/radical-simplification.222671/
1. Mar 17, 2008

### bacon

From the book..." $$\sqrt[4]{(-4)^2}$$=$$\sqrt[4]{16}$$=2. It is incorrect to write $$\sqrt[4]{(-4)^2}$$=$$(-4)^\frac{2}{4}$$=$$(-4)^\frac{1}{2}$$=$$\sqrt{-4}$$ ..." I understand the math involved but want to be sure of the exact reason why the first part is correct and the second is not. Is it because of the inner to outer priority of operations when one operation is nested inside another?

2. Mar 17, 2008

### rocomath

Work inner ... outer.

3. Mar 18, 2008

### CompuChip

Clearly, the first method is correct (it actually says $((-4)^2)^{1/4}$), so what it does is work out the brackets in the correct order. Now if the second method were correct, you would get contradictory results. For example, consider this "proof": $$1 = \sqrt{1} = \sqrt{(-1)^2} = ((-1)^2)^{1/2} \stackrel{?!}{=} (-1)^{2/2} = (-1)^1 = -1,$$ so 1 = -1, and anything you might want to prove (whether true or false) follows.

4. Mar 18, 2008

### Feldoh

My life has been a lie :(

5. Mar 18, 2008

### bacon

Actually, it is not. I could show you a proof of this, but I need to change the oil in the car. Sorry.

6. Mar 19, 2008

### CompuChip

It's not that bad... have some cake.
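A quick numerical check of the thread's point, using Python's real and complex power operations (principal branch); this is only an illustration of the argument above, not something from the book being quoted.

```python
import cmath

a = -4

# "Inner first": ((-4)^2)^(1/4) squares first, then takes the real fourth root.
print((a ** 2) ** 0.25)                 # 2.0

# Naively collapsing the exponents to 2/4 = 1/2 would instead ask for sqrt(-4),
# which is not a real number at all:
print(cmath.sqrt(a))                    # 2j  (principal complex square root)

# The same trap in CompuChip's 1 = -1 "proof":
print(((-1) ** 2) ** 0.5)               # 1.0
print(cmath.exp(0.5 * cmath.log(-1)))   # approximately 1j, the principal value of (-1)^(1/2)
```

The rule $(a^m)^{1/n}=a^{m/n}$ is only safe for $a>0$; for a negative base the two sides land on different branches of the power function, which is exactly what the fake proof exploits.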
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8109909892082214, "perplexity": 1137.0428015200962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00047-ip-10-171-10-70.ec2.internal.warc.gz"}
http://eprint.iacr.org/2004/377
## Cryptology ePrint Archive: Report 2004/377

New Distributed Ring Signatures for General Families of Signing Subsets

Javier Herranz and Germán Sáez

Abstract: In a distributed ring signature scheme, a subset of users cooperate to compute a distributed anonymous signature on a message, on behalf of a family of possible signing subsets. The receiver can verify that the signature comes from a subset of the ring, but he cannot know which subset has actually signed. In this work we use the concept of dual access structures to construct a distributed ring signature scheme which works with general families of possible signing subsets. The length of each signature is linear in the number of involved users, which is desirable for some families with many possible signing subsets. The scheme achieves the desired properties of correctness, anonymity and unforgeability. The reduction in the proof of unforgeability is tighter than the reduction in the previous proposals which work with general families. We analyze the case in which our scheme runs in an identity-based scenario, where public keys of the users can be derived from their identities. This fact avoids the need for digital certificates, and therefore allows more efficient implementations of such systems. But our scheme can be extended to work in more general scenarios, where users can have different types of keys.

Category / Keywords: cryptographic protocols / distributed ring signatures, ID-based cryptography, dual access structures
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826224148273468, "perplexity": 1141.5236138564321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157012.30/warc/CC-MAIN-20160205193917-00346-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/the-speed-of-force.11252/
# The Speed of Force

1. Dec 18, 2003

### The Divine Zephyr

Hey, I'm new here and I would like to propose a question about the speed of kinetic energy. If, let's say, we had a Newton's cradle one light-year long, will the last ball fly up as soon as the first ball hits the second? Assume all the balls are in perfect contact with each other. Does this happen instantly or at light or sub-light speed? I do not think the speed is limited, but what do you people think? -tdz

2. Dec 18, 2003

### chroot

Staff Emeritus

Kinetic energy does have a "speed." Furthermore, the balls communicate force via pressure. Pressure variations propagate at the speed of sound, which is different for different media, and is always less than the speed of light. So no, the ball at the far end won't move until the pressure wave reaches it, which will take some time.

- Warren

3. Dec 19, 2003

### suyver

I think in the Newton's cradle example, the shockwave will travel through the material at the speed of sound (about 5-10 km/s).

Don't you mean 'does not'?
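A back-of-the-envelope version of the answers above, assuming a sound speed of roughly 5 km/s in the steel balls (the lower end of the figure quoted in the thread):

```python
# How long would the "click" take to cross a one-light-year Newton's cradle?
LIGHT_YEAR_M     = 9.461e15    # metres in one light year
C                = 2.998e8     # speed of light, m/s
V_SOUND_STEEL    = 5.0e3       # ~5 km/s, as suggested in the thread
SECONDS_PER_YEAR = 3.156e7

t_pressure_wave = LIGHT_YEAR_M / V_SOUND_STEEL
t_light         = LIGHT_YEAR_M / C

print(f"pressure wave: {t_pressure_wave / SECONDS_PER_YEAR:,.0f} years")  # ~60,000 years
print(f"light:         {t_light / SECONDS_PER_YEAR:.2f} years")           # 1 year, by definition
```

So the far ball would not move for tens of thousands of years, and even a signal travelling at the speed of light would take a full year to arrive.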
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8457781076431274, "perplexity": 697.6193209237019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864482.90/warc/CC-MAIN-20180622123642-20180622143642-00199.warc.gz"}
http://math.stackexchange.com/questions/34417/prove-that-operatornamegal-mathbbq-sqrt82-i-mathbbq-sqrt-2/34425
# Prove that $\operatorname{Gal}(\mathbb{Q}(\sqrt[8]{2}, i)/\mathbb{Q}(\sqrt{-2})) \cong Q_8$

I seem to have reached a contradiction. I am trying to prove that $\operatorname{Gal}(\mathbb{Q}(\sqrt[8]{2}, i)/\mathbb{Q}(\sqrt{-2})) \cong Q_8$. I could not think of a clever way to do this, so I decided to just list all the automorphisms of $\mathbb{Q}(\sqrt[8]{2}, i)$ that fix $\mathbb{Q}$ and hand-pick the ones that fix $i\sqrt{2}$. By the Fundamental Theorem of Galois Theory, those automorphisms should be a subgroup of the ones that fix $\mathbb{Q}$. I proved earlier that those automorphisms are given by $\sigma: \sqrt[8]{2} \mapsto \zeta^n\sqrt[8]{2}, i \mapsto \pm i$, where $n \in [0, 7]$ and $\zeta = e^\frac{2\pi i}{8}$. However, I am getting too many automorphisms. One automorphism that fixes $i\sqrt{2}$ is $\sigma: \sqrt[8]{2} \mapsto \zeta\sqrt[8]{2}, i \mapsto -i$. However, this means all powers of $\sigma$ fix $i\sqrt{2}$, and I know $Q_8$ does not contain a cyclic subgroup of order $8$. What am I doing wrong? (Please do not give me the answer. I have classmates for that.) -

$i \rightarrow i^2=-1$ cannot be an automorphism. Also, what is $\zeta^2$? –  N. S. Apr 21 '11 at 23:58

My bad. I meant $i \mapsto \pm i$. I'll change that in the problem. (And $\zeta^2 = i$, but that still makes it an automorphism that fixes $i\sqrt{2}$!) –  badatmath Apr 22 '11 at 0:05

Check Zev's answer, I was gonna ask you to calculate $\sigma(\zeta \sqrt[8]{2})$. –  N. S. Apr 22 '11 at 0:13

Hint: The order of $\sigma$ is not 8. Note that $\sigma(\sqrt{2})=\sigma((\sqrt[8]{2})^4)=(\sigma(\sqrt[8]{2}))^4=\zeta_8^4(\sqrt[8]{2})^4=-\sqrt{2}$. Note that $\zeta_8=\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}}$. Now compute $\sigma(\zeta_8)$. Now compute $\sigma^{2}(\sqrt[8]{2})=\sigma(\zeta_8\sqrt[8]{2})=\sigma(\zeta_8)\sigma(\sqrt[8]{2})$, and then $\sigma^4(\sqrt[8]{2})$. -

So I got $\sigma^2(\sqrt[8]{2}) = i\sqrt[8]{2}$, and $\sigma^4(\sqrt[8]{2}) = -\sqrt[8]{2}$, and $\sigma^8(\sqrt[8]{2}) = \sqrt[8]{2}$. Why isn't the order $8$? –  badatmath Apr 22 '11 at 1:20

@badatmath - no, $$\sigma(\zeta_8)=\frac{1}{(-\sqrt{2})}+\frac{(-i)}{(-\sqrt{2})}=-\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}}=\zeta_8^3,$$ so that $\sigma^2(\sqrt[8]{2})=\zeta_8^3\zeta_8\sqrt[8]{2}=-\sqrt[8]{2}$, and thus $\sigma^4(\sqrt[8]{2})=\sqrt[8]{2}$. The order of $\sigma$ is 4. –  Zev Chonoles Apr 22 '11 at 1:24

Huh. It seems I get a different answer every time I calculate this. I must be bad at computation today. Thank you :) –  badatmath Apr 22 '11 at 1:34

No problem, glad to help :) –  Zev Chonoles Apr 22 '11 at 1:37

Would it be easier to notice that the extension $\mathbb{Q}(\sqrt[8]{2},i)$ is equal to $\mathbb{Q}(\sqrt[8]{2},\zeta)$, which is a cyclotomic extension followed by a Kummer extension? You can then work out which elements of its Galois group fix $\sqrt{-2}$. -
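A numerical sanity check of the accepted hint, done with plain complex arithmetic. The automorphism is pinned down by $\sigma(\sqrt[8]{2})=\zeta_8\sqrt[8]{2}$ and $\sigma(i)=-i$; since $\zeta_8=(1+i)/(\sqrt[8]{2})^4$, the image of $\zeta_8$ is forced, and we can follow powers of $\sigma$ on the generator.

```python
import cmath

a    = 2 ** (1 / 8)                      # the real 8th root of 2
zeta = cmath.exp(2j * cmath.pi / 8)      # primitive 8th root of unity

# sigma is determined by its action on the generators:
sigma_a = zeta * a                       # sigma(2^{1/8}) = zeta * 2^{1/8}
sigma_i = -1j                            # sigma(i) = -i

# zeta = (1 + i) / a^4, so the image of zeta is forced:
sigma_zeta = (1 + sigma_i) / sigma_a ** 4
print(abs(sigma_zeta - zeta ** 3))       # ~0  =>  sigma(zeta) = zeta^3

# Iterate: sigma^2(a) = sigma(zeta * a) = sigma(zeta) * sigma(a) = zeta^4 * a = -a
sigma2_a = sigma_zeta * sigma_a
print(abs(sigma2_a + a))                 # ~0  =>  sigma^2(a) = -a, so sigma^2 is not the identity

# Since sigma^4(a) = sigma^2(-a) = a and sigma^2(i) = i, sigma has order 4, not 8.
```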
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9868820905685425, "perplexity": 256.54287480069723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065330.34/warc/CC-MAIN-20150827025425-00347-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.deepdyve.com/lp/oxford-university-press/the-myth-of-the-credit-spread-puzzle-VzhR80QkNQ
The Myth of the Credit Spread Puzzle

Abstract: Are standard structural models able to explain credit spreads on corporate bonds? In contrast to much of the literature, we find that the Black-Cox model matches the level of investment-grade spreads well. Model spreads for speculative-grade debt are too low, and we find that bond illiquidity contributes to this underpricing. Our analysis makes use of a new approach for calibrating the model to historical default rates that leads to more precise estimates of investment-grade default probabilities.

Received October 25, 2016; editorial decision January 12, 2018 by Editor Andrew Karolyi. Authors have furnished an Internet Appendix, which is available on the Oxford University Press Web site next to the link to the final published paper online.

The structural approach to credit risk, pioneered by Merton (1974) and others, represents the leading theoretical framework for studying corporate default risk and pricing corporate debt. While the models are intuitive and simple, many studies find that, once calibrated to match historical default and recovery rates and the equity premium, they fail to explain the level of actual investment-grade credit spreads, a result referred to as the “credit spread puzzle.” Papers that find a credit spread puzzle typically use Moody’s historical default rates, measured over a period of around 30 years and starting from 1970, as an estimate of the expected default rate.1

Our starting point is to show that the appearance of a credit spread puzzle strongly depends on the period over which historical default rates are measured. For example, Chen, Collin-Dufresne, and Goldstein (2009) use default rates from 1970 to 2001 and find BBB-AAA model spreads of 57–79 basis points (bps) (depending on maturity), values that are substantially lower than historical spreads of 94–102 bps. If, instead, we use Moody’s default rates for 1920–2001, model spreads are 91–112 bps, a range that is in line with historical spreads.

Using simulations, we demonstrate two key points about historical default rates. The first is that, over sample periods of around 30 years that are typically used in the literature, there is a large sampling error in the observed average rate. For example, if the true 10-year BBB cumulative default probability were 5.09$$\%$$,2 a 95$$\%$$ confidence band for the realized default rate measured over 31 years would be $$[1.15\%, 12.78\%]$$. Intuitively, the large sample error arises because defaults are correlated and 31 years of data only give rise to three nonoverlapping 10-year intervals. As a result of the large sampling error, when historical default rates are used as estimates of ex ante default probabilities, the difference between actual spreads and model spreads needs to be large—much larger, for example, than that found for the BBB-AAA spread mentioned above—to be interpreted as statistically significant evidence against the model.

Second, and equally crucial, distributions of average historical investment-grade default rates are highly positively skewed. Most of the time we see few defaults, but, occasionally, we see many defaults, meaning that there is a high probability of observing a rate that is below the actual mean. Positive skewness is likely to lead to the conclusion that a structural model underpredicts investment-grade spreads even if the model is correct.
The reason for the presence of skewness is that defaults are correlated across firms as a result of the common dependence of individual firm values on systematic (“market”) shocks. To see why correlation leads to skewness, we can think of a large number of firms with a default probability (over some period) of 5$$\%$$ and where their defaults are perfectly correlated. In this case we will observe a zero default rate 95$$\%$$ of the time (and a 100$$\%$$ default rate 5$$\%$$ of the time), and so the realized default rate will underestimate the default probability 95$$\%$$ of the time. If the average default rate is calculated over three independent periods, the realized default rate will still underestimate the default probability $$0.95^3=85.74\%$$ of the time. We propose a new approach to estimate default probabilities. Instead of using the historical default rate at a single maturity and rating as an estimate of the default probability for this same maturity and rating, we use a wide cross-section of default rates at different maturities and ratings. We use the Black and Cox (1976) model and what ties default probabilities for firms with different ratings together in the model is that we assume that they will, nonetheless, have the same default boundary. (The default boundary is the value of the firm, measured as a fraction of the face value of debt, below which the firm defaults.) This is reasonable since, if the firm were to default, there is no obvious reason the default boundary would depend on the rating the firm had held previously. We show in simulations that our approach results in much more precise and less skewed estimates of investment-grade default probabilities. For the estimated 10-year BBB default probability, for example, the standard deviation and skewness using the new approach are only 16$$\%$$ and 4$$\%$$, respectively, of those using the existing approach. The improved precision is partly the result of the fact that we combine information across 20 maturities and 7 ratings and default probability estimates from different rating/maturity pairs are imperfectly correlated. But, to a significant extent, it is the result of combining default information on investment-grade and high-yield defaults. Because defaults occur much more frequently in high-yield debt, these firms provide more information on the location of the default boundary. Since the boundary is common to investment-grade and high-yield debt, when we combine investment-grade and high-yield default data, we “import” the information on the location of the default boundary from high-yield to investment-grade debt. The reduction in skewness is also the result of including default rates that are significantly higher than those for BBB debt. While a low default rate for investment-grade debt produces a positive skew in the distribution of defaults, a default rate of 50$$\%$$ produces a symmetric distribution and, for even higher default rates, the skew is actually negative. We use our estimation approach and the Black-Cox model to investigate spreads over the period 1987–2012. Our data set consists of 256,698 corporate bond yield spreads to the swap rate of noncallable bonds issued by industrial firms and is more extensive than those previously used in the literature. Our implementation of the Black-Cox model is new to the literature in that it allows for cross-sectional and time-series variation in firm leverage and payout rate while matching historical default rates. 
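The skewness mechanism is easy to reproduce in a stripped-down one-factor default model (a Gaussian latent-variable sketch of our own, far cruder than the Black-Cox cohort simulation described in Section 2.1): each firm defaults when a common shock plus an idiosyncratic shock falls below a threshold set to give a 5.09$$\%$$ default probability.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p, rho = 0.0509, 0.20            # assumed true default probability and pairwise correlation
n_firms, n_trials = 445, 20_000

c = norm.ppf(p)                                    # latent-variable default threshold
z = rng.standard_normal(n_trials)                  # one common shock per trial
eps = rng.standard_normal((n_trials, n_firms))     # idiosyncratic shocks
latent = np.sqrt(rho) * z[:, None] + np.sqrt(1.0 - rho) * eps
rates = (latent < c).mean(axis=1)                  # realized default rate, one number per trial

print(rates.mean())        # close to 0.0509: the realized rate is an unbiased estimator
print(np.median(rates))    # noticeably below 0.0509: the distribution is right-skewed
print((rates < p).mean())  # well above one half: most samples understate the true probability
```

Even though the average across trials recovers the true probability, the typical (median) realization sits below it; a single short history of correlated defaults will therefore understate the default probability more often than not.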
Applying our proposed estimation approach, we estimate the default boundary such that average model-implied default probabilities match average historical default rates from 1920 to 2012. In calibrating the default boundary we use a constant Sharpe ratio and match the equity premium, but, once we have implied out the single firm-wide default boundary parameter, we compute firm- and time-specific spreads using standard “risk-neutral” pricing formulae. We first explore the difference between average spreads in the Black-Cox model and actual spreads. The average model spread across all investment-grade bonds with a maturity between 3 and 20 years is 111 bps, whereas the average actual spread is 92 bps. A confidence band for the model spread that takes into account uncertainty in default probabilities is $$[88$$ bps$$;128$$ bps$$]$$; thus there is no statistical difference between actual and model investment spreads. For speculative-grade bonds, the average model spread is 382 bps, whereas the actual spread is 544 bps, and here the difference is statistically highly significant. We also sort bonds by the actual spread and find that actual and model-implied spreads are similar, except for bonds with a spread of more than 1,000 bps. For example, for bonds with an actual spread between 100–150 bps the average actual spread is 136 bps, whereas the average model-implied spread is 121 bps. Importantly, the results are similar if we calibrate the model using default rates from 1970 to 2012 rather than from 1920 to 2012, thus resolving the problem described above that results in the earlier literature depend significantly on the historical period chosen to benchmark the model. To study the time series, we calculate average spreads on a monthly basis and find that for investment-grade bonds there is a high correlation of 93$$\%$$ between average actual spreads and model spreads. Note that the model-implied spreads are “out-of-sample” predictions in the sense that actual spreads are not used in the calibration. Furthermore, for a given firm only changes in leverage and the payout rate—calculated using accounting data and equity values—lead to changes in the firm’s credit spread. For speculative-grade bonds the correlation is only 40$$\%$$, showing that the model has a much harder time matching spreads for low-quality firms. Although average investment-grade spreads are captured well on a monthly basis, the model does less well at the individual bond level. Regressing individual investment-grade spreads on those implied by the Black-Cox model gives an $$R^2$$ of only 44$$\%$$, so at the individual bond level less than half the variation in investment-grade spreads is explained by the model. For speculative-grade spreads the corresponding $$R^2$$ is only 13$$\%$$. We also investigate the potential contribution of bond illiquidity to credit spreads. We use bond age as the liquidity measure and double sort bonds on liquidity and credit quality. For investment-grade bonds we find no relation between bond liquidity and spreads, consistent with the ability of the model to match actual spreads and the finding in Dick-Nielsen, Feldhütter, and Lando (2012) that outside the 2007–2008 financial crisis illiquidity premiums in investment-grade bonds were negligible. For speculative-grade bonds we find a strong relation between bond liquidity and yield spreads, suggesting that bond liquidity may explain much of the underpricing of speculative-grade bonds. 
In this paper we use the Black and Cox (1976) model as a lens through which to study the credit spread puzzle. The results in Huang and Huang (2012) show that many structural models which appear very different in fact generate similar spreads once the models are calibrated to the same default probabilities, recovery rates, and the equity premium. The models tested in Huang and Huang (2012) include features such as stochastic interest rates, endogenous default, stationary leverage ratios, strategic default, time-varying asset risk premiums, and jumps in the firm value process, yet all generate a similar level of credit spread. To the extent that different structural models produce similar investment-grade default probabilities under our estimation approach, our finding that the Black-Cox model matches average investment-grade spreads is likely to hold for a wide range of structural models. An extensive literature tests structural models. Leland (2006), Cremers, Driessen, and Maenhout (2008), Chen, Collin-Dufresne, and Goldstein (2009), Chen (2010), Huang and Huang (2012), Chen, Cui, He, and Milbradt (2017), Bai (2016), Bhamra, Kuehn, and Strebulaev (2010), and Zhang, Zhou, and Zhu (2009) use the historical default rate at a given rating and maturity to estimate the default probability at that maturity and rating. We show that this test is statistically weak. Eom, Helwege, and Huang (2004), Ericsson, Reneby, and Wang (2015), and Bao (2009) allow for heterogeneity in firms and variation in leverage ratios, but do not calibrate to historical default rates. Bhamra, Kuehn, and Strebulaev (2010) observe that default rates are noisy estimators of default probabilities, but do not propose a solution to this problem as we do. 1. A Motivating Example There is a tradition in the credit risk literature of using Moody’s average realized default rate for a given rating and maturity as a proxy for the corresponding ex ante default probability. This section provides an example showing that the apparent existence or nonexistence of a credit spread puzzle depends on the particular period over which the historical default rate is measured. Later in the paper we describe an alternative approach for extracting default probability estimates from historical default rates that not only provides much greater precision but is also less sensitive to the sample period chosen. To understand how Moody’s calculates default frequencies, consider the 10-year BBB cumulative default frequency of 5.09$$\%$$ used in Chen, Collin-Dufresne, and Goldstein (2009).3 This number is published in Moody’s (2002) and is based on default data for the period 1970–2001. For the year 1970, Moody’s identifies a cohort of BBB-rated firms and then records how many of these default over the next 10 years. The 10-year BBB default frequency for 1970 is the number of defaulted firms divided by the number in the 1970 cohort. The average default rate of 5.09$$\%$$ is calculated as the average of the twenty-two 10-year default rates for the cohorts formed at yearly intervals over the period 1970–1991. A large part of the literature has focused on the BBB-AAA spread at 4- and 10-year maturities. In our main empirical analysis (Section 3), we study a much wider range of ratings and maturities but for now, to keep our example simple, we also focus on the BBB-AAA spread. For a given sample period we use the BBB and AAA average default rates for the 4- and 10-year horizons reported by Moody’s. Following the literature (e.g., Chen et al. 
2009; Huang and Huang 2012; and others) we first benchmark a model to match these default rates, one at a time. Using the benchmarked parameters we then compute risk-neutral default probabilities and, from these, credit spreads. Following Eom, Helwege, and Huang (2004), Bao (2009), Huang and Huang (2012), and others, we assume that if default occurs, investors receive (at maturity) a fraction of the originally promised face value, but now with certainty. The credit spread, $$s$$, is then calculated as \begin{eqnarray} \label{eq:BlackCoxcreditspread} s=y-r=-\frac{1}{T}\log [1-(1-R)\pi^Q(T)], \end{eqnarray} (1) where $$R$$ is the recovery rate, $$T$$ is the bond maturity, and $$\pi^Q(T)$$ is the risk-neutral default probability. Throughout our analysis we employ the Black-Cox model (Black and Cox 1976). Appendix A provides the model details. We use our average parameter values for the period 1987–2012 estimated in Section 3 and Chen, Collin-Dufresne, and Goldstein’s (2009) estimates of the Sharpe ratio and recovery rate. We estimate the default boundary by matching an observed default frequency. The default boundary is the value of the firm, measured as a fraction of the face value of debt, below which the firm defaults. Following Chen, Collin-Dufresne, and Goldstein (2009) and others, we carry this out separately for each maturity and rating such that, conditional on the other parameters, the model default probability matches the reported Moody’s default frequency. For each maturity and rating we then use the benchmarked default boundary and calculate the credit spread using Equation (1).

The solid bars in Figure 1 show estimates of the actual BBB-AAA corporate bond credit spread from a number of papers. For both the 4- and 10-year maturities, the estimated BBB-AAA spread is in the range of 98–109 bps with the notable exception of Huang and Huang’s (2012) estimate of the 10-year BBB-AAA of 131 bps. (Huang and Huang use both callable and noncallable bonds in their estimate of the spread and this may explain why it is higher.)

Figure 1. Actual and model-implied BBB-AAA corporate bond yield spreads when using the existing approach in the literature. This figure shows actual and model-implied BBB-AAA spreads based on different estimates of actual and model-implied spreads. The actual BBB-AAA yield spreads are estimates from Duffee (1998) (Duf), Huang and Huang (2012) (HH), Chen, Collin-Dufresne, and Goldstein (2009) (CDG), and Cremers, Driessen, and Maenhout (2008) (CDM). The solid lines show spreads in the Black-Cox model based on Moody’s default rates from the period 1920–2002 and 1970–2001, respectively. The dashed lines show spreads in the Merton model based on Moody’s default rates from the period 1920–2002 and 1970–2001, respectively.
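Equation (1) above is straightforward to implement; the sketch below maps a risk-neutral default probability into a spread, with purely illustrative inputs rather than numbers taken from the paper.

```python
import math

def credit_spread(pi_q, recovery, maturity):
    """Spread over the riskless rate implied by eq. (1):
    s = -(1/T) * log(1 - (1 - R) * pi^Q(T))."""
    return -math.log(1.0 - (1.0 - recovery) * pi_q) / maturity

# Illustrative inputs: a 10-year risk-neutral default probability of 15%
# with 45% recovery of face value gives a spread of roughly 86 bps.
s = credit_spread(pi_q=0.15, recovery=0.45, maturity=10.0)
print(f"{10_000 * s:.0f} bps")
```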
Using Moody’s average default rates from the period 1970–2001, the 4- and 10-year BBB-AAA spreads in the Black-Cox model are 52 and 72 bps, respectively. These model estimates are substantially below actual spreads, a finding that has been coined the “credit spread puzzle.” Figure 1 also shows the model-implied spreads using Moody’s average historical default rates from 1920 to 2002 (default rates from 1920 to 2001 are not available). Using default rates from this longer period, the model-implied spreads are substantially higher: the 4- and 10-year BBB-AAA spreads are 87 and 104 bps, respectively. Thus, when we use default rates from a longer time period the puzzle largely disappears. To emphasize that this conclusion is not specific to the Black-Cox model, Figure 1 also shows the four spreads computed under the Merton model (and using the parameters and method given in Chen et al. 2009). These spreads are very similar to, and just a little higher than, the Black-Cox spreads. What remains unchanged is the finding that the appearance of a credit spread puzzle depends on the sample period. In the example we compare corporate bond yields relative to AAA yields to be consistent with CDG and others. In our later analysis we use bond yields relative to swap rates. The average difference between swap rates and AAA yields is small: over our sample period 1987–2012, the average 5- and 10-year AAA-swap spreads are 4 and 6 bps, respectively. We use swap rates in our later analysis, because the term structure of swap rates is readily available on a daily basis. There are very few AAA-rated bonds in the later part of our sample period, so we would not be able to calculate a AAA yield at different maturities. In summary, realized average default rates vary substantially over time, and, as a result, when these are taken as ex ante default probabilities the historical period over which they are measured has a strong influence on whether or not there will appear to be a credit spread puzzle. In the next section we first explore the statistical uncertainty of historical default rates in more detail and then propose a different approach to estimating default probabilities that exploits the information contained in historical default rates more efficiently than has been the case in the literature so far. 2. Estimating Ex Ante Default Probabilities The existing literature on the credit spread puzzle and, more broadly, the literature on credit risk typically uses the average ex post historical default rate for a single maturity and rating as an estimate of the ex ante default probability for this same maturity and rating.4 We find that the statistical uncertainty associated with these estimates is large and propose a new approach that uses historical default rates for all maturities and ratings simultaneously to extract the ex ante default probability for any given maturity and rating. Simulations show that our approach greatly reduces statistical uncertainty. 2.1 Existing approach: Extracting the ex ante default probability from a single ex post default frequency An ex post realized default frequency may be an unreliable estimate of the ex ante default probability for two significant reasons. The first is that the low level of default frequency, particularly for investment-grade firms, leads to a sample size problem with default histories as short as those typically used in the literature when testing standard models. 
The second is that, even though the problem of sample size is potentially mitigated by the presence of a large number of firms in the cross-section, defaults are correlated across firms and so the benefit of a large cross-section in improving precision is greatly reduced. How severe are these statistical issues? We address this question in a simulation study and base our simulation parameters on the average 10-year BBB default rate of 5.09$$\%$$ over 1970–2001 used in Chen, Collin-Dufresne, and Goldstein (2009). In an economy in which the ex ante 10-year default probability is 5.09$$\%$$ for all firms, we simulate the ex post realized 10-year default frequency over 31 years. We assume that in year 1 we have 445 identical firms, equal to the average number of firms in Moody’s BBB cohorts over the period 1970–2001. In the Black-Cox (and Merton) model firm $$i$$’s asset value under the natural measure follows a GBM: \begin{eqnarray} \label{BlackCoxGBM2} \frac{dV_{it}}{V_{it}}=(\mu-\delta)dt+\sigma dW^P_{it}, \end{eqnarray} (2) where $$\mu$$ is the drift of firm value before payout of the dividend yield $$\delta$$ and $$\sigma$$ is the volatility of firm value. Like in Section 1, we use our average parameter values for the period 1987–2012 estimated in Section 3: $$\mu=10.05\%$$, $$\delta=4.72\%$$, and $$\sigma=24.6\%$$. We introduce systematic risk by assuming that \begin{eqnarray} \label{BaoGBMsimSystAndIdio} W^P_{it}=\sqrt{\rho}W_{st}+\sqrt{1-\rho}W_{it}, \end{eqnarray} (3) where $$W_i$$ is a Wiener process specific to firm $$i$$, $$W_s$$ is a Wiener process common to all firms, and $$\rho$$ is the pairwise correlation between percentage changes in firm value. All the Wiener processes are independent. The firm defaults the first time asset value hits a boundary equal to a fraction $$d$$ of the face value of debt $$F$$, that is, the first time $$V_{\tau}\leq dF$$. The realized 10-year default frequency in the year 1 cohort is found by simulating one systematic and 445 idiosyncratic processes in Equation (3). In year 2 we form a cohort of 445 new firms. The firms in year 2 have characteristics that are identical to those of the year 1 cohort at the time of formation. We calculate the realized 10-year default frequency of the year 2 cohort as we did for the year 1 cohort. Crucially, the common shock for years 1–9 for the year 2 cohort is the same as the common shock for years 2–10 for firms in the year 1 cohort. We repeat the same process for 22 years and calculate the overall average realized cumulative 10-year default frequency in the economy by taking an average of the default frequencies across the 22 cohorts. Finally, we repeat this entire simulation 25,000 times. To estimate the correlation parameter $$\rho$$, we calculate pairwise equity correlations for rated industrial firms in the period 1987–2012. Specifically, for each year we calculate the average pairwise correlation of daily equity returns for all industrial firms for which Standard $$\&$$ Poor’s provide a rating and then calculate the average of the 26 yearly estimates over 1987–2012. We estimate $$\rho$$ to be 20.02$$\%$$. To set the default boundary, we proceed as follows. First, without loss of generality, we assume that the initial asset value of each firm is equal to one. 
This means that the firm's leverage, $$L\equiv\frac{F}{V}=F$$, and we set the default boundary $$dF(=dL)$$ such that the model-implied default probability given in Equation (A2) in the appendix matches the 10-year default rate of 5.09$$\%$$.5 Panel A of Figure 2 shows the distribution of the realized average 10-year default rate in the simulation study, and the black vertical line shows the ex ante default probability of 5.09$$\%$$. The 95$$\%$$ confidence interval for the realized average default rate is wide at [1.15$$\%$$; 12.78$$\%$$]. We also see that the default frequency is significantly skewed to the right; that is, the modal value of around 3$$\%$$ is significantly below the mean of 5.09$$\%$$. This means that the default frequency most often observed (for example, the estimate from the rating agencies) is below the mean. Specifically, although the true 10-year default probability is 5.09$$\%$$, the probability that the observed average 10-year default rate over 31 years is half that level or less is 19.9$$\%$$. This skewness means that the number reported by Moody's (5.09$$\%$$) is more likely to be below the true mean than above it and, in this case, if spreads reflect the true expected default probability, they will appear too high relative to the observed historical loss rate. Figure 2: Distribution of the estimated 10-year BBB default probability when using default rates measured over 31 years. The existing approach in the literature is to use an average historical default rate for a specific rating and maturity as an estimate for the default probability when testing spread predictions of structural models. One example is Chen, Collin-Dufresne, and Goldstein (2009), who use the 10-year BBB default rate of 5.09$$\%$$ realized over the period 1970–2001.
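To make the simulation design concrete, here is a compact sketch of the cohort experiment described above. It is a stand-in rather than the paper's code: the first-passage probability used to calibrate the default boundary is the standard expression for a Brownian motion with drift (playing the role of Equation (A2)), defaults are monitored monthly rather than continuously, and the number of simulated economies is kept small to keep the run time modest.

```python
# Hedged sketch of the cohort simulation: 445 identical firms per cohort,
# 22 overlapping yearly cohorts, 10-year horizon, one common and one
# idiosyncratic Brownian factor, boundary calibrated to a 5.09% target.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

mu, delta, sigma = 0.1005, 0.0472, 0.246      # drift, payout, asset volatility
rho, target_pd, T = 0.2002, 0.0509, 10.0      # correlation, target PD, horizon
n_firms, n_cohorts, steps_per_year = 445, 22, 12
m = mu - delta - 0.5 * sigma**2               # drift of log firm value

def first_passage_prob(a, horizon):
    """P(min of log V hits level a < 0 by `horizon`), with log V_0 = 0."""
    s = sigma * np.sqrt(horizon)
    return (norm.cdf((a - m * horizon) / s)
            + np.exp(2 * m * a / sigma**2) * norm.cdf((a + m * horizon) / s))

# Calibrate the log default boundary so the 10-year default probability is 5.09%.
log_boundary = brentq(lambda a: first_passage_prob(a, T) - target_pd, -10.0, -1e-6)

def average_default_rate(rng):
    """Average realized 10-year default rate across the 22 overlapping cohorts."""
    dt, n_steps = 1.0 / steps_per_year, int(T * steps_per_year)
    z_common = rng.standard_normal((n_cohorts - 1 + int(T)) * steps_per_year)
    rates = []
    for c in range(n_cohorts):
        zc = z_common[c * steps_per_year: c * steps_per_year + n_steps]  # shared shock
        zi = rng.standard_normal((n_firms, n_steps))                     # idiosyncratic
        dW = np.sqrt(rho) * zc + np.sqrt(1.0 - rho) * zi
        log_v = np.cumsum(m * dt + sigma * np.sqrt(dt) * dW, axis=1)     # log V, V_0 = 1
        rates.append((log_v.min(axis=1) <= log_boundary).mean())         # first passage
    return float(np.mean(rates))

rng = np.random.default_rng(0)
draws = np.array([average_default_rate(rng) for _ in range(200)])  # increase in practice
print(f"boundary (fraction of face value): {np.exp(log_boundary):.3f}")
print(f"mean realized rate: {draws.mean():.4f}")
print(f"95% interval: [{np.quantile(draws, 0.025):.4f}, {np.quantile(draws, 0.975):.4f}]")
```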
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8898070454597473, "perplexity": 1511.5183770078736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00355.warc.gz"}
https://www.physicsforums.com/threads/rotational-motion-about-a-fixed-axis-need-help.223375/
# Homework Help: Rotational Motion About a Fixed Axis, NEED HELP

1. Mar 21, 2008

### keylostman

1. The problem statement, all variables and given/known data

A wheel of diameter 0.68 m rolls without slipping. A point at the top of the wheel moves with a tangential speed of 5.4 m/s. At what speed is the axle of the wheel moving? What is the angular speed of the wheel?

How would I approach this problem, and what equations would I need to use?

2. Mar 21, 2008

### keylostman

OK, for the speed of the axle I took 0.5 * 5.4 to get 2.7 m/s, and for the angular speed of the wheel I took 5.4 m/s / 0.34 m to get 15.9 rad/s. Am I correct?

3. Mar 21, 2008

### tiny-tim

Hi keylostman! Are you the same person as th3plan? Why have you posted the same problem twice?
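For reference, the standard rolling-without-slipping relations for this setup can be checked in a few lines. The sketch below assumes the wheel rolls on stationary ground, so the contact point is the instantaneous axis of rotation: the top of the wheel moves at twice the axle speed, and the angular speed is the axle speed divided by the radius.

```python
# Quick check of the rolling-without-slipping kinematics for this problem.
diameter = 0.68          # m
radius = diameter / 2.0  # 0.34 m
v_top = 5.4              # m/s, speed of the topmost point relative to the ground

v_axle = v_top / 2.0     # top point moves at 2 * v_axle, so 2.7 m/s
omega = v_axle / radius  # equivalently v_top / diameter, about 7.9 rad/s
print(f"axle speed    = {v_axle:.2f} m/s")
print(f"angular speed = {omega:.2f} rad/s")
```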
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9613032341003418, "perplexity": 1737.8415965446154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746386.1/warc/CC-MAIN-20181120110505-20181120132505-00375.warc.gz"}
http://math.stackexchange.com/questions/129705/orthogonal-latin-squares
# Orthogonal Latin Squares

I'm not quite sure how to even start this problem. I'm really just looking for direction on how to begin. The $t$ mutually orthogonal Latin squares $A_1, A_2, \ldots, A_t$ of side $n$ have mutually orthogonal subsquares $S_1, S_2, \ldots, S_t$ occupying their upper left $s \times s$ corners. Prove that $n$ is greater than or equal to $(t+1)s$. I know that $t$ must be less than $n$, but I can't find any other information to help me.

Consider the cells in row $s+1$ to the right of column $s$. These $n-s$ cells must contain each of the $s$ symbols of the subsquares, since none of those symbols appear in the $s$ cells in the left part of the row. Furthermore, this must be true for the cells in corresponding positions in each of the $t$ MOLS, and no single position in two distinct orthogonal squares can contain symbols from the subsquares in both, since this would create a pairing that already exists within the orthogonal subsquares. Thus $t$ copies of $s$ symbols must be distributed among only $n-s$ positions, with no position used twice, so $n-s \geq ts$, which yields the desired inequality $n \geq (t+1)s$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9329515099525452, "perplexity": 92.9036814909931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011085177/warc/CC-MAIN-20140305091805-00016-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathematicsi.com/implicit-differentiation/
# Implicit differentiation

This chapter explores implicit differentiation. It covers differentiating a function defined implicitly, and finding equations of tangents and normals to curves defined implicitly. Before attempting this chapter you must have prior knowledge of basic differentiation, and of tangents and normals.

The functions we've explored so far are in the form y = f(x). Functions can also be expressed implicitly; these are relations between x and y which cannot conveniently be rearranged into the form y = f(x). An example is the equation of a circle: it can be rearranged into the form y = f(x), but the result looks too complicated, so it is better to leave it in its implicit form. You may have noticed that implicit functions are a mixture of x's and y's.

## Differentiating implicit functions

Suppose we want dy/dx for a function defined implicitly. We differentiate each term one at a time with respect to x. Terms involving only x are differentiated as usual. For a term such as xy we use the product rule: d/dx (xy) = y + x dy/dx. For terms involving only y we use the chain rule, differentiating with respect to y and multiplying by dy/dx; for example d/dx (y²) = 2y dy/dx, d/dx (y³) = 3y² dy/dx and d/dx (sin y) = cos y · dy/dx. Once every term has been differentiated, we collect the terms containing dy/dx and rearrange to make dy/dx the subject.

## Finding the tangent

We can also find tangents and normals to curves defined implicitly. To find the gradient at a given point we differentiate implicitly, substitute the coordinates of the point into the expression for dy/dx, and then use y − y₁ = m(x − x₁) to write down the equation of the tangent (or of the normal, using the negative reciprocal gradient).

## Exam question

The equation of a circle is given as (x − 1)² + (y + 2)² = 25.

• What is the centre and radius of the circle?
• Find the coordinates where the circle crosses the line x = 4
• Find the equations of the normals to the circle at these points
• Where do the normals intersect?

The centre of the circle is (1, −2) and the radius is √25 = 5.

Next we have to find the coordinates at x = 4. To do that we substitute x = 4 into the equation: (4 − 1)² + (y + 2)² = 25, so (y + 2)² = 16 and y + 2 = ±4 (remember we also have the negative square root). The coordinates are therefore (4, 2) and (4, −6).

The next question to solve is: find the equations of the normals to the circle at these points. The normals intersect at the centre; you must know that all normals to a circle pass through the centre, as a radius is always at 90° to the tangent to the circle. We shall find the equations of the normals below.
First we differentiate to find the gradients of the tangents. Starting from (x − 1)² + (y + 2)² = 25 and differentiating each term with respect to x, we get 2(x − 1) + 2(y + 2) dy/dx = 0, so dy/dx = −(x − 1)/(y + 2).

At (4, 2) we have dy/dx = −3/4, and at (4, −6) we have dy/dx = −3/(−4) = 3/4.

So now we know that at (4, 2) the gradient of the normal is 4/3, the negative reciprocal of −3/4. To find the normal we use y − y₁ = m(x − x₁). So we have y − 2 = (4/3)(x − 4), which rearranges to y = (4/3)x − 10/3.

We also know that the gradient of the normal at (4, −6) is −4/3. Now we can form the equation of the normal: y + 6 = −(4/3)(x − 4), which rearranges to y = −(4/3)x − 2/3.

So the two normals are y = (4/3)x − 10/3 and y = −(4/3)x − 2/3.

Next we have to find where the normals intersect. We solve the normal equations simultaneously: (4/3)x − 10/3 = −(4/3)x − 2/3, so (8/3)x = 8/3 and x = 1. Now we can substitute x = 1 into y = (4/3)x − 10/3 to get y = 4/3 − 10/3 = −2. The normals intersect at (1, −2), which of course is the centre of the circle as we saw above.
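The algebra above can be checked with a computer algebra system. The sketch below assumes SymPy is available and recomputes the implicit derivative, the two normals, and their intersection for the exam circle.

```python
# Check of the exam question with SymPy: implicit derivative, normals at
# (4, 2) and (4, -6), and their intersection (expected to be the centre (1, -2)).
import sympy as sp

x, y = sp.symbols("x y")
circle = (x - 1)**2 + (y + 2)**2 - 25            # implicit form F(x, y) = 0

# Implicit differentiation: dy/dx = -Fx / Fy
dydx = -sp.diff(circle, x) / sp.diff(circle, y)

normals = []
for px, py in [(4, 2), (4, -6)]:
    m_tangent = dydx.subs({x: px, y: py})        # gradient of the tangent
    m_normal = -1 / m_tangent                    # negative reciprocal
    normals.append(sp.Eq(y, m_normal * (x - px) + py))
    print(f"at ({px}, {py}): tangent gradient {m_tangent}, normal gradient {m_normal}")

intersection = sp.solve(normals, [x, y])
print("normals intersect at", intersection)      # expect {x: 1, y: -2}
```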
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9046218991279602, "perplexity": 814.4745110291357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111324.43/warc/CC-MAIN-20160428161511-00072-ip-10-239-7-51.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/530790/drawing-arc-with-pgf
# Drawing arc with pgf

Using pgf, I'd like to draw an arc (part of a circle) with a given center. I don't know the exact co-ordinates of the center, because it was obtained as an intersection of other geometric objects. The desired center has a name, but no known co-ordinates. How can I do this? Basically, I'd like to modify an instruction of the form

\node (H) [name path=H, draw, circle through=(A')] at (E) {};

to produce just a part of the circle.

• You should provide a Minimal Working Example starting with \documentclass and ending with \end{document} that provides all the minimal necessary commands and instructions to understand what you want. You should also specify which libraries you are loading and what you are doing. – gjkf Mar 2, 2020 at 14:07
• I'm a bit unsure what's so hard to understand. I have points A' and E defined and would like to draw merely part of the circle through A' at E. Mar 2, 2020 at 14:11
• Of course, if you know the starting location and angle and radius (needed for arc with tikz), you can locate the center. The problem is usually finding the angles. Mar 3, 2020 at 2:06
• Possible duplicate: tex.stackexchange.com/q/66216/14500 Mar 3, 2020 at 6:24

You can use the arc command of TikZ, like so:

\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\coordinate (O) at (0,0); % Your coordinate name
\draw (O) -- ++(0:1) arc (0:150:1); % start:end:radius
\end{tikzpicture}
\end{document}

which results in something like this. In general your last command will be

\draw (center) -- ++(start:radius) arc(start:end:radius) -- (center);

• That's great, but I'd just like the arc, without the two radii included. Mar 2, 2020 at 14:09
• \documentclass[tikz]{standalone} \begin{document} \begin{tikzpicture} \coordinate (O) at (0,0); % Your coordinate name \draw (O)+(0:1) arc (0:150:1); % start:end:radius \end{tikzpicture} \end{document} ? Mar 2, 2020 at 14:13

I did not really understand the question. I assume you want to create a circular node centered on E which goes through A but is only partially drawn. I suggest the solution below:

\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc,intersections}
\usetikzlibrary{through}
\begin{document}
\begin{tikzpicture}
\node[label=A] (A) at (1,1) {+};
\node[label=E] (E) at (0,2){+};
\node (H) [name path=H, circle through=(A)] at (E) {};
\draw[red,thick] let \p1=(E.center), \p2=(A),
  \n1={veclen(\x2-\x1,\y2-\y1)} in (H.45) arc (45:180:\n1);
\draw (H) -- ++(3,4);
\draw (H) -- ++(-3,5);
\end{tikzpicture}
\end{document}

1) First of all, if the center has a name then you can know its coordinates:

102.6 Extracting Coordinates
There are two commands that can be used to "extract" the x- or y-coordinate of a coordinate.

\pgfextractx{\pgf@x}{\pgfpointanchor{E}{center}}
\pgfextracty{\pgf@y}{\pgfpointanchor{E}{center}}

2) With tkz-euclide you have:

\documentclass{standalone}
\usepackage{tkz-euclide}
\begin{document}
\begin{tikzpicture}
\coordinate[label=$O$] (O) at (3,1);
\coordinate [label=$A$](A) at (1,5);
\coordinate [label=$B$](B) at (2,4);
\coordinate [label=$C$](C) at (3,2);
\coordinate [label=$D$](D) at (5,0);
\coordinate [label=$E$](E) at (5,1);
\tkzCompass[thick,blue](O,A)
\tkzCompass[thick,red,delta=20](O,B)
\tkzCompass[thick,orange,length=2](O,C)
\tkzDrawArc[thick,brown](O,D)(E)
\foreach \point in {A,...,E,O} \fill [black,opacity=.5] (\point) circle (1pt);
\end{tikzpicture}
\end{document}

a) The macro \tkzCompass can draw an arc with a given center through a point.
Without options (you can also use TikZ's options) the arc has a length of 1 cm; b) you can use the option length to change the default value, e.g. length=2 for 2 cm; c) you can use the option delta: delta=20 means that each end of the arc is extended by 20 degrees, so the ends make an angle of 40 degrees with the center; d) more subtle is the last possibility: with \tkzDrawArc(O,D)(E) you draw an arc with center O passing through D and stopping on the half-line [OE).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9428093433380127, "perplexity": 2436.903682138607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00091.warc.gz"}
https://www.arxiv-vanity.com/papers/cond-mat/9805228/
# Creation of gap solitons in Bose-Einstein condensates O. Zobay, E. M. Wright, and P. Meystre Optical Sciences Center, University of Arizona, Tucson, Arizona 85721 ###### Abstract We discuss a method to launch gap soliton-like structures in atomic Bose-Einstein condensates confined in optical traps. Bright vector solitons consisting of a superposition of two hyperfine Zeeman sublevels can be created for both attractive and repulsive interactions between the atoms. Their formation relies on the dynamics of the atomic internal ground states in two far-off resonant counterpropagating -polarized laser beams which form the optical trap. Numerical simulations show that these solitons can be prepared from a one-component state provided with an initial velocity. ###### pacs: PACS numbers: 03.75.Fi, 05.30.Jp, 32.80.Pj ## I Introduction The Gross-Pitaevskii equation (GPE) has been used successfully in the recent past to explain various experiments on atomic Bose-Einstein condensates (see, e.g., the references in [1, 2]), and its validity for the description of the condensate dynamics at zero temperature is now well accepted. A further confirmation would be provided by the observation of solitary matter waves, the existence of which is generic to nonlinear Schrödinger wave equations such as the GPE [3]. Such solitary waves could also find applications in the future, e.g., in the diffractionless transport of condensates. Various theoretical studies of this problem have already been performed, predicting in particular the existence of bright solitons, with corresponding spatially localized atomic density profiles, for condensates with attractive interactions [2]. Research on condensates with repulsive interactions has focused on the formation of gray solitons which correspond to dips in the atomic density. Their creation was investigated in Refs. [1, 4], their general properties were discussed in [5], and Ref. [2] worked out their analogy to the Josephson effect. Complementary and previous to this work, the formation of atomic solitons was also examined theoretically in the context of nonlinear atom optics [6, 7, 8, 9, 10, 11]. In these studies the interaction between the atoms was assumed to result from laser-induced dipole-dipole forces, but this theory has not been experimentally tested so far. The reliance on attractive interactions to achieve bright matter-wave solitons in Bose-Einstein condensates is of course a serious limitation, due to the difficulties associated with achieving condensation in the first place for such interactions. The purpose of the present article is the theoretical exposition of an experimentally realizable geometry that allows one to create bright gap soliton-like structures in Bose condensates, for both attractive and repulsive signs of the two-body scattering length. Gap solitons result from the balance of nonlinearity and the effective linear dispersion of a coupled system, e.g., counterpropagating waves in a grating structure, and appear in the gaps associated with avoided crossings. Gap solitons have previously been studied in a variety of physical contexts, but particularly in nonlinear optics [12]. They were also studied in the framework of nonlinear atom optics [7], but in this case the two states involved are connected by an optical transition, and the effects of spontaneous emission can cause significant problems [8]. Several main reasons motivate our renewed interest in this problem. 
First, we already mentioned that bright gap solitons are known to exist in nonlinear systems irrespective of whether the nonlinear interaction is repulsive or attractive [13]. With regard to atomic condensates this means that they should be observable, at least in principle, also for Na and Rb where the positive interatomic scattering length gives rise to a repulsive mean interaction. Further, the study of bright solitary waves is of interest as they might be easier to detect than gray ones, and they could find future applications, e.g. in atomic interferometry [11]. An additional reason to study atomic gap solitons is the fact that they consist inherently of a superposition of two internal states, in our case two different Zeeman sublevels of the atomic ground state. As such, they offer a further example of a multicomponent Bose condensate the study of which has already received much interest recently [14, 15, 16]. Finally, the recent demonstration of far-off resonant dipole traps for condensates opens up the way to the “easy” generation and manipulation of such spinor systems. This paper is organized as follows. Section II describes our model. The physics relevant for the generation of gap solitons as well as orders of magnitudes for the various experimental parameters involved are discussed in Sec. III, while Sec. IV presents a summary of our numerical results. Finally, conclusions are given in Sec. V. ## Ii The model The situation we consider for the generation of atomic gap solitons makes use of the recently achieved confinement of Bose condensates in far off-resonant optical dipole traps [17]. We consider explicitly a trap consisting of two focused laser beams of frequency counterpropagating in the -direction and with polarizations and , respectively. These lasers are used to confine a Bose condensate which is assumed for concreteness to consist of Na atoms. The condensate is initially prepared in the atomic ground state. For lasers far detuned from the resonance frequency of the nearest transition to an excited hyperfine multiplet the dynamics of a single atom in the trap can be described by an effective Hamiltonian of the form [18] Heff = P22m+d0ℏδ′(R)|0⟩⟨0| +d1ℏδ′(R)(|−1⟩⟨−1|+|1⟩⟨1|) +d2ℏδ′(R)(|1⟩⟨−1|e2iKlZ+|−1⟩⟨1|e−2iKlZ), which is derived by adiabatically eliminating the excited states in the dipole and rotating wave approximations. In the Hamiltonian (II), the operators and denote the center-of-mass position and momentum of the atom of mass , the ket labels the magnetic sublevel of the Na ground states, , and . Furthermore, δ′(R)=δs(R)/2, (2) where we have introduced the detuning and the position-dependent saturation parameter s(R)=D2E2(R)δ2+Γ2/4≃D2E2(R)δ2. (3) In this expression, denotes the reduced dipole moment between the states and , is the upper to lower state spontaneous emission rate, and is the slowly varying laser field amplitude at point , the plane-wave factors having already been removed from the counterpropagating waves. In the following, we assume that , which is identical for both fields, varies only in the transverse - and -directions and is constant along the trap axis : This approximation is valid if the longitudinal extension of the confined BEC is much less than the Rayleigh range of the trapping fields, a condition we assume is satisfied. The numerical coefficients , which depend on the specific value of , are of the order of or somewhat less than unity. 
Note that except insofar as appears in the saturation parameter , the effects of spontaneous emission are neglected in this description111In the discussion of the atomic dynamics and Eq. (II) we have assumed that the initial state is coupled only to one excited hyperfine multiplet. However, in the optical trap the detuning of the laser frequency is large compared even to the fine structure splitting of the excited states, so that in principle several different hyperfine multiplets should be taken into account. Fortunately, the coupling to any of these multiplets gives rise to addititional contributions to Eq. (II) which are of the same analytical structure as the one given above. Only the values of , , and are different. This means that Eq. (II) may still be used in this case, the effects of the additional multiplets being included as modifications of the values of the coefficients . For simplicity, however, we will use the values of the transition in the following, i.e., , , and [18].. The first term in the single-particle Hamiltonian (II) describes the quantized center-of-mass atomic motion, the second and third terms the (position dependent) light-shifts of the states, and the final term, proportional to , describes the coupling between the states by the counterpropagating fields. For example, coupling between the and states arises from the process involving absorption of one photon and subsequent re-emission of a photon. However, since the circularly polarized fields are counterpropagating this process also involves a transfer of linear momentum along the -axis, and this accounts for the appearance of the spatially periodic factors, or gratings, in the coupling terms. As shown below, these gratings provide the effective linear dispersion which allows for gap solitons in combination with the nonlinearity due to many-body effects. To describe the dynamics of the Bose condensate we introduce the macroscopic wave function normalized to the total number of particles . Here is omitted as it is coupled to neither by nor by the nonlinearity if it vanishes initially, which we assume in the following. The time evolution of the spinor is determined by the two-component Gross-Pitaevskii equation iℏ∂Ψ∂t = HeffΨ(R,t) + ([Ua|ψ1(R,t)|2+Ub|ψ−1(R,t)|2]ψ1(R,t)[Ub|ψ1(R,t)|2+Ua|ψ−1(R,t)|2]ψ−1(R,t)). In the following we approximate the nonlinearity coefficients by with the -wave scattering length. To identify the key physical parameters for gap soliton formation, and to facilitate numerical simulations, it is convenient to re-express Eq. (II) in a dimensionless form by introducing scaled variables , and with tc = 1/(d2δ′0), (5) lc = tc⋅ℏK/m, (6) ρc = |d2ℏδ′0/U|,, (7) where . Note that for our choice of , and for red detuning, we have . Equation (II) then reads i∂ψ∂τ = ⎡⎢ ⎢⎣−MΔ+d1δ′(r)d2δ′0e2iklzδ′(r)/δ′0e−2iklzδ′(r)/δ′0−MΔ+d1δ′(r)d2δ′0⎤⎥ ⎥⎦(ψ1ψ−1) (8) +[sgn(d2ℏδ′0/U)(|ψ1|2+|ψ−1|2)](ψ1ψ−1), where is the Laplacian in scaled variables, and we have introduced the dimensionless mass-related parameter M=d2δ′0m/(2ℏK2l), (9) so that . ## Iii Gap solitons In this section we discuss the conditions under which Eqs. (8) yield gap soliton solutions. Rather than reproducing the explicit analytic forms of these solutions, which are readily available in the literature, here we introduce the reduced equations which yield gap solitons, and discuss the physics underlying their formation. Estimates for the orders of magnitude of various parameters characterizing atomic gap solitons are also given. 
### iii.1 Reduced soliton equations Two key approximations underly the appearance of gap solitons: First, we neglect all transverse variations of the electromagnetic and atomic fields, thereby reducing the problem to one spatial variable . Furthermore, we can set . Second we express the atomic fields in the form ψ±1(z,t)=exp{i[±klz−(1/(4M)−1)τ]}ϕ±1(z,t), (10) and we assume that the atomic field envelopes vary slowly in space in comparison to the plane-wave factors that have been separated out, so that only first-order spatial derivatives of the field envelopes need be retained and only the spatial harmonics indicated included. Under these assumptions Eqs. (8) reduce to i(∂∂τ±2kl∂∂z)(ϕ1ϕ−1)=(0110)(ϕ1ϕ−1) +sgn(d2ℏδ′0/U)(|ϕ1|2+|ϕ−1|2)(ϕ1ϕ−1). (11) Aceves and Wabnitz [13] have shown that these dimensionless equations have explicit travelling solitary wave solutions of hyperbolic secant form. Thus, the optical trapping geometry we propose here can support atomic gap solitons under the appropriate conditions. Having established that our system can support gap solitons our goal in the remainder of this paper is to demonstrate through numerical simulations that these solitons, or at least a remnant of them, can arise for realistic atomic properties and that they can be created from physically reasonable initial conditions. In particular, the exact gap solitons solutions are coherent superpositions of the states where the phase and amplitude of the superposition varies spatially in a specific manner: it is not a priori clear that these gap solitons can be accessed from an initial state purely in the state for example. Furthermore, inclusion of transverse variations and spatial derivatives beyond the slowly-varying envelope approximation introduced above could, in principle, destroy the solitons [19]. For the numerical simulations to be presented here we work directly with Eqs. (8) which does not invoke these approximations. ### iii.2 Intuitive soliton picture A simple and intuitively appealing explanation of the reason why Eq. (8) supports soliton solutions goes as follows: Consider first the one-dimensional nonlinear Schrödinger equation i˙ψ=−M∂2ψ/∂z2+g|ψ|2ψ. (12) This equation has bright soliton solutions if the effects of dispersion and nonlinearity can cancel each other. For this to happen, it is necessary that . In the usual case the mass-related coefficient is positive, so that bright solitons can only exist in condensates with attractive interactions . But consider now the dispersion relation for the linear part of Eq. (8) obtained after neglecting the transverse dimensions and performing the transformation ψ±1=a±1exp{i[k±1z−ω(k)τ]}. (13) Thereby, for the states, and is a relative longitudinal wave vector. The dispersion relation consists of two branches, which in the absence of linear coupling take the form of two parabolas corresponding to the free dynamics of the internal states . However, the linear coupling between these states results in an avoided crossing at , see Fig. 1. If the system is in a superposition of eigenstates pertaining to the lower branch of the dispersion relation, then at the crossing it can be ascribed a negative effective mass. One can thus expect that in this case the system can support soliton solutions even though the interaction is repulsive. For an attractive interaction, in contrast, soliton creation should be possible in all regions of the spectrum with positive effective mass. 
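The avoided crossing invoked above is easy to visualise numerically. The sketch below plots the two branches obtained from the linear, transversely uniform part of the scaled coupled equations, with the common light-shift term dropped since it only offsets both branches equally: writing plane waves for the two Zeeman components with relative wave number k gives ω±(k) = M(k² + k_l²) ± sqrt((2 M k k_l)² + 1) in the dimensionless units used here. This is an illustrative derivation rather than a figure from the paper, and the values of M and k_l below are placeholders chosen so that the lower branch bends downward (negative effective mass) near k = 0.

```python
# Sketch of the two-branch dispersion relation behind the avoided-crossing picture.
# Assumes the linearized, transversely uniform scaled equations with the common
# light-shift term dropped; M and k_l are illustrative placeholder values.
import numpy as np
import matplotlib.pyplot as plt

M, k_l = 0.2, 2.5                              # placeholder dimensionless mass, wave number
k = np.linspace(-3, 3, 601)                    # relative wave number (scaled units)
gap = np.sqrt((2 * M * k * k_l) ** 2 + 1)      # coupling of strength 1 opens the gap
w_upper = M * (k ** 2 + k_l ** 2) + gap
w_lower = M * (k ** 2 + k_l ** 2) - gap

plt.plot(k, w_upper, label="upper branch")
plt.plot(k, w_lower, label="lower branch (negative effective mass near k = 0)")
plt.xlabel("relative wave number k (scaled units)")
plt.ylabel("frequency (scaled units)")
plt.legend()
plt.show()
```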
From the dispersion relation picture, one can easily infer further properties of repulsive interaction solitons. First, they will only exist for weak enough dispersion, as the lower branch of the dispersion curve has a region with negative curvature only as long as the dimensionless mass . Also, the maximum possible velocity can be estimated to be of the order of which is the group velocity at the points of vanishing curvature in the dispersion relation. Finally, for a soliton at rest the contributions of the internal states and will approximately be equal, but solitons traveling with increasing positive, resp. negative, velocity will be increasingly dominated by the , resp. , contribution. This qualitative discussion is in agreement with the analytic results of Ref.[13]. More precisely, the solutions of Ref. [13] are solitary waves. In the following, we will be concerned with the creation of long-lived localized wave packet structures which are brought about by the interplay between nonlinearity and dispersion described above. We will continue to refer to these structures as gap solitons for simplicity. ### iii.3 Soliton estimates We now turn to a discussion of the typical orders of magnitude which characterize the soliton solutions of Eq. (II). We note from the outset that the analytical solutions of Ref. [13], as well as our numerical simulations, indicate that these characteristic scales can be directly inferred from the scale variables in Eqs. (5)–(7) which bring the Gross-Pitaevskii equations into dimensionless form. For example, the spatial extension of the scaled wave function , as well as the total norm are of order unity.222Note that the precise values of and are only of relevance for the scaling between Eqs. (II) and (8). They do not influence the essential physics of the system. In order to obtain estimates for the characteristic length, time and density introduced in Eqs. (5)–(7) we use the parameter values of the Na experiment of Ref. [17] as a guidance. For Sodium, MHz, the saturation intensity mW/cm and the resonance wavelength nm. Choosing the trap wavelength to be far red-detuned from this value, with nm, and a maximum laser intensity kW/cm one obtains a characteristic scale for the time evolution of the condensate of the order of 50 s. The characteristic length is obtained by multiplication with the recoil velocity =1.8 cm/s, which yields m. This yields the dimensionless mass . Finally, the order of magnitude of the characteristic density cm, which means that a soliton typically contains of the order of atoms. These estimates are confirmed by our numerical simulations, which show that the typical extension of a soliton is several in the -direction, about one in the transverse direction and it contains about 1000 atoms. Our numerical simulations show that the maximum dimensionless atomic density in a soliton is always of the order of , which appears to produce the nonlinearity necessary to balance the effects of dispersion. From this value, it is possible to obtain a first estimate of the transverse confinement of the condensate required in our two-dimensional model: We assume that the transverse spatial dependence of the atomic density can be modeled as the normalized ground state density of the harmonic trap potential [5]. The soliton density can hence be roughly estimated as , where is the Heaviside step function, a typical total number of atoms in the soliton and its length. 
From this condition, and using the typical values for and previoulsy discussed leads to a lower limit for in the range between 100 and 1000 Hz. Altogether, these various estimates are well within experimental reach. ## Iv Numerical results Having characterized the idealized gap soliton solutions of Eq. (II) we now investigate whether they can be accessed from realistic initial conditions. To this end, we study numerically the following situation. A condensate of atoms in the internal state is initially prepared in a conventional optical dipole trap which provides only a tranverse confinement potential . This potential is assumed to be Gaussian, with a trapping frequency at the bottom. Axially, the condensate is confined by a harmonic magnetic trap of frequency . At the magnetic trap is turned off and the polarizations of the trapping light fields are switched to the configuration, with being unchanged. The simulations are performed in two spatial dimensions only, and , as this can already be expected to capture the relevant physics without requiring excessive computational resources. We concentrate on the more interesting case of a condensate with repulsive interactions since that is the new case in which solitons are expected. In order to estimate atom numbers the transformation between Eqs. (8) and (II) is performed after replacing by where the variance is determined from the two-dimensional wave packet structure in question. The main purpose of the numerical simulations is to show that gap solitons can be formed out of condensate wave functions whose initial parameters lie within a relatively broad range. It is only necessary to choose , , and such that the spatial extension of the initial condensate and the atom number are comparable to typical soliton values. They need not take on precisely defined values and the initial wave function does not have to match closely the form of a soliton. However, the condensate will not couple effectively to a soliton if it is at rest initially. Such a situation corresponds to the point with in Fig. 1 where the effective mass is still positive (). The key to an efficient generation of solitons is therefore to provide the condensate with an initial velocity close to the recoil velocity in the -direction. This may be achieved, e.g., by suddenly displacing the center of the magnetic trap. The initial wave function can then be written as with the ground state of the combined optical and magnetic trap and [3]. It is thus placed in the vicinity of the avoided crossing. Experimentally, condensates have already been accelerated to velocities in this range by a similar method in connection with the excitation of dipole oscillations [20]. Figure 2 shows an illustrative example for the formation of a soliton out of the initial distribution. It depicts the evolution of the transverse averaged atomic density N(z;τ)=∫dx(|ψ1(x,z;τ)|2+|ψ−1(x,z;τ)|2), (14) as a function of the scaled variables and . In this example, nm, the maximum intensity kW/cm, s, s, and . The characteristic scales are thus m and s, the coefficient . We choose which corresponds to an initial atom number of 2900, approximately. Figure 2 shows the formation of a soliton after an initial transient phase having a duration of 50 , approximately. This transient phase is characterized by strong “radiation losses”. They occur because half of the initial state pertains to the upper branch of the dispersion relation which cannot sustain solitons. 
The shape of the created soliton is not stationary in time but appears to oscillate. Further examination shows that norm of the state is slightly larger than that of the state, as is expected for a soliton moving slowly in negative -direction [13]. Figure 3 shows the atomic density in the soliton at . The soliton contains about 500 atoms. The inset depicts the longitudinally integrated density and the transverse confinement potential in the shape of an inverted Gaussian. The soliton thus spreads out over half the width of the potential well, approximately. Various numerical simulations were performed in order to assess the dependence of soliton formation on the various initial parameters. When changing the atom number a soliton was formed over the whole investigated range between 500 and 4000 atoms. With increasing and thus increasing effects of the nonlinearity the final velocity of the soliton changed from negative to positive values. For large a tendency to form two soliton wave packets out of the initial state was observed, however, the formation of each of these solitons is accompanied by large radiation losses which destabilize the other one. As to the transverse confinement parameter soliton formation was observed for s. At s a stable structure was no longer attained which is in rough agreement with the estimate given above. Whereas these numbers indicate a relatively large freedom in the choice of and (for similar results can be expected) a somewhat more restrictive condition is placed on the initial velocity . Its value should be chosen from the interval between 0.8 and 1.0 in order to guarantee soliton formation, the lower bound being determined by the point of vanishing curvature in the dispersion relation. For the initial wave function is situated more and more on the upper branch of the dispersion relation so that the tendency to form solitons is diminished rapidly. ## V Summary and conclusion In conclusion, we have demonstrated that gap soliton-like structures can be created in a Bose condensate confined in an optical dipole trap formed by two counterpopagating -polarized laser beams. Bright solitons can be formed not only for atomic species with attractive interactions but also in the repulsive case. This is rendered possible because the atoms can be ascribed a negative effective mass if their velocity is close to the recoil velocity. The repulsive interaction solitons are inherently superpositions of two hyperfine Zeeman sublevels. The discussion of characteristic scales and numerical simulations indicated that the actual observation of these structures should be achievable within the realm of current experimental possibilities. In our theoretical treatment spontaneous emission was neglected, an approximation justified by the large detunings in the optical trap [17]. The effects of anti-resonant terms, which were also ignored, might be of more importance. This question, as well as three-dimensional numerical studies, could be the subject of future work. ###### Acknowledgements. We have benefited from numerous discussions with E. V. Goldstein. This work is supported in part by the U.S. Office of Naval Research Contract No. 14-91-J1205, by the National Science Foundation Grant PHY95-07639, by the U.S. Army Research Office and by the Joint Services Optics Program. ## References • [1] T. F. Scott, R. J. Ballagh, and K. Burnett, Report No. cond–mat/9711111. • [2] W. P. Reinhardt and C. W. Clark, J. Phys. B 30, L785 (1997). • [3] S. A. Morgan, R. J. Ballagh, and K. Burnett, Phys. 
Rev. A 55, 4338 (1997). • [4] R. Dum, J. I. Cirac, M. Lewenstein, and P. Zoller, Report No. cond–mat/9710238. • [5] A. D. Jackson, G. M. Kavoulakis, and C. J. Pethick, Report No. cond–mat/9803116. • [6] G. Lenz, P. Meystre, and E. W. Wright, Phys. Rev. Lett. 71, 3271 (1993). • [7] G. Lenz, P. Meystre, and E. W. Wright, Phys. Rev. A 50, 1681 (1994). • [8] K. J. Schernthanner, G. Lenz, and P. Meystre, Phys. Rev. A 50, 4170 (1994). • [9] W. Zhang, D. F. Walls, and B. C. Sanders, Phys. Rev. Lett. 72, 60 (1994). • [10] S. Dyrting, Weiping Zhang, and B. C. Sanders, Phys. Rev. A 56, 2051 (1997). • [11] M. Holzmann and J. Audretsch, Europhys. Lett. 40, 31 (1997). • [12] C. M. de Sterke and J. E. Sipe, in Progress in Optics, edited by E. Wolf (Elsevier, Amsterdam, 1994), Vol. XXXIII. • [13] A. B. Aceves and S. Wabnitz, Phys. Lett. A 141, 37 (1989). • [14] T.-L. Ho and V. B. Shenoy, Phys. Rev. Lett. 77, 2595 (1996). • [15] E. V. Goldstein and P. Meystre, Phys. Rev. A 55, 2935 (1997). • [16] C. J. Myatt, E. A. Burt, R. W. Ghrist, E. A. Cornell, and C. E. Wieman, Phys. Rev. Lett. 78, 586 (1997). • [17] D. M. Stamper-Kurn, M. R. Andrews, A. P. Chikkatur, S. Inouye, H.-J. Miesner, J. Stenger, and W. Ketterle, Phys. Rev. Lett. 80, 2027 (1998). • [18] C. Cohen-Tannoudji, in Fundamental Systems in Quantum Optics, edited by J. Dalibard, J.-M. Raimond, and J. Zinn-Justin (North-Holland, Amsterdam, 1992). • [19] A. R. Champneys, B. A. Malomed, and M. J. Friedman, Phys. Rev. Lett. 80, 4169 (1998). • [20] D. S. Durfee and W. Ketterle, Opt. Express 2, 299 (1998).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216679334640503, "perplexity": 645.9979865466472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703550617.50/warc/CC-MAIN-20210124173052-20210124203052-00126.warc.gz"}
https://www.statsmodels.org/devel/generated/statsmodels.discrete.discrete_model.MultinomialResults.pred_table.html
# statsmodels.discrete.discrete_model.MultinomialResults.pred_table

MultinomialResults.pred_table()[source]

Returns the J x J prediction table.

Notes

pred_table[i,j] refers to the number of times "i" was observed and the model predicted "j". Correct predictions are along the diagonal.
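A short usage sketch follows. The synthetic data and the multinomial-logit fit are illustrative choices; pred_table() itself is the only call being documented above.

```python
# Illustrative use of pred_table() on a small synthetic multinomial logit fit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(rng.standard_normal((n, 2)))
# Three latent scores; the observed outcome is the argmax (J = 3 categories).
beta = np.array([[0.0, 0.5, -0.5], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
y = np.argmax(X @ beta + rng.gumbel(size=(n, 3)), axis=1)

res = sm.MNLogit(y, X).fit(disp=False)
print(res.pred_table())   # 3 x 3 array; entry [i, j]: observed i, predicted j
```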
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9971136450767517, "perplexity": 2459.9427429926377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348492295.88/warc/CC-MAIN-20200604223445-20200605013445-00021.warc.gz"}
https://sml-group.cc/blog/2020-active-meta-learning/
Probabilistic Active Meta Learning (PAML)

Meta-learning can make machine learning algorithms more data-efficient, using experience from prior tasks to learn related tasks quicker. Since some tasks will be more or less informative with respect to performance on any given task in a domain, an interesting question to consider is: how might a meta-learning algorithm automatically choose an informative task to learn? Here we summarise probabilistic active meta-learning (PAML): a meta-learning algorithm that uses latent task representations to rank and select informative tasks to learn next.

PAML

In our paper [1], we consider a setting where the goal is to actively explore a task domain. We assume that the meta-learning algorithm is given a set of task descriptive observations (task descriptors) to select the next task (akin to a continuous or discrete action space). For example, task descriptors might be fully or partially observed task parameterisations (e.g., weights of robot links), high-dimensional descriptors of tasks (e.g., image data of different objects for grasping), or simply a few observations from the task itself.

PAML is based on the intuition that by formalising meta-learning as a latent variable model [2, 3], the learned task embeddings will represent task differences in a way that can be exploited to make decisions about what task to learn next. Figure 2 shows the graphical model for active meta-learning that underpins PAML. Given a set of training datasets, learning and inference is done jointly by maximising a lower bound on the log model evidence (ELBO) with respect to global model parameters $\theta$ and the variational parameters $\phi$ that approximate the posterior over the latent task variables $\boldsymbol{h}_i$. Since the variational posterior is chosen to be computable in closed form (e.g. Gaussian), we can naturally define a utility function as the self-information (or surprise) of a point under a mixture distribution defined by the training tasks, i.e., $$u(\boldsymbol{h}^{*}) := -\log \sum\nolimits_{i=1}^N q_{\phi_i}(\boldsymbol{h}^*) + \log N,$$ where $N$ is the number of training tasks and $\boldsymbol{h}^*$ is the point being evaluated. The full PAML algorithm is illustrated in Figure 3.

Results

In the paper [1], we run experiments on simulated robotic systems. We test PAML's performance on varying types of task descriptors. We generate tasks within domains by varying configuration parameters of the simulator, such as the masses and lengths of parts of the system. We then perform experiments where the learning algorithm observes: (i) fully observed task parameters, (ii) partially observed task parameters, (iii) noisy task parameters, and (iv) high-dimensional image descriptors. We compare PAML to uniform sampling (UNI), used in recent meta-learning work [4] and equivalent to domain randomization [5], Latin hypercube sampling (LHS), and an oracle. Figure 4 shows the results for observed task descriptors and Figure 5 for image task descriptors. In all experiments, we see a noticeable improvement in data-efficiency, measured as the predictive performance (RMSE and negative log-likelihood, NLL) on a set of test tasks, plotted against the number of training tasks added.

Conclusion

To summarise, PAML is a probabilistic formulation of active meta-learning. By exploiting learned task representations and their relationship in latent space, PAML can use prior experience to select more informative tasks.
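As a concrete illustration of the utility above, the sketch below scores candidate latent points by their surprise under an equally weighted mixture of the (Gaussian) variational posteriors of the training tasks and picks the most surprising candidate. The latent dimensionality, the posterior means and covariances, and the candidate set are all made-up placeholders, not values from the paper.

```python
# Hedged sketch of the task-selection rule: score candidates h* by
# u(h*) = -log( sum_i q_{phi_i}(h*) ) + log N and pick the argmax.
import numpy as np
from scipy.stats import multivariate_normal

# N training tasks, each with a Gaussian variational posterior q_{phi_i}(h).
means = np.array([[-1.0, 0.0], [0.5, 0.5], [1.5, -0.5]])   # placeholder (N, D)
covs = [0.1 * np.eye(2) for _ in means]                     # placeholder covariances

def utility(h_star):
    """Self-information of h* under the equally weighted posterior mixture."""
    mix = sum(multivariate_normal(m, c).pdf(h_star) for m, c in zip(means, covs))
    return -np.log(mix) + np.log(len(means))

candidates = np.array([[0.0, 0.0], [2.5, 1.0], [-0.5, 0.2]])  # placeholder candidates
scores = [utility(h) for h in candidates]
print("utilities:", np.round(scores, 3))
print("most informative candidate index:", int(np.argmax(scores)))
```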
The flexibility of the underlying active meta-learning model enables PAML to do this even when the task descriptors (the representation of the tasks observed by the model) are partially observed, or even when they are images.

References

1. J. Kaddour, S. Sæmundsson, and M. Deisenroth. Probabilistic Active Meta-Learning. NeurIPS, 2020.
2. S. Sæmundsson, K. Hofmann, and M. Deisenroth. Meta Reinforcement Learning with Latent Variable Gaussian Processes. UAI, 2018.
3. J. Gordon, J. Bronskill, M. Bauer, S. Nowozin, and R. Turner. Meta-learning Probabilistic Inference for Prediction. ICLR, 2019.
4. C. Finn, P. Abbeel, and S. Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. ICML, 2017.
5. B. Mehta, M. Diaz, F. Golemo, C. J. Pal, and L. Paull. Active Domain Randomization. CoRL, 2020.
https://www.ideals.illinois.edu/handle/2142/80627
## Files in this item

3395560.pdf (2MB, PDF; no description provided)

## Description

Title: Strongly Interacting Fermi Gases, Radio Frequency Spectroscopy and Universality
Author(s): Zhang, Shizhong
Doctoral Committee Chair(s): Leggett, Anthony J.
Department / Program: Physics
Discipline: Physics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Physics, Low Temperature
Abstract: In Chapter 5, we consider the BEC-BCS crossover problem in the dilute atomic gases from a general point of view. We show that as a result of the specific properties of the ultra-cold Fermi gases, there exists a universal function which incorporates all the many-body information of the system. Although we do not yet know how to compute the function analytically, we show that many physical quantities can be expressed in terms of this function. It gives an intuitive way of understanding how universality arises in the cold atomic gases.
Issue Date: 2009
Type: Text
Language: English
Description: 133 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2009.
URI: http://hdl.handle.net/2142/80627
Other Identifier(s): (MiAaPQ)AAI3395560
Date Available in IDEALS: 2015-09-25
Date Deposited: 2009
http://math.stackexchange.com/users/4175/andry?tab=activity&sort=comments
Andry

Reputation: 396. Impact: ~10k people reached, 32 votes cast.

# 52 Comments

Feb 14, on Infinite self-convolution for a function: I had sort of the same feeling... Would you be able to link me to a proof or something? Thank you

Mar 9, on Queue system with queue-triggered input process: I labeled it as homework, thank you for telling...

Sep 10, on Self multiplication of a CDF degenerates into a Dirac Delta?: Thank you very much for your answer, it could provide many useful details. I checked the other one as the correct answer because it simply makes more explicit what was questioned... but your answer provides good Math background. Thank you!!!

Apr 13, on Infinite self-convolution for a function: @StefanSmith: Yeah, I am also getting contradictory results. It is not easy to manage this thing here... Furthermore, nobody asked me to solve this specific problem, actually it is just something that I need to do in order to achieve another target.

Apr 13, on Infinite self-convolution for a function: Yes! I thought it was the same, right?

Apr 12, on Infinite self-convolution for a function: Variance increases every time... and it seems not to reach a stable value...

Mar 20, on Problems getting transformation function from source and destination random variables knowledge when handling the discrete case: Thank you user65384 for your answer, but I am afraid to say that you did not get the point. I do not want to know how to get $F_Y(y)$; in my case I suppose I already have it. In your example you use $g(\cdot) = ln(\cdot)$ but in my question I pointed out clearly that $g$ must be $g(\cdot) = F_Y^{-1}(F_X(\cdot))$. In particular I need to consider the case where $F_Y(y)$ is a stair function that, therefore, cannot be inverted!

Mar 19, on Deriving the transformation function of a random variable from the original and the final distributions: Just a question, how can I make the transformation when X and Y are discrete? $F_Y^{-1}(y)$ cannot be done...

Feb 18, on Deriving the process of successfully consumed requests from the process of request-producers and the process of request-consumers: At the moment I am thinking how to re-write the question in a better way... Need some time, sorry... you are right btw...

Feb 18, on Deriving the process of successfully consumed requests from the process of request-producers and the process of request-consumers: This is what I am trying to understand... I provided some initial data, but I am trying to devise a way to get this thing done... In my question I just wanted to know about a possible approach using the two quantities I introduced (say the two probabilities). I am aware that the question is getting a little vague... I'll try to edit it...

Feb 18, on Deriving the process of successfully consumed requests from the process of request-producers and the process of request-consumers: @joriki: It is one of the things I am asking as well...

Feb 17, on Deriving the process of successfully consumed requests from the process of request-producers and the process of request-consumers: I had a feeling... But I cannot figure out how to adapt a death-birth model to this scenario. Btw, gonna check it out, thank you very much!

Jan 27, on Mean of iid random variables, problem understanding a passage in a paper: OK, now I think I understand... he is treating $r$, which is a probability, like a random variable...
Jan 27, on Mean of iid random variables, problem understanding a passage in a paper: You know I have a problem here... Sure you are right, but what about $P(r)$? What does it mean? It is a pdf, well, but here the mean is considered on a continuous r.v. However I would say that $P(r)$ is the number of edges whose $r_{i,j}$ is $r$ (say) on the total number of edges... I cannot figure out what P(r) represents...

Jan 27, on Mean of iid random variables, problem understanding a passage in a paper: I am providing a link to the paper: repositories.lib.utexas.edu/bitstream/handle/2152/13376/… Please refer to page 11 and you'll find it.

Jan 27, on Mean of iid random variables, problem understanding a passage in a paper: ABSOLUTELY sure about this... But if you think this is wrong, please tell me...

Jan 27, on Mean of iid random variables, problem understanding a passage in a paper: Thank you... I was typing fast and did not realize I used gt and lt...

Oct 9, on Do hashing functions have a probability distribution calculated for their output?: @MJD: I am trying... thanks for your message :)

Jun 14, on Is there any closed-form expression to calculate each element of the inverse of a matrix?: Yeah, by closed-form expression I mean a set of rules that involves elementary operations... For example, cofactors are calculated using minors. If I wanted to replace the cofactor term in the relation with an expression, how would it be... What I'd like to reach is a final formula not involving more steps to calculate the final quantity. Maybe it is not possible, I just want a confirmation of this if possible.

Jun 4, on Eigenvalues of a quasi-stochastic matrix: Ah, yeah, thank you :)
https://indico.cern.ch/event/645835/
# NEUTRINO PLATFORM WEEK

29 January 2018 to 2 February 2018, CERN (Europe/Zurich timezone)

Fundamental questions in neutrino physics such as the existence of leptonic CP violation, the Majorana nature of neutrinos or the origin of neutrino masses and mixings could have essential implications in other areas of high energy physics, from collider physics to indirect searches for new physics, as well as in our understanding of the universe. This workshop aims at bringing together at CERN neutrino experts to discuss recent progress in this area. Although the main focus of the workshop will be neutrino oscillation physics and BSM physics related to the oscillation program, the topics to be discussed include:

• Prospects on measuring leptonic CP violation and the neutrino mass matrix
• Non-standard searches in future neutrino experiments
• Neutrinoless double-beta decay
• Charged lepton flavour violation, lepton EDMs
• Neutrino physics in colliders
• Neutrino masses and theories of flavour
• Neutrinos in cosmology: neutrino DM and DE connections, baryogenesis
• Neutrinos in astrophysics: origin of PeV neutrinos, SuperNova neutrinos, neutrinos and GW

This workshop is part of the CERN Theory Neutrino Platform activities and will be carried out in coordination with the Fermilab Theory group. The workshop will also be held in connection with the DUNE collaboration meeting and will consist of joint and separated sessions which will include selected overview talks and ample time for discussions. Talks are by invitation only; if you are interested in giving a talk please contact the organizers.

Registration: The attendance will be limited to around 80 people. Applications to attend will be open until late December 2017, but the sooner you register the better.

Organisers: G. Barenboim, P. Hernandez, P. Huber, S. Parke, S. Pascoli and T. Schwetz

Venue: CERN, 4/3-006 - TH Conference Room
http://tex.stackexchange.com/questions/13270/a-package-template-using-xkeyval/13275
# A package template using xkeyval?

I would like to write a package offering a number of commands. The package should accept options, and some of these options should be available as command options. Usage should be as follows:

    ...
    \usepackage[optA=val1,optB=val2]{mypackage}
    \begin{document}
    \mycommand[optB=val3]
    ...

It seems as if the xkeyval package can do that, but I am not sure how this should be done exactly.

-

Let me provide a simple (but full) example:

    \ProvidesPackage{myemph}[2011/03/12 v1.0 a test package]
    \providecommand\my@emphstyle{\em}
    % Note that the argument must be expandable,
    % or use xkvltxp package before \documentclass (see manual of xkeyval)
    \RequirePackage{xkeyval}
    \DeclareOptionX{style}{%
      \def\my@emphstyle{\csname my@style@#1\endcsname}}
    % predefined styles
    \providecommand\my@style@default{\em}
    \providecommand\my@style@bold{\bfseries}
    \ProcessOptionsX
    % For simple key-value commands, keyval would suffice
    \define@key{myemph}{code}{%
      \def\my@emphstyle{#1}}
    \define@key{myemph}{style}{%
      \def\my@emphstyle{\csname my@style@#1\endcsname}}
    \newcommand\setemph[1]{%
      \setkeys{myemph}{#1}}
    \renewcommand\emph[1]{%
      {\my@emphstyle #1}}
    \endinput

Test file:

    \documentclass{article}
    \usepackage[style=default]{myemph}
    \begin{document}
    Something \emph{important}
    \setemph{style=bold}
    Something \emph{important}
    \setemph{code=\Large\sffamily}
    Something \emph{important}
    \end{document}

-

I have read this article en.wikibooks.org/wiki/LaTeX/Macros which led me to this post. Can you explain what \providecommand\my@emphstyle{\em} does? I know the command \providecommand but I can't find what the @ means. Also, if I do a regular short document and include just this line in the preamble, my document won't compile. – Adam Feb 4 '14 at 23:19

@Adam: You may read \@ and @ in macro names – Leo Liu Feb 5 '14 at 5:21

@Adam: See also, for example, What do \makeatletter and \makeatother do? – Leo Liu Feb 5 '14 at 5:23

Accepting key-value input can be done using a number of packages, and the general approach is the same for all of them: I covered this in some detail in a TUGboat article. Essentially, there are three things you need to do:

1. Define one or more keys;
2. Tell LaTeX to process package options using these keys;
3. Provide a macro so that the same keys can also be set later in the document (using \setkeys).

In the question, you've mentioned the xkeyval package, with others including kvoptions, pgfkeys (plus pgfopts) and the LaTeX3 keys system l3keys (plus l3keys2e). I have used all of these in the past, and I would favour pgfkeys (if you do not want to use LaTeX3) or the LaTeX3 keys implementation (if you are happy using expl3). The reason is that these two have in my opinion the best overall method for defining keys. (I should add that I wrote most of the LaTeX3 keys system, and this was based initially on the pgfkeys approach.)

As the question asks for an xkeyval approach, I will sketch one out here. First, of course, you'll need to load the package.

    \usepackage{xkeyval}

This also loads the parent keyval package, which provides some of the basic mechanism. To define keys, the basic macro is \define@key:

    \define@key{mypkg}{optA}{<code for optA>} % 'mypkg' is the 'family' for the keys
    \define@key{mypkg}{optB}{<code for optB>}

Within the code, #1 will be the value passed to the key. You can define key types with richer validation (for example Boolean keys) using the various xkeyval macros. As I say, the xkeyval approach is rather dense, and I think 'one question per key type' might be best if you want more information! The second stage is to process the package options.
To do this, in place of \ProcessOptions you use \ProcessOptionsX<mypkg>. This will work through the package options, looking for a defined key for each one and executing the code it finds.

Finally, to define a macro to use the keys after package loading, you need \setkeys:

    \newcommand\mymacro[1]{\setkeys{mypkg}{#1}}

What you should notice here is that key-value package options are just keys that are defined when the \ProcessOptionsX macro is used. So it is possible to define keys only as package options, then disable them by doing \define@key again. It's also possible to define options that are only available after package loading, by simply placing \define@key after \ProcessOptionsX.

-

A small correction: they should be \define@key<mypkg>{optA}{<code for optA>} and \ProcessOptionsX<mypkg> – Leo Liu Mar 12 '11 at 8:01

@Leo: been a while since I used xkeyval: I'll update that. – Joseph Wright Mar 12 '11 at 8:21

@Leo: I'm sure I'm right on \define@key, as the syntax comes from the keyval package! – Joseph Wright Mar 12 '11 at 8:22

sorry I thought it was \DeclareOptionX – Leo Liu Mar 12 '11 at 10:39

An example with some code from the documentation:

    \NeedsTeXFormat{LaTeX2e}
    \ProvidesPackage{mypackage}[12/03/2011]
    \RequirePackage{xkeyval}
    \DeclareOptionX{parindent}[20pt]{\setlength\parindent{#1}}
    \ExecuteOptionsX{parindent=0pt}
    \ProcessOptionsX\relax
    % etc.
    \endinput

\DeclareOptionX is equivalent to (thanks to Ahmed Musa for the correction)

    \define@key{mypackage.sty}{parindent}[20pt]{\setlength\parindent{#1}}

-

You have mixed package and class calls. In your case, \DeclareOptionX is equivalent to \define@key{mypackage.sty}{parindent}[20pt]{\setlength\parindent{#1}}. – Ahmed Musa Aug 10 '12 at 12:50

@AhmedMusa Thanks for the correction. – Alain Matthes Aug 10 '12 at 14:29
https://www.physicsforums.com/threads/geometric-series-geometric-progression.150174/
# Geometric series/geometric progression

1. Jan 5, 2007

### Elec68

I can't figure this out for the life of me: A geometric series exists with the third term of 8 and the sixth term of 128, what is the geometric series?

2. Jan 5, 2007

### Hurkyl (Staff Emeritus)

Have you tried anything at all? What do you know about geometric series?

3. Jan 6, 2007

### HallsofIvy (Staff Emeritus)

In particular, do you know the formula for the nth term of a geometric sequence? Use that formula, knowing that a3 = 8 and a6 = 128, to get two equations in the two parameters you need.
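Following the hint, with the usual convention $a_n = a_1 r^{n-1}$, the two given terms yield
$$a_1 r^{2} = 8, \qquad a_1 r^{5} = 128 \quad\Longrightarrow\quad r^{3} = \frac{128}{8} = 16,$$
so $r = 16^{1/3} = 2^{4/3}$ and $a_1 = 8/r^{2} = 2^{1/3}$, i.e. the progression is $a_n = 2^{1/3}\,(2^{4/3})^{n-1}$.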
https://365go.me/implication-rules-problem-12570/
# Implication rules problem

## What is the rule of implication?

In propositional logic, material implication is a valid rule of replacement that allows a conditional statement to be replaced by a disjunction in which the antecedent is negated. The rule states that "P implies Q" is logically equivalent to "not-P or Q", and that either form can replace the other in logical proofs.

## What are the first 4 rules of inference?

The first two lines are premises. The last is the conclusion. This inference rule is called modus ponens (or the law of detachment).

Rules of Inference (name: rule):

• Simplification: p ∧ q ∴ p
• Conjunction: p, q ∴ p ∧ q
• Resolution: p ∨ q, ¬p ∨ r ∴ q ∨ r

## What are the 9 rules of inference?

Terms in this set (9)

• Modus Ponens (M.P.) -If P then Q. -P. …
• Modus Tollens (M.T.) -If P then Q. …
• Hypothetical Syllogism (H.S.) -If P then Q. …
• Disjunctive Syllogism (D.S.) -P or Q. …
• Conjunction (Conj.) -P. …
• Constructive Dilemma (C.D.) -(If P then Q) and (If R then S) …
• Simplification (Simp.) -P and Q. …
• Absorption (Abs.) -If P then Q.

## What are inference rules and implications?

Introduction. Rules of inference are syntactical transform rules which one can use to infer a conclusion from a premise to create an argument. A set of rules can be used to infer any valid conclusion if it is complete, while never inferring an invalid conclusion, if it is sound.

## What are the two parts of an implication?

In an implication p⇒q, the component p is called the sufficient condition, and the component q is called the necessary condition.

## What is P or not Q equivalent to?

If p is a statement variable, the negation of p is "not p", denoted by ~p. If p is true, then ~p is false. Conjunction: if p and q are statement variables, the conjunction of p and q is "p and q", denoted p ∧ q.

Some standard equivalences:

• Commutative: p ∧ q ≡ q ∧ p and p ∨ q ≡ q ∨ p
• Negations of t and c: ~t ≡ c and ~c ≡ t

## What are the 8 rules of inference?

Review of the 8 Basic Sentential Rules of Inference

• Modus Ponens (MP): p⊃q, p ∴ q
• Modus Tollens (MT): p⊃q, ~q ∴ ~p
• Disjunctive Syllogism (DS): p∨q, ~p ∴ q …
• Simplification (Simp): p.q ∴ p …
• Conjunction (Conj): p, q ∴ …
• Hypothetical Syllogism (HS): p⊃q, q⊃r ∴ …
• Constructive Dilemma (CD): (p⊃q), (r⊃s), p∨r ∴ …

## What are rules of inference explain with example?

Table of Rules of Inference (rule: name):

• Disjunctive Syllogism: P∨Q, ¬P ∴ Q
• Hypothetical Syllogism: P→Q, Q→R ∴ P→R
• Constructive Dilemma: (P→Q)∧(R→S), P∨R ∴ Q∨S
• Destructive Dilemma: (P→Q)∧(R→S), ¬Q∨¬S ∴ ¬P∨¬R

## What is resolution in rules of inference?

Resolution Inference Rules. Resolution is an inference rule (with many variants) that takes two or more parent clauses and soundly infers new clauses. A special case of resolution is when the parent clauses are contradictory, and an empty clause is inferred. Resolution is a general form of modus ponens.
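As a quick sanity check, the material-implication equivalence and two of the inference rules above can be verified mechanically by enumerating truth values; a minimal sketch:

```python
from itertools import product

def implies(p, q):
    # Truth-functional conditional: false only when p is true and q is false
    return not (p and not q)

rows = list(product([False, True], repeat=2))

# Material implication: (p -> q) is logically equivalent to (not p) or q
assert all(implies(p, q) == ((not p) or q) for p, q in rows)

# Modus ponens is truth-preserving: whenever p and (p -> q) hold, q holds
assert all(q for p, q in rows if p and implies(p, q))

# Disjunctive syllogism: whenever (p or q) and (not p) hold, q holds
assert all(q for p, q in rows if (p or q) and not p)

print("all checks passed")
```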
http://jdh.hamkins.org/tag/alfredo-roque-freire/
# Bi-interpretation in set theory, Oberwolfach Set Theory Conference, January 2022 This was a talk for the 2022 Set Theory Conference at Oberwolfach, which was a hybrid of in-person talks and online talks on account of the Covid pandemic. I gave my talk online 10 January 2022. Abstract: Set theory exhibits a truly robust mutual interpretability phenomenon: in any model of one set theory we can define models of diverse other set theories and vice versa. In any model of ZFC, we can define models of ZFC + GCH and also of ZFC + ¬CH and so on in hundreds of cases. And yet, it turns out, in no instance do these mutual interpretations rise to the level of bi-interpretation. Ali Enayat proved that distinct theories extending ZF are never bi-interpretable, and models of ZF are bi-interpretable only when they are isomorphic. So there is no nontrivial bi-interpretation phenomenon in set theory at the level of ZF or above.  Nevertheless, for natural weaker set theories, we prove, including ZFC- without power set and Zermelo set theory Z, there are nontrivial instances of bi-interpretation. Specifically, there are well-founded models of ZFC- that are bi-interpretable, but not isomorphic—even $\langle H_{\omega_1},\in\rangle$ and $\langle H_{\omega_2},\in\rangle$ can be bi-interpretable—and there are distinct bi-interpretable theories extending ZFC-. Similarly, using a construction of Mathias, we prove that every model of ZF is bi-interpretable with a model of Zermelo set theory in which the replacement axiom fails. This is joint work with Alfredo Roque Freire. # Bi-interpretation in weak set theories Abstract. In contrast to the robust mutual interpretability phenomenon in set theory, Ali Enayat proved that bi-interpretation is absent: distinct theories extending ZF are never bi-interpretable and models of ZF are bi-interpretable only when they are isomorphic. Nevertheless, for natural weaker set theories, we prove, including Zermelo-Fraenkel set theory $\newcommand\ZFCm{\text{ZFC}^-}\ZFCm$ without power set and Zermelo set theory Z, there are nontrivial instances of bi-interpretation. Specifically, there are well-founded models of ZFC- that are bi-interpretable, but not isomorphic — even $\langle H_{\omega_1},\in\rangle$ and $\langle H_{\omega_2},\in\rangle$ can be bi-interpretable — and there are distinct bi-interpretable theories extending ZFC-. Similarly, using a construction of Mathias, we prove that every model of ZF is bi-interpretable with a model of Zermelo set theory in which the replacement axiom fails. Set theory exhibits a robust mutual interpretability phenomenon: in a given model of set theory, we can define diverse other interpreted models of set theory. In any model of Zermelo-Fraenkel ZF set theory, for example, we can define an interpreted model of ZFC + GCH, via the constructible universe, as well as definable interpreted models of ZF + ¬AC, of ZFC + MA + ¬CH, of ZFC + $\mathfrak{b}<\mathfrak{d}$, and so on for hundreds of other theories. For these latter theories, set theorists often use forcing to construct outer models of the given model; but nevertheless the Boolean ultrapower method provides definable interpreted models of these theories inside the original model (explained in theorem 7). Similarly, in models of ZFC with large cardinals, one can define fine-structural canonical inner models with large cardinals and models of ZF satisfying various determinacy principles, and vice versa. In this way, set theory exhibits an abundance of natural mutually interpretable theories. 
Do these instances of mutual interpretation fulfill the more vigorous conception of bi-interpretation? Two models or theories are mutually interpretable when merely each is interpreted in the other, whereas bi-interpretation requires that the interpretations are invertible in a sense after iteration, so that if one should interpret one model or theory in the other and then re-interpret the first theory inside that, then the resulting model should be definably isomorphic to the original universe (precise definitions in sections 2 and 3).

The interpretations mentioned above are not bi-interpretations, for if we start in a model of ZFC+¬CH and then go to L in order to interpret a model of ZFC+GCH, then we’ve already discarded too much set-theoretic information to expect that we could get a copy of our original model back by interpreting inside L. This problem is inherent, in light of the following theorem of Ali Enayat, showing that indeed there is no nontrivial bi-interpretation phenomenon to be found amongst the set-theoretic models and theories satisfying ZF. In interpretation, one must inevitably discard set-theoretic information.

Theorem. (Enayat 2016)

1. ZF is solid: no two models of ZF are bi-interpretable.
2. ZF is tight: no two distinct theories extending ZF are bi-interpretable.

The proofs of these theorems, provided in section 6, seem to use the full strength of ZF, and Enayat had consequently inquired whether the solidity/tightness phenomenon somehow required the strength of ZF set theory. In this paper, we shall find support for that conjecture by establishing nontrivial instances of bi-interpretation in various natural weak set theories, including Zermelo-Fraenkel theory $\ZFCm$, without the power set axiom, and Zermelo set theory Z, without the replacement axiom.

Main Theorems

1. $\ZFCm$ is not solid: there are well-founded models of $\ZFCm$ that are bi-interpretable, but not isomorphic.
2. Indeed, it is relatively consistent with ZFC that $\langle H_{\omega_1},\in\rangle$ and $\langle H_{\omega_2},\in\rangle$ are bi-interpretable.
3. $\ZFCm$ is not tight: there are distinct bi-interpretable extensions of $\ZFCm$.
4. Z is not solid: there are well-founded models of Z that are bi-interpretable, but not isomorphic.
5. Indeed, every model of ZF is bi-interpretable with a transitive inner model of Z in which the replacement axiom fails.
6. Z is not tight: there are distinct bi-interpretable extensions of Z.

These claims are made and proved in theorems 20, 17, 21 and 22. We shall in addition prove the following theorems on this theme:

7. Well-founded models of ZF set theory are never mutually interpretable.
8. The Väänänen internal categoricity theorem does not hold for $\ZFCm$, not even for well-founded models.

These are theorems 14 and 16. Statement (8) concerns the existence of a model $\langle M,\in,\bar\in\rangle$ satisfying $\ZFCm(\in,\bar\in)$, meaning $\ZFCm$ in the common language with both predicates, using either $\in$ or $\bar\in$ as the membership relation, such that $\langle M,\in\rangle$ and $\langle M,\bar\in\rangle$ are not isomorphic.
The well-ordered replacement axiom is the scheme asserting that if $I$ is well-ordered and every $i\in I$ has unique $y_i$ satisfying a property $\phi(i,y_i)$, then $\{y_i\mid i\in I\}$ is a set. In other words, the image of a well-ordered set under a first-order definable class function is a set. Alfredo had introduced the theory Zermelo + foundation + well-ordered replacement, because he had noticed that it was this fragment of ZF that sufficed for an argument we were mounting in a joint project on bi-interpretation. At first, I had found the well-ordered replacement theory a bit awkward, because one can only apply the replacement axiom with well-orderable sets, and without the axiom of choice, it seemed that there were not enough of these to make ordinary set-theoretic arguments possible. But now we know that in fact, the theory is equivalent to ZF. Theorem. The axiom of well-ordered replacement is equivalent to full replacement over Zermelo set theory with foundation. $$\text{ZF}\qquad = \qquad\text{Z} + \text{foundation} + \text{well-ordered replacement}$$ Proof. Assume Zermelo set theory with foundation and well-ordered replacement. Well-ordered replacement is sufficient to prove that transfinite recursion along any well-order works as expected. One proves that every initial segment of the order admits a unique partial solution of the recursion up to that length, using well-ordered replacement to put them together at limits and overall. Applying this, it follows that every set has a transitive closure, by iteratively defining $\cup^n x$ and taking the union. And once one has transitive closures, it follows that the foundation axiom can be taken either as the axiom of regularity or as the $\in$-induction scheme, since for any property $\phi$, if there is a set $x$ with $\neg\phi(x)$, then let $A$ be the set of elements $a$ in the transitive closure of $\{x\}$ with $\neg\phi(a)$; an $\in$-minimal element of $A$ is a set $a$ with $\neg\phi(a)$, but $\phi(b)$ for all $b\in a$. Another application of transfinite recursion shows that the $V_\alpha$ hierarchy exists. Further, we claim that every set $x$ appears in the $V_\alpha$ hierarchy. This is not immediate and requires careful proof. We shall argue by $\in$-induction using foundation. Assume that every element $y\in x$ appears in some $V_\alpha$. Let $\alpha_y$ be least with $y\in V_{\alpha_y}$. The problem is that if $x$ is not well-orderable, we cannot seem to collect these various $\alpha_y$ into a set. Perhaps they are unbounded in the ordinals? No, they are not, by the following argument. Define an equivalence relation $y\sim y’$ iff $\alpha_y=\alpha_{y’}$. It follows that the quotient $x/\sim$ is well-orderable, and thus we can apply well-ordered replacement in order to know that $\{\alpha_y\mid y\in x\}$ exists as a set. The union of this set is an ordinal $\alpha$ with $x\subseteq V_\alpha$ and so $x\in V_{\alpha+1}$. So by $\in$-induction, every set appears in some $V_\alpha$. The argument establishes the principle: for any set $x$ and any definable class function $F:x\to\text{Ord}$, the image $F\mathrel{\text{”}}x$ is a set. One proves this by defining an equivalence relation $y\sim y’\leftrightarrow F(y)=F(y’)$ and observing that $x/\sim$ is well-orderable. We can now establish the collection axiom, using a similar idea. Suppose that $x$ is a set and every $y\in x$ has a witness $z$ with $\phi(y,z)$. 
Every such $z$ appears in some $V_\alpha$, and so we can map each $y\in x$ to the smallest $\alpha_y$ such that there is some $z\in V_{\alpha_y}$ with $\phi(y,z)$. By the observation of the previous paragraph, the set of $\alpha_y$ exists and so there is an ordinal $\alpha$ larger than all of them, and thus $V_\alpha$ serves as a collecting set for $x$ and $\phi$, verifying this instance of collection. From collection and separation, we can deduce the replacement axiom. $\Box$

I’ve realized that this allows me to improve an argument I had made some time ago, concerning Transfinite recursion as a fundamental principle. In that argument, I had proved that ZC + foundation + transfinite recursion is equivalent to ZFC, essentially by showing that the principle of transfinite recursion implies replacement for well-ordered sets. The new realization here is that we do not need the axiom of choice in that argument, since transfinite recursion implies well-ordered replacement, which gives us full replacement by the argument above.

Corollary. The principle of transfinite recursion is equivalent to the replacement axiom over Zermelo set theory with foundation. $$\text{ZF}\qquad = \qquad\text{Z} + \text{foundation} + \text{transfinite recursion}$$ There is no need for the axiom of choice.

# Different set theories are never bi-interpretable

I was fascinated recently to discover something I hadn’t realized about relative interpretability in set theory, and I’d like to share it here. Namely,

Different set theories extending ZF are never bi-interpretable!

For example, ZF and ZFC are not bi-interpretable, and neither are ZFC and ZFC+CH, nor ZFC and ZFC+$\neg$CH, despite the fact that all these theories are equiconsistent. The basic fact is that there are no nontrivial instances of bi-interpretation amongst the models of ZF set theory. This is surprising, and could even be seen as shocking, in light of the philosophical remarks one sometimes hears asserted in the philosophy of set theory that what is going on with the various set-theoretic translations from large cardinals to determinacy to inner model theory, to mention a central example, is that we can interpret between these theories and consequently it doesn’t much matter which context is taken as fundamental, since we can translate from one context to another without loss. The bi-interpretation result shows that these interpretations do not and cannot rise to the level of bi-interpretations of theories — the most robust form of mutual relative interpretability — and consequently, the translations inevitably must involve a loss of information.

To be sure, set theorists classify the various set-theoretic principles and theories into a hierarchy, often organized by consistency strength or by other notions of interpretative power, using forcing or definable inner models. From any model of ZF, for example, we can construct a model of ZFC, and from any model of ZFC, we can construct models of ZFC+CH or ZFC+$\neg$CH and so on. From models with sufficient large cardinals we can construct models with determinacy or inner-model-theoretic fine structure and vice versa. And while we have relative consistency results and equiconsistencies and even mutual interpretations, we will have no nontrivial bi-interpretations.

(I had proved the theorem a few weeks ago in joint work with Alfredo Roque Freire, who is visiting me in New York this year.
We subsequently learned, however, that this was a rediscovery of results that have evidently been proved independently by various authors. Albert Visser proves the case of PA in his paper, “Categories of theories and interpretations,” Logic in Tehran, 284–341, Lect. Notes Log., 26, Assoc. Symbol. Logic, La Jolla, CA, 2006, (pdf, see pp. 52-55). Ali Enayat gave a nice model-theoretic argument for showing specifically that ZF and ZFC are not bi-interpretable, using the fact that ZFC models can have no involutions in their automorphism groups, but ZF models can; and he proved the general version of the theorem, for ZF, second-order arithmetic $Z_2$ and second-order set theory KM in his 2016 article, A. Enayat, “Variations on a Visserian theme,” in Liber Amicorum Alberti : a tribute to Albert Visser / Jan van Eijck, Rosalie Iemhoff and Joost J. Joosten (eds.) Pages, 99-110. ISBN, 978-1848902046. College Publications, London. The ZF version was apparently also observed independently by Harvey Friedman, Visser and Fedor Pakhomov.) Meanwhile, let me explain our argument. Recall from model theory that one theory $S$ is interpreted in another theory $T$, if in any model of the latter theory $M\models T$, we can define (and uniformly so in any such model) a certain domain $N\subset M^k$ and relations and functions on that domain so as to make $N$ a model of $S$. For example, the theory of algebraically closed fields of characteristic zero is interpreted in the theory of real-closed fields, since in any real-closed field $R$, we can consider pairs $(a,b)$, thinking of them as $a+bi$, and define addition and multiplication on those pairs in such a way so as to construct an algebraically closed field of characteristic zero. Two theories are thus mutually interpretable, if each of them is interpretable in the other. Such theories are necessarily equiconsistent, since from any model of one of them we can produce a model of the other. Note that mutual interpretability, however, does not insist that the two translations are inverse to each other, even up to isomorphism. One can start with a model of the first theory $M\models T$ and define the interpreted model $N\models S$ of the second theory, which has a subsequent model of the first theory again $\bar M\models T$ inside it. But the definition does not insist on any particular connection between $M$ and $\bar M$, and these models need not be isomorphic nor even elementarily equivalent in general. By addressing this, one arrives at a stronger and more robust form of mutual interpretability. Namely, two theories $S$ and $T$ are bi-interpretable, if they are mutually interpretable in such a way that the models can see that the interpretations are inverse. That is, for any model $M$ of the theory $T$, if one defines the interpreted model $N\models S$ inside it, and then defines the interpreted model $\bar M$ of $T$ inside $N$, then $M$ is isomorphic to $\bar M$ by a definable isomorphism in $M$, and uniformly so (and the same with the theories in the other direction). Thus, every model of one of the theories can see exactly how it itself arises definably in the interpreted model of the other theory. 
For example, the theory of linear orders $\leq$ is bi-interpretable with the theory of strict linear order $<$, since from any linear order $\leq$ we can define the corresponding strict linear order $<$ on the same domain, and from any strict linear order $<$ we can define the corresponding linear order $\leq$, and doing it twice brings us back again to the same order. For a richer example, the theory PA is bi-interpretable with the finite set theory $\text{ZF}^{\neg\infty}$, where one drops the infinity axiom from ZF and replaces it with the negation of infinity, and where one has the $\in$-induction scheme in place of the foundation axiom. The interpretation is via the Ackerman encoding of hereditary finite sets in arithmetic, so that $n\mathrel{E} m$ just in case the $n^{th}$ binary digit of $m$ is $1$. If one starts with the standard model $\mathbb{N}$, then the resulting structure $\langle\mathbb{N},E\rangle$ is isomorphic to the set $\langle\text{HF},\in\rangle$ of hereditarily finite sets. More generally, by carrying out the Ackermann encoding in any model of PA, one thereby defines a model of $\text{ZF}^{\neg\infty}$, whose natural numbers are isomorphic to the original model of PA, and these translations make a bi-interpretation. We are now ready to prove that this bi-interpretation situation does not occur with different set theories extending ZF. Theorem. Distinct set theories extending ZF are never bi-interpretable. Indeed, there is not a single model-theoretic instance of bi-interpretation occurring with models of different set theories extending ZF. Proof. I mean “distinct” here in the sense that the two theories are not logically equivalent; they do not have all the same theorems. Suppose that we have a bi-interpretation instance of the theories $S$ and $T$ extending ZF. That is, suppose we have a model $\langle M,\in\rangle\models T$ of the one theory, and inside $M$, we can define an interpreted model of the other theory $\langle N,\in^N\rangle\models S$, so the domain of $N$ is a definable class in $M$ and the membership relation $\in^N$ is a definable relation on that class in $M$; and furthermore, inside $\langle N,\in^N\rangle$, we have a definable structure $\langle\bar M,\in^{\bar M}\rangle$ which is a model of $T$ again and isomorphic to $\langle M,\in^M\rangle$ by an isomorphism that is definable in $\langle M,\in^M\rangle$. So $M$ can define the map $a\mapsto \bar a$ that forms an isomorphism of $\langle M,\in^M\rangle$ with $\langle \bar M,\in^{\bar M}\rangle$. Our argument will work whether we allow parameters in any of these definitions or not. I claim that $N$ must think the ordinals of $\bar M$ are well-founded, for otherwise it would have some bounded cut $A$ in the ordinals of $\bar M$ with no least upper bound, and this set $A$ when pulled back pointwise by the isomorphism of $M$ with $\bar M$ would mean that $M$ has a cut in its own ordinals with no least upper bound; but this cannot happen in ZF. If the ordinals of $N$ and $\bar M$ are isomorphic in $N$, then all three models have isomorphic ordinals in $M$, and in this case, $\langle M,\in^M\rangle$ thinks that $\langle N,\in^N\rangle$ is a well-founded extensional relation of rank $\text{Ord}$. Such a relation must be set-like (since there can be no least instance where the predecessors form a proper class), and so $M$ can perform the Mostowski collapse of $\in^N$, thereby realizing $N$ as a transitive class $N\subseteq M$ with $\in^N=\in^M\upharpoonright N$. 
Similarly, by collapsing we may assume $\bar M\subseteq N$ and $\in^{\bar M}=\in^M\upharpoonright\bar M$. So the situation consists of inner models $\bar M\subseteq N\subseteq M$ and $\langle \bar M,\in^M\rangle$ is isomorphic to $\langle M,\in^M\rangle$ in $M$. This is impossible unless all three models are identical, since a simple $\in^M$-induction shows that $\pi(y)=y$ for all $y$, because if this is true for the elements of $y$, then $\pi(y)=\{\pi(x)\mid x\in y\}=\{x\mid x\in y\}=y$. So $\bar M=N=M$ and so $N$ and $M$ satisfy the same theory, contrary to assumption. If the ordinals of $\bar M$ are isomorphic to a proper initial segment of the ordinals of $N$, then a similar Mostowski collapse argument would show that $\langle\bar M,\in^{\bar M}\rangle$ is isomorphic in $N$ to a transitive set in $N$. Since this structure in $N$ would have a truth predicate in $N$, we would be able to pull this back via the isomorphism to define (from parameters) a truth predicate for $M$ in $M$, contrary to Tarski’s theorem on the non-definability of truth. The remaining case occurs when the ordinals of $N$ are isomorphic in $N$ to an initial segment of the ordinals of $\bar M$. But this would mean that from the perspective of $M$, the model $\langle N,\in^N\rangle$ has some ordinal rank height, which would mean by the Mostowski collapse argument that $M$ thinks $\langle N,\in^N\rangle$ is isomorphic to a transitive set. But this contradicts the fact that $M$ has an injection of $M$ into $N$. $\Box$ It follows that although ZF and ZFC are equiconsistent, they are not bi-interpretable. Similarly, ZFC and ZFC+CH and ZFC+$\neg$CH are equiconsistent, but no pair of them is bi-interpretable. And again with all the various equiconsistency results concerning large cardinals. A similar argument works with PA to show that different extensions of PA are never bi-interpretable.
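As a side note, the Ackermann encoding used above in the bi-interpretation of PA with finite set theory is easy to experiment with directly; a minimal sketch of the membership relation and of decoding a natural number into a hereditarily finite set:

```python
def is_member(n, m):
    # n E m holds exactly when the n-th binary digit of m is 1 (Ackermann coding)
    return (m >> n) & 1 == 1

def decode(m):
    # Recover the hereditarily finite set coded by the natural number m
    return frozenset(decode(n) for n in range(m.bit_length()) if is_member(n, m))

# decode(0) is the empty set; decode(3) is {emptyset, {emptyset}}, i.e. the ordinal 2
```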
http://math.stackexchange.com/questions/115316/question-on-induction-proof-about-direct-sums-of-irreducible-submodules
# Question on induction proof (about direct sums of irreducible submodules) Let $V$ be an $L$-module. I want to show that $V$ is a direct sum of irreducible $L$-submodules if each $L$-submodule of $V$ possesses a complement. I want to show this via induction on the dimension of $V$. Do I start with $\dim V=1$ or $\dim V=2$ for my base case? - Surely you need to assume "finite dimensional" somewhere... The proof must include the case $\dim V = 1$, though in that case the claim is trivial. Whether you need to do the case $\dim V = 2$ separately (as a special case) or not will depend on the precise argument in your inductive step; sometimes the inductive step requires the $n=2$ case to be already established, which is why it is proven separately. Sometimes it doesn't. – Arturo Magidin Mar 1 '12 at 17:28 @Arturo, thanks for the edit! I will incorporate this style next time. – Edison Mar 1 '12 at 17:28 A $1$-dimensional module can be viewed as a trivial sum of irreducible modules. – Joe Johnson 126 Mar 1 '12 at 17:30
https://www.physicsforums.com/threads/uniformly-polarized-disk-on-a-conducting-plane-e-field.538319/
# Uniformly Polarized disk on a conducting plane (E-Field)

## Homework Statement

A uniformly polarized dielectric disk surrounded by air is lying on a conducting plane, as shown in the figure. The polarization vector in the disk is $$\vec{P} = P \hat{k},$$ the disk radius is a, and the thickness is d. Calculate the electric field intensity vector along the disk axis normal to the conducting plane (z-axis).

## The Attempt at a Solution

See the second figure attached for their solution and a picture of the problem, and the first figure for my attempt. Are our answers the same? I can't seem to get it exactly in the form they have, but it looks relatively close. Can someone confirm? Is my answer equivalent to theirs? If not, what did I do wrong?

#### Attachments

Two figures: the book's solution with the problem picture, and the poster's attempt.

Reply from lightgrav (Homework Helper): your E1(z) is ok, your E2(z) is ok. Your re-writing of them, as they are added, makes them seem more complicated, rather than terms canceling (to simplify).
https://arxiv.org/abs/1204.4526
cs.DS

# Title: A Tight Combinatorial Algorithm for Submodular Maximization Subject to a Matroid Constraint

Abstract: We present an optimal, combinatorial 1-1/e approximation algorithm for monotone submodular optimization over a matroid constraint. Compared to the continuous greedy algorithm (Calinescu, Chekuri, Pal and Vondrak, 2008), our algorithm is extremely simple and requires no rounding. It consists of the greedy algorithm followed by local search. Both phases are run not on the actual objective function, but on a related non-oblivious potential function, which is also monotone submodular. Our algorithm runs in randomized time O(n^8 u), where n is the rank of the given matroid and u is the size of its ground set. We additionally obtain a 1-1/e-eps approximation algorithm running in randomized time O(eps^-3 n^4 u). For matroids in which n = o(u), this improves on the runtime of the continuous greedy algorithm. The improvement is due primarily to the time required by the pipage rounding phase, which we avoid altogether. Furthermore, the independence of our algorithm from pipage rounding techniques suggests that our general approach may be helpful in contexts such as monotone submodular maximization subject to multiple matroid constraints.

Our approach generalizes to the case where the monotone submodular function has restricted curvature. For any curvature c, we adapt our algorithm to produce a (1-e^-c)/c approximation. This result complements results of Vondrak (2008), who has shown that the continuous greedy algorithm produces a (1-e^-c)/c approximation when the objective function has curvature c. He has also proved that achieving any better approximation ratio is impossible in the value oracle model.

Subjects: Data Structures and Algorithms (cs.DS)
MSC classes: 68W25
ACM classes: F.2.2
Cite as: arXiv:1204.4526 [cs.DS] (or arXiv:1204.4526v4 [cs.DS] for this version)

## Submission history

From: Justin Ward
[v1] Fri, 20 Apr 2012 03:42:03 GMT (34kb)
[v2] Sun, 1 Jul 2012 06:39:48 GMT (35kb)
[v3] Wed, 16 Oct 2013 17:56:02 GMT (31kb)
[v4] Tue, 19 Nov 2013 17:01:33 GMT (32kb)
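For orientation, a minimal sketch of a plain matroid-constrained greedy phase, assuming value-oracle access to f and an independence oracle for the matroid; the paper's algorithm additionally runs local search and works on a non-oblivious potential function rather than f itself, which is not shown here:

```python
def greedy_matroid(ground_set, f, is_independent):
    """Greedy maximization of a monotone submodular set function f subject to a
    matroid constraint. ground_set: a set of elements; f: maps a frozenset to a
    float; is_independent: returns True iff a frozenset is independent."""
    solution = frozenset()
    while True:
        best_gain, best_elem = 0.0, None
        for e in ground_set - solution:
            if is_independent(solution | {e}):
                gain = f(solution | {e}) - f(solution)
                if gain > best_gain:
                    best_gain, best_elem = gain, e
        if best_elem is None:
            return solution
        solution = solution | {best_elem}
```

With a uniform matroid, for instance is_independent = lambda S: len(S) <= k, this reduces to the classic cardinality-constrained greedy algorithm.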
https://www.nature.com/articles/s41524-021-00525-5
# Intersystem crossing and exciton–defect coupling of spin defects in hexagonal boron nitride

## Abstract

Despite the recognition of two-dimensional (2D) systems as emerging and scalable host materials of single-photon emitters or spin qubits, the uncontrolled, and undetermined chemical nature of these quantum defects has been a roadblock to further development. Leveraging the design of extrinsic defects can circumvent these persistent issues and provide an ultimate solution. Here, we established a complete theoretical framework to accurately and systematically design quantum defects in wide-bandgap 2D systems. With this approach, essential static and dynamical properties are equally considered for spin qubit discovery. In particular, many-body interactions such as defect–exciton couplings are vital for describing excited state properties of defects in ultrathin 2D systems. Meanwhile, nonradiative processes such as phonon-assisted decay and intersystem crossing rates require careful evaluation, which competes together with radiative processes. From a thorough screening of defects based on first-principles calculations, we identify promising single-photon emitters such as SiVV and spin qubits such as TiVV and MoVV in hexagonal boron nitride. This work provided a complete first-principles theoretical framework for defect design in 2D materials.

## Introduction

Optically addressable defect-based qubits offer a distinct advantage in their ability to operate with high fidelity under room temperature conditions1,2. Despite the tremendous progress made in years of research, systems that exist today remain inadequate for real-world applications. The identification of stable single-photon emitters (SPEs) in 2D materials has opened up a new playground for novel quantum phenomena and quantum technology applications, with improved scalability in device fabrication and leverage in doping spatial control, qubit entanglement, and qubit tuning3,4. In particular, hexagonal boron nitride (h-BN) has demonstrated that it can host stable defect-based SPEs5,6,7,8 and spin triplet defects9,10. However, persistent challenges must be resolved before 2D quantum defects can become the most promising quantum information platform. These challenges include the undetermined chemical nature of existing SPEs7,11, difficulties in the controlled generation of desired spin defects, and scarcity of reliable theoretical methods which can accurately predict critical physical parameters for defects in 2D materials due to their complex many-body interactions. To circumvent these challenges, the design of promising spin defects by high-integrity theoretical methods is urgently needed. Introducing extrinsic defects can be unambiguously produced and controlled, which fundamentally solves the current issues of the undetermined chemical nature of existing SPEs in 2D systems. As highlighted by refs. 2,12, promising spin qubit candidates should satisfy several essential criteria: deep defect levels, stable high spin states, large zero-field splitting (ZFS), efficient radiative recombination, high intersystem crossing (ISC) rates, and long spin coherence and relaxation time.
Using these criteria for theoretical screening can effectively identify promising candidates but requires theoretical development of first-principles methods, significantly beyond the static and mean-field level. For example, accurate defect charge transition levels in 2D materials necessitates careful treatment of defect charge corrections for removal of spurious charge interactions13,14,15 and electron correlations for non-neutral excitation, e.g. from GW approximations15,16 or Koopmans-compliant hybrid functionals17,18,19,20. Optical excitation and exciton radiative lifetime must account for defect–exciton interactions, e.g. by solving the Bethe–Salpeter equation (BSE), due to large exciton-binding energies in 2D systems21,22. Spin-phonon relaxation time calls for a general theoretical approach to treat complex symmetry and state degeneracy of defective systems, along the line of recent development based on ab-initio density matrix approach23. Spin coherence time due to the nuclei spin and electron spin coupling can be accurately predicted for defects in solids by combining first-principles and spin Hamiltonian approaches24,25. In the end, nonradiative processes, such as phonon-assisted nonradiative recombination, have been recently computed with first-principles electron–phonon couplings for defects in h-BN26, and resulted in less competitive rates than corresponding radiative processes. However, the spin–orbit-induced ISC as the key process for pure spin state initialization during qubit operation has not been investigated for spin defects in 2D materials from first-principles in-depth. This work has developed a complete theoretical framework which enables the design of spin defects based on the critical physical parameters mentioned above and highlighted in Fig. 1a. We employed state-of-the-art first-principles methods, focusing on many-body interaction such as defect–exciton couplings and dynamical processes through radiative and nonradiative recombinations. We developed a methodology to compute nonradiative ISC rates with an explicit overlap of phonon wavefunctions beyond current implementations in the Huang–Rhys approximation27. We showcase the discovery of transition metal complexes such as Ti and Mo with a vacancy (TiVV and MoVV) to be spin triplet defects in h-BN, and the discovery of SiVV to be a bright SPE in h-BN. We predict TiVV and MoVV are stable triplet defects in h-BN (which is rare considering the only known such defect is $${\,\text{V}}_{\text{B}\,}^{-}$$28) with large ZFS and spin-selective decay, which will set 2D quantum defects at a competitive stage with NV center in diamond for quantum technology applications. ## Results In the development of spin qubits in 3D systems (e.g. diamond, SiC, and AlN), defects beyond sp dangling bonds from N or C have been explored. In particular, large metal ions plus anion vacancy in AlN and SiC were found to have potential as qubits due to triplet ground states and large ZFS29. Similar defects may be explored in 2D materials30, such as the systems shown in Fig. 1b–d. This opens up the possibility of overcoming the current limitations of the uncontrolled and undetermined chemical nature of 2D defects, and unsatisfactory spin-dependent properties of existing defects. 
In the following, we will start the computational screening of spin defects with static properties of the ground state (spin state, defect formation energy, and ZFS) and the excited state (optical spectra), then we will discuss dynamical properties including radiative and nonradiative (phonon-assisted spin conserving and spin-flip) processes, as the flow chart shown in Fig. 1a. We will summarize the complete defect discovery procedure and discuss the outlook at the end. ### Screening triplet spin defects in h-BN To identify stable qubits in h-BN, we start by screening neutral dopant-vacancy defects for a triplet ground state based on total energy calculations of different spin states at both semi-local Perdew–Burke–Ernzerhof (PBE) and hybrid functional levels. We considered the dopant substitution at a divacancy site in h-BN (Fig. 1b) for four different elemental groups. The results of this procedure are summarized in Supplementary Table 1 and Note 1. With additional supercell tests in Supplementary Table 2, our screening process finally yielded that only MoVV and TiVV have a stable triplet ground state. We further confirmed the thermodynamic charge stability of these defect candidates via calculations of defect formation energy and charge transition levels. As shown in Supplementary Fig. 1, both TiVV and MoVV defects have a stable neutral (q = 0) region for a large range of Fermi levels (εF), from 2.2 to 5.6 eV for MoVV and from 2.9 to 6.1 eV for TiVV. These neutral states will be stable in intrinsic h-BN systems or with weak p-type or n-type doping (see Supplementary Note 2). With a confirmed triplet ground state, we next computed the two defects’ ZFS. A large ZFS is necessary to isolate the ms = ± 1 and ms = 0 levels even at zero magnetic field allowing for controllable preparation of the spin qubit. Here we computed the contribution of spin–spin interaction to ZFS by implementing the plane-wave-based method developed by Rayson et al. (see the “Methods” section for details of implementation and benchmark on NV center in diamond)31. Meanwhile, the spin–orbit contribution to ZFS was computed with the ORCA code. We find that both defects have sizable ZFS including both spin–spin and spin–orbit contributions (axial D parameter) of 19.4 GHz for TiVV and 5.5 GHz for MoVV, highlighting the potential for the basis of a spin qubit with optically detected magnetic resonance (ODMR) (see Supplementary Note 3 and Fig. 2). They are notably larger than previously reported values for ZFS of other known spin defects in solids29, although at a reasonable range considering large ZFS values (up to 1000 GHz) in transition-metal complex molecules32. ### Screening SPE defects in h-BN To identify SPEs in h-BN, we considered a separate screening process of these dopant-vacancy defects, targeting those with desirable optical properties. Namely, an SPE efficiently emits a single photon at a time at room temperature. Physically this corresponds to identifying defects that have a single bright intra-defect transition with a high quantum efficiency (i.e. much faster radiative rates than nonradiative ones), for example current SPEs in h-BN have radiative lifetimes ~1–10 ns and quantum efficiency over 50%33,34. Using these criteria we screened the defects by computing their optical transitions and radiative lifetime at random phase approximation (RPA) (see Supplementary Note 4, Fig. 3, and Table 3). 
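For orientation, extracting the axial and rhombic parameters from a computed ZFS tensor amounts to diagonalizing it and applying $D = \frac{3}{2}D_{zz}$ and $E = (D_{yy}-D_{xx})/2$, the convention written out later in the Methods. The snippet below is a minimal sketch of that final step only; the example tensor is made up for illustration and is not a result of this work.

```python
import numpy as np

def zfs_parameters(D_tensor):
    """Axial D and rhombic E from a (traceless, symmetric) 3x3 ZFS tensor.
    The principal z axis is taken along the eigenvector with the largest
    |eigenvalue|, i.e. eigenvalues are ordered so that |D_xx| <= |D_yy| <= |D_zz|."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(D_tensor, dtype=float))
    Dxx, Dyy, Dzz = sorted(eigenvalues, key=abs)
    return 1.5 * Dzz, (Dyy - Dxx) / 2.0

# hypothetical tensor in GHz, only to show the call signature
D, E = zfs_parameters(np.diag([-4.0, -5.0, 9.0]))
print(D, E)   # 13.5 -0.5
```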
This offers a cost-efficient first-pass to identify defects with bright transition and short radiative lifetime as potential candidates for SPEs. From this procedure, we found that CVV(T), SiVV(S), SiVV(T), SVV(S), GeVV(S), and $${{\rm{Sn}}}_{{\rm{VV}}}$$(S) could be promising SPE defects ((T) denotes triplet; (S) denotes singlet), with a bright intra-defect transition and radiative lifetimes on the order of 10 ns, at the same order of magnitude of the SPEs’ lifetime observed experimentally34. Among these, SiVV(S) has the shortest radiative lifetime, and in addition, Si has recently been experimentally detected in h-BN with samples grown in chemical vapor deposition (the ground state of SiVV is also singlet)35. Hence we will focus on SiVV as an SPE candidate in the following sections as we compute optical and electronic properties at higher level of theory from many-body perturbation theory including accurate electron correlation and electron–hole interactions. Note that CVV (commonly denoted CBVN) has also been suggested to be an SPE source in h-BN36. The single-particle energy levels of TiVV, MoVV, and SiVV are shown in Fig. 2. These levels are computed by many-body perturbation theory (G0W0) for accurate electron correlation, with hybrid functional (PBE0(α), α = 0.41 based on the Koopmans’ condition17) as the starting point to address self-interaction errors for 3d transition metal defects37,38. For example, we find that both the wavefunction distribution and ordering of defect states can differ between PBE and PBE0(α) (see Supplementary Figs. 46). The convergence test of G0W0 can been found in Supplementary Fig. 7, Note 5, and Table 4. Importantly, the single particle levels in Fig. 2 show there are well-localized occupied and unoccupied defect states in the h-BN bandgap, which yield the potential for intra-defect transitions. Obtaining reliable optical properties of these two-dimensional materials necessitates solving the BSE to include excitonic effects due to their strong defect–exciton coupling, which is not included in RPA calculations (see comparison in Supplementary Fig. 8 and Table 5)39,40,41,42. The BSE optical spectra are shown for each defect in Fig. 3a–c (the related convergence tests can be found in Supplementary Figs. 9 and 10). In each case, we find an allowed intra-defect optical transition (corresponding to the lowest energy peak as labeled in Fig. 3a–c, and red arrows in Fig. 2). From the optical spectra we can compute their radiative lifetimes as detailed in the “Methods” section on “Radiative recombination”. We find the transition metal defects’ radiative lifetimes (tabulated in Table 1) are long, exceeding μs. Therefore, they are not good candidates for SPE. In addition, while they still are potential spin qubits with optically allowed intra-defect transitions, optical readout of these defects will be difficult. Referring to Table 1 and the expression of radiative lifetime in Eq. (9) we can see this is due to their low excitation energies (E0, in the infrared region) and small dipole moment strength ($${\mu }_{\mathrm {e-h}}^{2}$$). The latter is related to the tight localization of the excitonic wavefunction for TiVV and MoVV (shown in Fig. 3d–f), as strong localization of the defect-bound exciton leads to weaker oscillator strength43. On the other hand, the optical properties of the SiVV defect are quite promising for SPEs, as Fig. 3c shows it has a very bright optical transition in the ultraviolet region. 
As a consequence, we find that the radiative lifetime (Table 1) for SiVV is 22.8 ns at G0W0 + BSE@PBE0(α). We note that although the lifetime of SiVV at the level of BSE is similar to that obtained at RPA (13.7 ns), the optical properties of 2D defects at RPA are still unreliable, due to the lack of excitonic effects. For example, the excitation energy (E0) can deviate by ~1 eV and oscillator strengths ($${\mu }_{{\mathrm {e-h}}}^{2}$$) can deviate by an order of magnitude (more details can be found in Supplementary Table 5). Above all, the radiative lifetime of SiVV is comparable to experimentally observed SPE defects in h-BN34, showing that SiVV is a strong SPE defect candidate in h-BN. ### Multiplet structure and excited-state dynamics Finally, we discuss the excited-state dynamics of the spin qubit candidates TiVV and MoVV defects in h-BN, where the possibility of ISC is crucial. This can allow for polarization of the system to a particular spin state by optical pumping, required for realistic spin qubit operation. An overview of the multiplet structure and excited-state dynamics is given in Fig. 4 for the TiVV and MoVV defects. For both defects, the system will begin from a spin-conserved optical excitation from the triplet ground state to the triplet excited state, where next the excited state relaxation and recombination can go through several pathways. The excited state can directly return to the ground state via a radiative (red lines) or nonradiative process (dashed dark blue lines). For the TiVV defect shown in Fig. 4a, we find the system may relax to another excited state with lower symmetry through a pseudo-Jahn–Teller distortion (PJT; solid dark blue lines), and ultimately recombine back to the ground state nonradiatively. Most importantly, a third pathway is to nonradiatively relax to an intermediate singlet state through a spin–flip ISC and then again recombine back to the ground state (dashed light-blue lines). This ISC pathway is critical for the preparation of a pure spin state, similar to the NV center in diamond. Below, we will discuss our results for the lifetime of each radiative or nonradiative process, in order to determine the most competitive pathway under the operation condition. First, we will consider the direct ground state recombination processes. Figure 5 shows the configuration diagram of the TiVV and MoVV defects. The zero-phonon line (ZPL) for direct recombination can be accurately computed by subtracting its vertical excitation energy computed at BSE (0.56 eV for TiVV and 1.08 eV for MoVV) by its relaxation energy in the excited state (i.e. Franck–Condon shift44, ΔEFC in Fig. 5). This yields ZPLs of 0.53 and 0.91 eV for TiVV and MoVV, respectively. Although this method accurately includes both many-body effects and Franck–Condon shifts, it is difficult to evaluate ZPLs for the triplet to singlet-state transition currently. Therefore, we compared it with the ZPLs computed by the constrained occupation DFT (CDFT) method at PBE. This yields ZPLs of 0.49 and 0.92 eV for TiVV and MoVV, respectively, which are in great agreement with the ones obtained from BSE excitation energies subtracting ΔEFC above. Lastly, the radiative lifetimes for these transitions are presented in Table 1 as discussed in the earlier section, which shows TiVV and MoVV have radiative lifetimes of 195 and 33 μs, respectively (red lines in Fig. 4). 
In terms of nonradiative properties, the small Huang–Rhys (Sf) for the $$|_{1}^{3}A^{\prime\prime} \rangle$$ to $$|_{0}^{3}A^{\prime\prime} \rangle$$ the transition of the TiVV defect (0.91) implies extremely small electron–phonon coupling and potentially an even slower nonradiative process. On the other hand, Sf for the $$|_{1}^{3}A\rangle$$ to $$|_{0}^{3}A\rangle$$ the transition of the MoVV defect is sizable (22.05) and may indicate a possible nonradiative decay. Following the formalism presented in ref. 26, we computed the nonradiative lifetime of the ground state direct recombination (T = 10 K is chosen to compare with the measurement at cryogenic temperatures45). Consistent with their Huang–Rhys factors, the nonradiative lifetime of TiVV is found to be 10 s, while the nonradiative lifetime of the MoVV defect is found to be 0.02 μs. The former lifetime is indicative of a forbidden transition; however, the TiVV defect also possesses a PJT effect in the triplet excited state (red curve in Fig. 5a). Due to the PJT effect, the excited state (CS, $$|_{1}^{3}A^{\prime\prime}\rangle$$) can relax to lower symmetry (C1, $$|_{1}^{3}A\rangle$$) with a nonradiative lifetime of 394 ps (solid dark blue line in Fig. 4a, additional details see Supplementary Note 9 and Fig. 11). Afterward, nonradiative decay from $$|_{1}^{3}A\rangle$$ to the ground state ($$|_{0}^{3}A^{\prime\prime} \rangle$$) (dashed dark blue line in Fig. 4a) exhibits a lifetime of 0.044 ps due to a large Huang–Rhys factor (14.95). ### Spin–orbit coupling (SOC) and nonradiative ISC rate Lastly, we considered the possibility of an ISC between the triplet excited state and the singlet ground state for each defect, which is critical for spin qubit application. In order for a triplet to singlet transition to occur, a spin-flip process must take place. For ISC, typically SOC can entangle triplet and singlet states yielding the possibility for a spin-flip transition. To validate our methods for computing SOC (see the “Methods” section), we first computed the SOC strengths for the NV center in diamond. We obtained SOC values of 4.0 GHz for the axial λz and 45 GHz for non-axial λ in fair agreement with previously computed values and experimentally measured values27,46. We then computed the SOC strength for the TiVV defect (λz = 149 GHz, λ = 312 GHz) and the MoVV defect (λz = 16 GHz, λ = 257 GHz). The value of λ in particular leads to the potential for a spin-selective pathway for both defects, analogous to NV center in diamond. To compute the ISC rate, we developed an approach which is a derivative of the nonradiative recombination formalism presented in Eq. (11): $${{{\Gamma }}}_{{\mathrm {ISC}}}=4\pi \hslash {\lambda }_{\perp }^{2}{\widetilde{X}}_{{\mathrm {if}}}(T)$$ (1) $${\widetilde{X}}_{{\mathrm {if}}}(T)=\sum _{n,m}{p}_{{\mathrm {in}}}{\left|\left\langle {\phi }_{fm}({\bf{R}})\right|{\phi }_{{\mathrm {in}}}({\bf{R}})\rangle\right|}^{2}\delta (m\hslash {\omega }_{{\mathrm {f}}}-n\hslash {\omega }_{{\mathrm {i}}}+{{\Delta }}{E}_{{\mathrm {if}}})$$ (2) Compared with previous formalism27, this method allows different values for initial state vibrational frequency (ωi) and final state one (ωf) through explicit calculations of phonon wavefunction overlap. Again to validate our methods we first computed the ISC rate for NV center in diamond. Using the experimental value for λ we obtain an ISC rate for NV center in diamond of 2.3 MHz which is in excellent agreement with the experimental value of 8 and 16 MHz45. 
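A minimal numerical sketch of the phonon part of Eqs. (1)–(2) is given below: it constructs one-dimensional harmonic-oscillator wavefunctions for the initial potential (frequency ωi, centered at Q = 0) and the final potential (frequency ωf, displaced by ΔQ), accumulates thermally weighted squared overlaps, and broadens the energy-conserving delta function into a Gaussian. All quantities are in arbitrary natural units, and every number in the example call is a placeholder rather than a parameter of TiVV or MoVV; the production calculation uses the computed effective frequencies, ΔQ, ΔE_if, and λ⊥ of each defect.

```python
import numpy as np
from scipy.special import eval_hermite, factorial

def ho_wavefunction(n, omega, q0, q):
    """1D harmonic-oscillator eigenfunction (mass = hbar = 1), centered at q0."""
    xi = np.sqrt(omega) * (q - q0)
    norm = (omega / np.pi) ** 0.25 / np.sqrt(2.0 ** n * factorial(n))
    return norm * eval_hermite(n, xi) * np.exp(-xi ** 2 / 2.0)

def phonon_overlap_factor(omega_i, omega_f, dQ, dE_if, T, n_max=15, sigma=0.02):
    """X_if(T) of Eq. (2): thermal average over initial vibrational levels n of
    |<phi_fm|phi_in>|^2, with delta(m*w_f - n*w_i + dE_if) replaced by a
    normalized Gaussian of width sigma (hbar = kB = 1)."""
    q = np.linspace(-40.0, 40.0, 8001)
    dq = q[1] - q[0]
    levels = np.arange(n_max)
    occupations = np.exp(-levels * omega_i / T)
    occupations /= occupations.sum()          # Boltzmann weights of initial levels
    X = 0.0
    for n in levels:
        psi_i = ho_wavefunction(n, omega_i, 0.0, q)
        for m in range(n_max):
            psi_f = ho_wavefunction(m, omega_f, dQ, q)
            overlap = np.sum(psi_i * psi_f) * dq
            mismatch = m * omega_f - n * omega_i + dE_if
            delta = np.exp(-mismatch ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
            X += occupations[n] * overlap ** 2 * delta
    return X

# placeholder call; Gamma_ISC would then follow Eq. (1) as 4*pi*hbar*lambda_perp^2 * X
X = phonon_overlap_factor(omega_i=0.08, omega_f=0.06, dQ=1.5, dE_if=-0.5, T=0.03)
```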
In final, we obtain an ISC time of 83 ps for TiVV and 2.7 μs for MoVV as shown in Table 2 and light blue lines in Fig. 4. The results of all the nonradiative pathways for the two spin defects are summarized in Table 2 and are displayed in Fig. 4 along with the radiative pathway. We begin by summarizing the results for TiVV first and then discuss MoVV below. In short, for TiVV the spin conserved optical excitation from the triplet ground state $$|_{0}^{3}A^{\prime\prime} \rangle$$ to the triplet excited state $$|_{1}^{3}A^{\prime\prime} \rangle$$ cannot directly recombine nonradiatively due to a weak electron–phonon coupling between these states. In contrast, a nonradiative decay is possible via its PJT state ($$|_{1}^{3}A\rangle$$) with a lifetime of 394 ps. Finally, the process of ISC from the triplet excited state $$|_{1}^{3}A^{\prime\prime}\rangle$$ to the singlet state ($$|_{0}^{1}A^{\prime} \rangle$$) is an order of magnitude faster (i.e. 83 ps) and is in-turn a dominant relaxation pathway. Therefore the TiVV defect in h-BN is predicted to have an expedient spin purification process due to a fast ISC with a rate of 12 GHz. We note that while the defect has a low optical quantum yield and is predicted to not be a good SPE candidate, it is still noteworthy, as to date the only discovered triplet defect in h-BN is the negatively charged boron vacancy, which also does not exhibit SPE and has similarly low quantum efficiency9. Meanwhile, the leveraged control of an extrinsic dopant can offer advantages in spatial and chemical nature of defects. For the MoVV defect, its direct nonradiative recombination lifetime from the triplet excited state $$|_{1}^{3}A\rangle$$ to the ground state $$|_{0}^{3}A\rangle$$ is 0.02 μs. While the comparison with its radiative lifetime (33 μs) is improved compared to the TiVV defect, it still is predicted to have low quantum efficiency. However, again the ISC between $$|_{1}^{3}A\rangle$$ and $$|_{0}^{1}A\rangle$$ is competitive with a lifetime of 2.7 μs. This rate (around MHz) is similar to diamond and implies a feasible ISC. Owing to its more ideal ZPL position (~1eV) and improved quantum efficiency, optical control of the MoVV defect is seen as more likely and may be further improved by other methods such as coupling to optical cavities47,48 and applying strain5,26. ## Discussion In summary, we proposed a general theoretical framework for identifying and designing optically addressable spin defects for the future development of quantum emitter and quantum qubit systems. We started by searching for defects with triplet ground state by DFT total energy calculations which allow for rapid identification of possible candidates. Here we found that the TiVV and MoVV defects in h-BN have a neutral triplet ground state. We then computed ZFS of secondary spin quantum sublevels and found they are sizable for both defects, larger than that of NV center in diamond, enabling possible control of these levels for qubit operation. In addition, we screened for potential SPEs in h-BN based on allowed intra-defect transitions and radiative lifetimes, leading to the discovery of SiVV. Next, the electronic structure and optical spectra of each defect were computed from many-body perturbation theory. Specifically, the SiVV defect is shown to possess an exciton radiative lifetime similar to experimentally observed SPEs in h-BN and is a potential SPE candidate. Finally, we analyzed all possible radiative and nonradiative dynamical processes with first-principles rate calculations. 
In particular, we identified a dominant spin-selective decay pathway via ISC at the TiVV defect which gives a key advantage for initial pure spin state preparation and qubit operation. Meanwhile, for the MoVV defect, we found that it has the benefit of improved quantum efficiency for more realistic optical control. This work emphasizes that the theoretical discovery of spin defects requires careful treatment of many-body interactions and various radiative and nonradiative dynamical processes such as ISC. We demonstrate the high potential of extrinsic spin defects in 2D host materials as qubits for quantum information science. Future work will involve further examination of spin coherence time and its dominant decoherence mechanism, as well as other spectroscopic fingerprints from first-principles calculations to facilitate experimental validation of these defects. ## Methods ### First-principles calculations In this study, we used the open source plane-wave code Quantum ESPRESSO49 to perform calculations on all structural relaxations and total energies with optimized norm-conserving Vanderbilt (ONCV) pseudopotentials50 and a wavefunction cutoff of 50 Ry. A supercell size of 6 × 6 or higher was used in our calculations with a 3 × 3 × 1 k-point mesh. Charged cell total energies were corrected to remove spurious charge interactions by employing the techniques developed in refs. 15,51,52 and implemented in the JDFTx code53. The total energies, charged defect formation energies and geometry were evaluated at the PBE level54. Single-point calculations with k-point meshes of 2 × 2 × 1 and 3 × 3 × 1 were performed using hybrid exchange-correlation functional PBE0(α), where the mixing parameter α = 0.41 was determined by the generalized Koopmans’ condition as discussed in refs. 17,20. Moreover, we used the YAMBO code55 to perform many-body perturbation theory with the GW approximation to compute the quasi-particle correction using PBE0(α) eigenvalues and wavefunctions as the starting point. The RPA and BSE calculations were further solved on top of the GW approximation for the electron–hole interaction to investigate the optical properties of the defects, including absorption spectra and radiative lifetime. ### Thermodynamic charge transition levels and defect formation energy The defect formation energy (FEq) was computed for the TiVV and MoVV defects following: $${\mathrm {F{E}}}_{q}({\varepsilon }_{\mathrm {{F}}})={E}_{\mathrm {{q}}}-{E}_{\mathrm {{pst}}}+\sum _{i}{\mu }_{i}{{\Delta }}{N}_{i}+q{\varepsilon }_{\mathrm {{F}}}+{{{\Delta }}}_{q}$$ (3) where Eq is the total energy of the defect system with charge q, Epst is the total energy of the pristine system, μi and ΔNi are the chemical potential and change in the number of atomic species i, and εF is the Fermi energy. A charged defect correction Δq was computed for charged cell calculations by employing the techniques developed in refs. 15,51. The chemical potential references are computed as $${\mu }_{{\mathrm {Ti}}}={E}_{{\mathrm {Ti}}}^{{\mathrm {bulk}}}$$ (total energy of bulk Ti), $${\mu }_{{\mathrm {Mo}}}={E}_{{\mathrm {Mo}}}^{{\mathrm {bulk}}}$$ (total energy of bulk Mo), $${\mu }_{{\mathrm {BN}}}={E}_{{\mathrm {BN}}}^{{\mathrm {ML}}}$$ (total energy of monolayer h-BN). Meanwhile the corresponding charge transition levels of defects can be obtained from the value of εF where the stable charge state transitions from q to $$q^{\prime}$$. 
$${\epsilon }_{q| q^{\prime} }=\frac{\mathrm {{F{E}}}_{q}-{\mathrm {F{E}}}_{q^{\prime} }}{q^{\prime} -q}$$ (4) ### Zero-field splitting The first-order ZFS due to spin–spin interactions was computed for the dipole–dipole interactions of the electron spin: $${H}_{{\mathrm {ss}}}=\frac{{\mu }_{0}}{4\pi }\frac{{({g}_{{\mathrm {e}}}\hbar )}^{2}}{{r}^{5}}\left[3({{\bf{s}}}_{1}\cdot {\bf{r}})({{\bf{s}}}_{2}\cdot {\bf{r}})-({{\bf{s}}}_{1}\cdot {{\bf{s}}}_{2}){r}^{2}\right].$$ (5) Here, μ0 is the magnetic permeability of vacuum, ge is the electron gyromagnetic ratio, $${\hbar}$$ is the Planck’s constant, s1, s2 is the spin of first and second electron, respectively, and r is the displacement vector between these two electron. The spatial and spin dependence can be separated by introducing the effective total spin S = ∑isi. This yields a Hamiltonian of the form $${H}_{{\mathrm {ss}}}={{\bf{S}}}^{{\mathrm {T}}}\hat{{\bf{D}}}{\bf{S}}$$, which introduces the traceless ZFS tensor $$\hat{{\bf{D}}}$$. It is common to consider the axial and rhombic ZFS parameters D and E which can be acquired from the $$\hat{{\bf{D}}}$$ tensor: $$D=\frac{3}{2}{D}_{zz} \quad {\text{and}} \quad E=({D}_{yy}-{D}_{xx})/2\,.$$ (6) Following the formalism of Rayson et al. 31, the ZFS tensor $$\hat{{\bf{D}}}$$ can be computed with periodic boundary conditions as $${D}_{ab}=\frac{1}{2}\frac{{\mu }_{0}}{4\pi }{({g}_{{\mathrm {e}}}\hslash )}^{2}\sum\limits_{i > j}{\chi }_{ij}\left\langle {{{\Psi }}}_{ij}({{\bf{r}}}_{1},{{\bf{r}}}_{2})\left| \frac{{{\bf{r}}}^{2}{\delta }_{ab}-3{{\bf{r}}}_{a}{{\bf{r}}}_{b}}{{r}^{5}}\right| {{{\Psi }}}_{ij}({{\bf{r}}}_{1},{{\bf{r}}}_{2})\right\rangle .$$ (7) Here the summation on pairs of i, j runs over all occupied spin-up and spin-down states, with χij taking the value +1 for parallel spin and −1 for anti-parallel spin, and Ψij(r1, r2) is a two-particle Slater determinant constructed from the Kohn–Sham wavefunctions of the ith and jth states. This procedure was implemented as a post-processing code interfaced with Quantum ESPRESSO. To verify our implementation is accurate, we computed the ZFS of the NV center in diamond which has a well-established result. Using ONCV pseudopotentials, we obtained a ZFS of 3.0 GHz for NV center, in perfect agreement with previous reported results29. For heavy elements such as transition metals, spin–orbit (SO) coupling can have substantial contribution to ZFS. Here, we also computed the SO contribution of the ZFS as implemented in the ORCA code56,57 (additional details can be found in Supplementary Note 10, Fig. 12, and Table 6). In order to quantitatively study radiative processes, we computed the radiative rate ΓR from Fermi’s Golden Rule and considered the excitonic effects by solving BSE58: $${{{\Gamma }}}_{{\mathrm {R}}}({{\bf{Q}}}_{{\mathrm {ex}}})=\frac{2\pi }{\hslash }\sum _{{q}_{L},\lambda }{\left|\left\langle G,{1}_{{q}_{L},\lambda }| {H}^{{\mathrm {R}}}| S({{\bf{Q}}}_{{\mathrm {ex}}}),0\right\rangle \right|}^{2}\delta (E({{\bf{Q}}}_{{\mathrm {ex}}})-\hslash c{q}_{L}).$$ (8) Here, the radiative recombination rate is computed between the ground state G and the two-particle excited state S(Qex), $${1}_{{q}_{L},\lambda }$$ and 0 denote the presence and absence of a photon, HR is the electron–photon coupling (electromagnetic) Hamiltonian, E(Qex) is the exciton energy, and c is the speed of light. The summation indices in Eq. (8) run over all possible wavevector (qL) and polarization (λ) of the photon. Following the approach described in ref. 
58, the radiative rate (inverse of radiative lifetime τR) in SI unit at zero temperature can be computed for isolated defect–defect transitions as $${{{\Gamma }}}_{{\mathrm {R}}}=\frac{{n}_{D}{e}^{2}}{3\pi {\epsilon }_{0}{\hslash }^{4}{c}^{3}}{E}_{0}^{3}{\mu }_{{\mathrm {e-h}}}^{2},$$ (9) where e is the charge of an electron, ϵ0 is vacuum permittivity, E0 is the exciton energy at Qex = 0, nD is the reflective index of the host material and $${\mu }_{{\mathrm {e-h}}}^{2}$$ is the modulus square of exciton dipole moment with length2 unit. Note that Eq. (9) considers defect–defect transitions in the dilute limit; therefore the lifetime formula for zero-dimensional systems embedded in a host material is used8,59 (also considering nD is unity in isolated 2D systems at the long-wavelength limit). We did not consider the radiative lifetime of TiVV defect at a finite temperature because the first and second excitation energy separation is much larger than kT. Therefore a thermal average of the first and higher excited states is not necessary and the first excited state radiative lifetime is nearly the same at 10 K as zero temperature. In this work, we compute the phonon-assisted nonradiative recombination rate via a Fermi’s golden rule approach: $${{{\Gamma }}}_{{\mathrm {NR}}}=\frac{2\pi }{\hslash }g\sum _{n,m}{p}_{{\mathrm {in}}}| \left\langle fm| {H}^{{\mathrm {e-ph}}}| {\mathrm {in}}\right\rangle {| }^{2}\delta ({E}_{{\mathrm {in}}}-{E}_{{\mathrm {fm}}})$$ (10) Here, ΓNR is the nonradiative recombination rate between electron state i in phonon state n and electron state f in phonon state m, pin is the thermal probability distribution of the initial state $$\left|{\mathrm {in}}\right\rangle$$, He−ph is the electron–phonon coupling Hamiltonian, g is the degeneracy factor and Ein is the energy of vibronic state $$\left|{\mathrm {in}}\right\rangle$$. Within the static coupling and one-dimensional (1D) effective phonon approximations, the nonradiative recombination can be reduced to: $${{{\Gamma }}}_{{\mathrm {NR}}}=\frac{2\pi }{\hslash }g| {W}_{{\mathrm {if}}}{| }^{2}{X}_{{\mathrm {if}}}(T),$$ (11) $${X}_{{\mathrm {if}}}(T)=\sum _{n,m}{p}_{{\mathrm {in}}}{\left|\left\langle {\phi }_{{\mathrm {fm}}}({\bf{R}})| Q-{Q}_{a}| {\phi }_{{\mathrm {in}}}({\bf{R}})\right\rangle \right|}^{2}\delta (m\hslash {\omega }_{\mathrm {{f}}}-n\hslash {\omega }_{{\mathrm {i}}}+{{\Delta }}{E}_{{\mathrm {if}}}),$$ (12) $$\left.{W}_{{\mathrm {if}}}=\left\langle {\psi }_{{\mathrm {i}}}({\bf{r}},{\bf{R}})\left| \frac{\partial H}{\partial Q}\right| {\psi }_{{\mathrm {f}}}({\bf{r}},{\bf{R}})\right\rangle \right|_{{\bf{R}} = {{\bf{R}}}_{a}}.$$ (13) Here, the static coupling approximation naturally separates the nonradiative recombination rate into phonon and electronic terms, Xif and Wif, respectively. The 1D phonon approximation introduces a generalized coordinate Q, with effective frequency ωi and ωf. The phonon overlap in Eq. (12) can be computed using the quantum harmonic oscillator wavefunctions with QQa from the configuration diagram (Fig. 5). Meanwhile the electronic overlap in Eq. (13) is computed by finite difference using the Kohn–Sham orbitals from DFT at the Γ point. The nonradiative lifetime τNR is given by taking the inverse of the rate ΓNR. Supercell convergence of phonon-assisted nonradiative lifetime is shown in Supplementary Note 11 and Table 7. We validated the 1D effective phonon approximation by comparing the Huang–Rhys factor with the full phonon calculations in Supplementary Table 8. 
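As a small worked example of the radiative part, Eq. (9) can be evaluated directly in SI units as below. The exciton energy and dipole strength in the call are order-of-magnitude placeholders, not values computed in this work; with realistic BSE inputs the same expression gives the nanosecond-to-microsecond lifetimes quoted in Table 1.

```python
import numpy as np
from scipy import constants as const

def radiative_lifetime(E0_eV, mu2_A2, n_D=1.0):
    """tau_R = 1/Gamma_R from Eq. (9).
    E0_eV : exciton energy at Q_ex = 0, in eV
    mu2_A2: squared exciton transition dipole in Angstrom^2 (length^2 convention of the text)
    n_D   : refractive index of the host (unity for an isolated 2D layer)."""
    E0 = E0_eV * const.e                      # J
    mu2 = mu2_A2 * 1e-20                      # m^2
    gamma = (n_D * const.e ** 2 * E0 ** 3 * mu2) / (
        3 * np.pi * const.epsilon_0 * const.hbar ** 4 * const.c ** 3)
    return 1.0 / gamma                        # seconds

# placeholder inputs: a ~4 eV exciton with a ~3 Angstrom^2 dipole
print(radiative_lifetime(E0_eV=4.0, mu2_A2=3.0))   # ~1e-9 s, i.e. nanosecond scale
```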
### SOC constant SOC can entangle triplet and singlet states yielding the possibility for a spin–flip transition. The SOC operator is given to zero-order by60 $${H}_{{\mathrm {so}}}=\frac{1}{2}\frac{1}{{c}^{2}{m}_{{\mathrm {e}}}^{2}}\sum _{i}\left({\nabla }_{i}V\times {{\bf{p}}}_{i}\right){{\bf{S}}}_{i}$$ (14) where c is the speed of light, me is the mass of an electron, p and S are the momentum and spin of electron i and V is the nuclear potential energy. The spin–orbit interaction can be rewritten in terms of the angular momentum L and the SOC strength λ as60 $${H}_{{\mathrm {so}}}=\sum _{i}{\lambda }_{\perp }({L}_{x,i}{S}_{x,i}+{L}_{y,i}{S}_{y,i})+{\lambda }_{z}{L}_{z,i}{S}_{z,i}.$$ (15) where λ and λz denote the non-axial and axial SOC strength, respectively. The SOC strength was computed for the TiVV and MoVV defect in h-BN using the ORCA code by TD-DFT56,61. More computational details can be found in Supplementary Note 10. ## Data availability The data that support the findings of this study and the code for the first-principles methods proposed in this study are available from the corresponding author (Yuan Ping) upon reasonable request. ## References 1. 1. Koehl, W. F., Buckley, B. B., Heremans, F. J., Calusine, G. & Awschalom, D. D. Room temperature coherent control of defect spin qubits in silicon carbide. Nature 479, 84–87 (2011). 2. 2. Weber, J. et al. Quantum computing with defects. Proc. Natl Acad. Sci. USA 107, 8513–8518 (2010). 3. 3. Liu, X. & Hersam, M. C. 2D materials for quantum information science. Nat. Rev. Mater. 4, 669–684 (2019). 4. 4. Aharonovich, I. & Toth, M. Quantum emitters in two dimensions. Science 358, 170–171 (2017). 5. 5. Mendelson, N., Doherty, M., Toth, M., Aharonovich, I. & Tran, T. T. Strain-induced modification of the optical characteristics of quantum emitters in hexagonal boron nitride. Adv. Mater. 32, 1908316 (2020). 6. 6. Feldman, M. A. et al. Phonon-induced multicolor correlations in hBN single-photon emitters. Phys. Rev. B 99, 020101 (2019). 7. 7. Yim, D., Yu, M., Noh, G., Lee, J. & Seo, H. Polarization and localization of single-photon emitters in hexagonal boron nitride wrinkles. ACS Appl. Mater. Int. 12, 36362–36369 (2020). 8. 8. Mackoit-Sinkevičienė, M., Maciaszek, M., Van de Walle, C. G. & Alkauskas, A. Carbon dimer defect as a source of the 4.1 eV luminescence in hexagonal boron nitride. Appl. Phys. Lett. 115, 212101 (2019). 9. 9. Kianinia, M., White, S., Fröch, J. E., Bradac, C. & Aharonovich, I. Generation of spin defects in hexagonal boron nitride. ACS Photonics 7, 2147–2152 (2020). 10. 10. Turiansky, M., Alkauskas, A. & Walle, C. Spinning up quantum defects in 2D materials. Nat. Mater. 19, 487–489 (2020). 11. 11. Li, X. et al. Nonmagnetic quantum emitters in boron nitride with ultranarrow and sideband-free emission spectra. ACS Nano 11, 6652–6660 (2017). 12. 12. Ivády, V., Abrikosov, I. A. & Gali, A. First principles calculation of spin-related quantities for point defect qubit research. npj Comput. Mater. 4, 1–13 (2018). 13. 13. Komsa, H.-P., Berseneva, N., Krasheninnikov, A. V. & Nieminen, R. M. Charged point defects in the flatland: accurate formation energy calculations in two-dimensional materials. Phys. Rev. X 4, 031044 (2014). 14. 14. Wang, D. et al. Determination of formation and ionization energies of charged defects in two-dimensional materials. Phys. Rev. Lett. 114, 196801 (2015). 15. 15. Wu, F., Galatas, A., Sundararaman, R., Rocca, D. & Ping, Y. 
First-principles engineering of charged defects for two-dimensional quantum technologies. Phys. Rev. Mater. 1, 071001 (2017). 16. 16. Govoni, M. & Galli, G. Large scale GW calculations. J. Chem. Theory Comput. 11, 2680–2696 (2015). 17. 17. Smart, T. J., Wu, F., Govoni, M. & Ping, Y. Fundamental principles for calculating charged defect ionization energies in ultrathin two-dimensional materials. Phys. Rev. Mater. 2, 124002 (2018). 18. 18. Nguyen, N. L., Colonna, N., Ferretti, A. & Marzari, N. Koopmans-compliant spectral functionals for extended systems. Phys. Rev. X 8, 021051 (2018). 19. 19. Weng, M., Li, S., Zheng, J., Pan, F. & Wang, L.-W. Wannier Koopmans method calculations of 2D material band gaps. J. Chem. Phys. Lett. 9, 281–285 (2018). 20. 20. Miceli, G., Chen, W., Reshetnyak, I. & Pasquarello, A. Nonempirical hybrid functionals for band gaps and polaronic distortions in solids. Phys. Rev. B 97, 121112 (2018). 21. 21. Refaely-Abramson, S., Qiu, D. Y., Louie, S. G. & Neaton, J. B. Defect-induced modification of low-lying excitons and valley selectivity in monolayer transition metal dichalcogenides. Phys. Rev. Lett. 121, 167402 (2018). 22. 22. Gao, S., Chen, H.-Y., Bernardi, M. Radiative properties and excitons of candidate defect emitters in hexagonal boron nitride. Preprint at arXiv:2007.10547 (2020). 23. 23. Xu, J., Habib, A., Kumar, S., Wu, F., Sundararaman, R. & Ping, Y. Spin-phonon relaxation from a universal ab initio density-matrix approach. Nat. Commun. 11, 1–10 (2020). 24. 24. Seo, H., Falk, A. L., Klimov, P. V., Miao, K. C., Galli, G. & Awschalom, D. D. Quantum decoherence dynamics of divacancy spins in silicon carbide. Nat. Commun. 7, 1–9 (2016). 25. 25. Ye, M., Seo, H. & Galli, G. Spin coherence in two-dimensional materials. npj Comput. Mater. 5, 1–6 (2019). 26. 26. Wu, F., Smart, T. J., Xu, J. & Ping, Y. Carrier recombination mechanism at defects in wide band gap two-dimensional materials from first principles. Phys. Rev. B 100, 081407 (2019). 27. 27. Thiering, G. & Gali, A. Ab initio calculation of spin–orbit coupling for an NV center in diamond exhibiting dynamic Jahn–Teller effect. Phys. Rev. B 96, 081115 (2017). 28. 28. Gottscholl, A. et al. Initialization and read-out of intrinsic spin defects in a van der Waals crystal at room temperature. Nat. Mater. 19, 540–545 (2020). 29. 29. Seo, H., Ma, H., Govoni, M. & Galli, G. Designing defect-based qubit candidates in wide-gap binary semiconductors for solid-state quantum technologies. Phys. Rev. Mater. 1, 075002 (2017). 30. 30. Turiansky, M. E., Alkauskas, A., Bassett, L. C. & Walle, C. G. Dangling bonds in hexagonal boron nitride as single-photon emitters. Phys. Rev. Lett. 123, 127401 (2019). 31. 31. Rayson, M. & Briddon, P. First principles method for the calculation of zero-field splitting tensors in periodic systems. Phys. Rev. B 77, 035119 (2008). 32. 32. Zolnhofer, E. M. et al. Electronic structure and magnetic properties of a titanium (II) coordination complex. Inorg. Chem. 59, 6187–6201 (2020). 33. 33. Tran, T. T. et al. Robust multicolor single photon emission from point defects in hexagonal boron nitride. ACS Nano 10, 7331–7338 (2016). 34. 34. Schell, A. W., Takashima, H., Tran, T. T., Aharonovich, I. & Takeuchi, S. Coupling quantum emitters in 2D materials with tapered fibers. ACS Photonics 4, 761–767 (2017). 35. 35. Ahmadpour Monazam, M. R., Ludacka, U., Komsa, H.-P. & Kotakoski, J. Substitutional Si impurities in monolayer hexagonal boron nitride. Appl. Phys. Lett. 115, 071604 (2019). 36. 36. Sajid, A. 
& Thygesen, K. S. VNCB defect as source of single photon emission from hexagonal boron nitride. 2D Mater. 7, 031007 (2020). 37. 37. Fuchs, F., Bechstedt, F., Shishkin, M. & Kresse, G. Quasiparticle band structure based on a generalized Kohn–Sham scheme. Phys. Rev. B 76, 115109 (2007). 38. 38. Bechstedt, F. Many-Body Approach to Electronic Excitations (Springer-Verlag, 2016). 39. 39. Ping, Y., Rocca, D. & Galli, G. Electronic excitations in light absorbers for photoelectrochemical energy conversion: first principles calculations based on many body perturbation theory. Chem. Soc. Rev. 42, 2437–2469 (2013). 40. 40. Rocca, D., Ping, Y., Gebauer, R. & Galli, G. Solution of the Bethe–Salpeter equation without empty electronic states: application to the absorption spectra of bulk systems. Phys. Rev. B 85, 045116 (2012). 41. 41. Ping, Y., Rocca, D., Lu, D. & Galli, G. Ab initio calculations of absorption spectra of semiconducting nanowires within many-body perturbation theory. Phys. Rev. B 85, 035316 (2012). 42. 42. Ping, Y., Rocca, D. & Galli, G. Optical properties of tungsten trioxide from first-principles calculations. Phys. Rev. B 87, 165203 (2013). 43. 43. Hours, J., Senellart, P., Peter, E., Cavanna, A. & Bloch, J. Exciton radiative lifetime controlled by the lateral confinement energy in a single quantum dot. Phys. Rev. B 71, 161306 (2005). 44. 44. Van de Walle, C. G. & Neugebauer, J. First-principles calculations for defects and impurities: applications to III-nitrides. J. Appl. Phys. 95, 3851–3879 (2004). 45. 45. Goldman, M. L. et al. Phonon-induced population dynamics and intersystem crossing in nitrogen-vacancy centers. Phys. Rev. Lett. 114, 145502 (2015). 46. 46. Bassett, L. C. et al. Ultrafast optical control of orbital and spin dynamics in a solid-state defect. Science 345, 1333–1337 (2014). 47. 47. Kim, S. et al. Photonic crystal cavities from hexagonal boron nitride. Nat. Commun. 9, 1–8 (2018). 48. 48. Zhong, T. et al. Optically addressing single rare-earth ions in a nanophotonic cavity. Phys. Rev. Lett. 121, 183603 (2018). 49. 49. Giannozzi, P. et al. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. J. Phys.: Condens. Matter 21, 395502 (2009). 50. 50. Hamann, D. R. Optimized norm-conserving Vanderbilt pseudopotentials. Phys. Rev. B 88, 085117 (2013). 51. 51. Sundararaman, R. & Ping, Y. First-principles electrostatic potentials for reliable alignment at interfaces and defects. J. Chem. Phys. 146, 104109 (2017). 52. 52. Wang, D. & Sundararaman, R. Layer dependence of defect charge transition levels in two-dimensional materials. Phys. Rev. B 101, 054103 (2020). 53. 53. Sundararaman, R., Letchworth-Weaver, K., Schwarz, K. A., Gunceler, D., Ozhabes, Y. & Arias, T. JDFTx: software for joint density-functional theory. SoftwareX 6, 278–284 (2017). 54. 54. Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865 (1996). 55. 55. Marini, A., Hogan, C., Grüning, M. & Varsano, D. Yambo: an ab initio tool for excited state calculations. Comput. Phys. Commun. 180, 1392–1403 (2009). 56. 56. Neese, F. The ORCA program system. WIREs Comput. Mol. Sci. 2, 73–78 (2012). 57. 57. Neese, F. Calculation of the zero-field splitting tensor on the basis of hybrid density functional and Hartree–Fock theory. J. Chem. Phys. 127, 164112 (2007). 58. 58. Wu, F., Rocca, D. & Ping, Y. Dimensionality and anisotropicity dependence of radiative recombination in nanostructured phosphorene. J. Mater. Chem. 
C 7, 12891–12897 (2019). 59. 59. Gupta, S., Yang, J.-H. & Yakobson, B. I. Two-level quantum systems in two-dimensional materials for single photon emission. Nano Lett. 19, 408–414 (2018). 60. 60. Maze, J. R. et al. Properties of nitrogen-vacancy centers in diamond: the group theoretic approach. N. J. Phys. 13, 025025 (2011). 61. 61. de Souza, B., Farias, G., Neese, F. & Izsák, R. Predicting phosphorescence rates of light organic molecules using time-dependent density functional theory and the path integral approach to dynamics. J. Chem. Theory Comput. 15, 1896–1904 (2019). 62. 62. Towns, J. et al. XSEDE: accelerating scientific discovery. Comput. Sci. Eng. 16, 62–74 (2014). ## Acknowledgements We acknowledge Susumu Takahashi for helpful discussions. This work is supported by the National Science Foundation under grant nos. DMR-1760260, DMR-1956015, and DMR-1747426. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. T.J.S. acknowledges the LLNL Graduate Research Scholar Program and funding support from LLNL LDRD 20-SI-004. This research used resources of the Scientific Data and Computing center, a component of the Computational Science Initiative, at Brookhaven National Laboratory under Contract No. DE-SC0012704, the lux supercomputer at UC Santa Cruz, funded by NSF MRI grant AST 1828315, the National Energy Research Scientific Computing Center (NERSC) a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231, the Extreme Science and Engineering Discovery Environment (XSEDE) which is supported by National Science Foundation Grant No. ACI-154856262. ## Author information Authors ### Contributions Y.P. established the theoretical models and supervised the project, T.J.S. and K.L. performed the calculations and data analysis, Y.P. and J.X. discussed the results, and all authors participated in the writing of this paper. T.J.S. and K.L. contributed equally to this work. ### Corresponding author Correspondence to Yuan Ping. ## Ethics declarations ### Competing interests The authors declare no competing interests. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Smart, T.J., Li, K., Xu, J. et al. Intersystem crossing and exciton–defect coupling of spin defects in hexagonal boron nitride. npj Comput Mater 7, 59 (2021). https://doi.org/10.1038/s41524-021-00525-5
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8172446489334106, "perplexity": 3801.086593963475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00426.warc.gz"}
https://www.doityourself.com/forum/rugs-carpets-carpeting/185596-how-can-we-do-baseboards-before-carpet-if-tack-strip-already-down.html
# How can we do baseboards before carpet if tack strip is already down?

#1 10-24-04, 02:08 PM, txatmag

We tore up the carpet and the pad in our main room to do some remodeling. We originally had lawyer block panelling in our main room. We removed it and replaced it with drywall. The tack strip stayed down from where the original carpet laid. We want to put the baseboards on before the carpet comes, which is in three days. However, the question is primarily about the tack strip. We would rather not remove it, but the distance from the new drywall is different than from the old panelling. How far is the tack strip supposed to be from the wall? Ours is 1/2 inch away from the wall and we haven't yet added the baseboard. Now, do we add the baseboards before or after the carpet? If we add them before the carpet, how far up do we go? (Going up a 1/2 inch like other messages have suggested doesn't allow any space between the baseboard and the tack strip.) So how do they tuck it in?

#2 10-24-04, 04:01 PM, Member from Canton, Ohio

Go ahead and take the old tackless out and install the baseboard 1/4" to 3/8" above the floor. 1/2 inch is a little high for my taste, and unless the chosen carpet is really thick there would be a void between the two.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8422418236732483, "perplexity": 3636.4913822449757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710829.5/warc/CC-MAIN-20221201153700-20221201183700-00011.warc.gz"}
https://iwaponline.com/aqua/article-abstract/62/3/155/29139/Bench-scale-evaluation-of-Fe-II-ions-on-haloacetic?redirectedFrom=fulltext
Cast iron pipes were installed broadly in North American water utilities, particularly in older cities such as Halifax, NS, and other cities in the northeastern portions of Canada and the USA. Many of these cast iron pipes are corroded and are continuous sources of Fe(II) ions in drinking water distribution systems. In this paper, the results of an experimental investigation into the factors influencing the formation of haloacetic acids (HAAs) in the presence of Fe(II) ions are presented. The experiments were conducted using NaHCO3-buffered synthetic water samples with different characteristics (i.e. pH, phosphate, stagnation time) simulating water distribution systems. The results showed that Fe(II) ions significantly reduced HAA formation in the different reaction systems at the 95% confidence level. In the control water systems, pH had no significant impact; however, in the presence of Fe(II) ions, pH significantly increased HAA formation (α = 0.05). In contrast, a phosphate-based corrosion inhibitor significantly (α = 0.05) reduced HAA formation in the presence of different dosages of Fe(II) ions over reaction periods of 24, 48, 84 and 130 h. The significant factors influencing HAA formation and distribution, and their ranking, were identified using a 2^4 full factorial design approach.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8825547695159912, "perplexity": 3264.5017530397918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203493.88/warc/CC-MAIN-20190324210143-20190324232143-00505.warc.gz"}
https://math.stackexchange.com/questions/3231079/decomposition-and-inertia-fields-in-the-factorization-of-3-in-mathbbq-zet
# Decomposition and inertia fields in the factorization of $3$ in $\mathbb{Q}(\zeta_{24})$

I've seen the following exercise from an old problem sheet:

For $$\zeta:=\zeta_{24}$$ a primitive $$24$$-th root of unity and $$\mathcal{O}:=\mathbb{Z}[\zeta]$$, determine the prime decomposition of $$3$$. Determine the decomposition and inertia fields of the primes above $$3$$. [Hint: show that there is a unique degree-$$4$$ subextension $$F$$ of $$\mathbb{Q}(\zeta)|\mathbb{Q}$$ in which $$3$$ does not ramify, and that $$F$$ is the inertia field. Describe $$F$$ explicitly, then determine all quadratic fields $$E$$ under $$F$$ and find one where $$3$$ splits]

Using a famous theorem on the decomposition of primes in cyclotomic fields, we find easily that $$3\mathcal{O}=(\mathfrak{p}\mathfrak{q})^2$$ for some primes $$\mathfrak{p}, \mathfrak{q}$$. For $$G:=\text{Gal}(\mathbb{Q}(\zeta)|\mathbb{Q})$$, we have $$G\simeq(\mathbb{Z}/(24))^\times=\{\overline{1},\overline{5},\overline{7},\overline{11},\overline{13},\overline{17},\overline{19},\overline{23}\}$$. Since $$\overline{d}^2=\overline{1}$$ for all $$\overline{d}\neq \overline{1}$$, all subgroups $$H\leq G$$ of order $$2$$ are of the form $$\langle\overline{d}\rangle$$ with $$\overline{d}\in G\setminus\{\overline{1}\}$$. By the Galois correspondence, $$F$$ must have the form $$\mathbb{Q}(\zeta)^H$$ for some $$H$$ as above. My questions are:

1) How do we know whether or not $$3$$ ramifies in $$\mathbb{Q}(\zeta)^H$$ for a given $$H$$?

2) Once we have $$F$$, how do we find $$E$$?

• By the way, you can find $\mathfrak{p}$ and $\mathfrak{q}$ explicitly in the factorization of $(3)$ using Proposition (8.3) in Neukirch's ANT book. The 24-th cyclotomic polynomial is $x^8 - x^4 + 1$, and its factorization mod $3$ is $(x^2 + x + 2)^2(x^2 + 2x + 2)^2$. The proposition then says that the primes are $\mathfrak{p} = \langle 3, \zeta_{24}^2 + \zeta_{24} + 2\rangle$ and $\mathfrak{q} = \langle 3, \zeta_{24}^2 + 2\zeta_{24} + 2\rangle$. – Tob Ernack May 18 at 22:04
• For the decomposition group, maybe you can find the explicit subgroup of $G$ (in terms of maps $\zeta_{24} \to \zeta_{24}^i, \gcd(i, 24) = 1$) that sends $\zeta_{24}^2 + \zeta_{24} + 2$ back to $\mathfrak{p}$. – Tob Ernack May 18 at 22:11
• To be honest, I just used WolframAlpha. But given the theorem that you mentioned about prime factorizations in cyclotomic fields, you can already guess the form of the factorization, and use a bit of brute force on the irreducible quadratics mod $3$. – Tob Ernack May 18 at 22:28
• I think the approach hinted at in the problem statement might be more elegant actually. I haven't thought it through yet but it might spare you these computations. – Tob Ernack May 18 at 22:33
• Ok looking at their approach, one idea could be that the fixed field of $\langle \overline{d}\rangle$ is $\mathbb{Q}\left(\zeta_{24} + \zeta_{24}^{d}\right)$ (I haven't proved that). The minimal polynomial of $\zeta_{24} + \zeta_{24}^d$ can be computed for each $d$ in $\{1, 5, 11, ..., 23\}$ (incidentally this would prove that the fixed fields really are what I said, by checking that the degree is $4$). Then you can check whether $3$ ramifies by checking whether it divides the discriminant. This approach should work although there might be a smarter way to avoid the computations. – Tob Ernack May 18 at 23:43
This is your wanted subfield $F$. Obviously $F$ is the inertia field, as it's the biggest subfield in which ramification doesn't occur. Moreover, using the fact that

$$\text{Gal}(\mathbb{Q}(\zeta_{24})/\mathbb{Q}) \cong \text{Gal}(\mathbb{Q}(\zeta_{8})/\mathbb{Q}) \times \text{Gal}(\mathbb{Q}(\zeta_{3})/\mathbb{Q})$$

we get that $\mathbb{Q}(\zeta_8)$ corresponds to $H = \{1,17\}$ in $(\mathbb{Z}/(24))^\times$.

Now the quadratic subfields of $F$ are $\mathbb{Q}(i), \mathbb{Q}(\sqrt{2})$ and $\mathbb{Q}(i\sqrt{2})$. It's not hard to see that $3$ is inert in $\mathbb{Q}(i)$ and $\mathbb{Q}(\sqrt{2})$, while it splits in $\mathbb{Q}(i\sqrt{2})$. Hence the decomposition field is $\mathbb{Q}(i\sqrt{2})$.

• How did you find out that $\mathbb{Q}(\zeta_8)$ corresponds to $H=\{1,17\}$ from the fact that $\text{Gal}(\mathbb{Q}(\zeta_{24})|\mathbb{Q})\simeq \text{Gal}(\mathbb{Q}(\zeta_8)|\mathbb{Q})\times\text{Gal}(\mathbb{Q}(\zeta_3)|\mathbb{Q})$? – rmdmc89 May 31 at 0:00
• And how did you conclude that the quadratic subfields of $F$ are $\mathbb{Q}(i)$, $\mathbb{Q}(\sqrt{2})$, $\mathbb{Q}(i\sqrt{2})$? I'm sure there are many ways to do it, but I'm curious to know how you did it – rmdmc89 May 31 at 0:09
• @rmdmc89 From the Chinese Remainder Theorem we have that $(\mathbb{Z}/(24))^\times \cong (\mathbb{Z}/(8))^\times \times (\mathbb{Z}/(3))^\times$, where the isomorphism is given by $n \to (n \mod 8, n\mod 3)$. Now the group fixing $\mathbb{Q}(\zeta_8)$ is given by $\{1\} \times (\mathbb{Z}/(3))^\times$, which under the isomorphism corresponds to elements of $(\mathbb{Z}/(24))^\times$ having remainder $1$ modulo $8$. They are exactly $1$ and $17$. – Stefan4024 May 31 at 8:00
• @rmdmc89 One way is to use the Galois group of $\mathbb{Q}(\zeta_8)$ and see what elements are fixed by the subgroups of order $2$. However this method is tedious. The easier method would be to use the explicit form of $\zeta_8$, i.e. $\frac{1+i}{\sqrt{2}}$. We have $\zeta_8^2 = \frac{1+2i-1}{2} = i$. Thus $\mathbb{Q}(i) \subset F$. Also we have that $\zeta_8 + \zeta_8^{-1} = \frac{1+ i}{\sqrt{2}} + \frac{1-i}{\sqrt{2}} = \sqrt{2}$. Thus $\mathbb{Q}(\sqrt{2}) \subset F$. From above we also have that $\mathbb{Q}(i\sqrt{2}) \subset F$. Since there are 3 quadratic fields we have found them all. – Stefan4024 May 31 at 8:08
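As a quick computational cross-check of the factorization quoted in the comments, one could run something like the following sketch (assuming SymPy; the polynomial $x^8 - x^4 + 1$ is hard-coded from the comment rather than generated):

```python
# Factor the 24th cyclotomic polynomial x^8 - x^4 + 1 over GF(3).
# Expected: two irreducible quadratic factors, each squared, matching
# (x^2 + x + 2)^2 (x^2 + 2x + 2)^2 from the comment above (SymPy may
# print the coefficient 2 as -1; these agree mod 3).
from sympy import Symbol, factor_list

x = Symbol('x')
phi24 = x**8 - x**4 + 1
print(factor_list(phi24, modulus=3))
```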
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 50, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9983834028244019, "perplexity": 1731.2308333332987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999163.73/warc/CC-MAIN-20190620065141-20190620091141-00549.warc.gz"}
https://ewintang.com/blog/2019/06/13/some-settings-supporting-efficient-state-preparation/
I wrote most of this list to procrastinate on the flight back from TQC (which was great!). So, for my own reference: here's some settings where efficient state preparation / data loading is possible, and classical versions of these protocols.

Notes:

• There might be errors, especially in details of the quantum protocols, and some of the algorithms may be suboptimal (note the streaming setting, in particular). Let me know if you notice either of these.
• Some relevant complexity research here is in QSampling (Section 4).
• All these runtimes should have an extra $O(\log n)$ factor, since we assume that indices and entries take $\log n$ bits/qubits to specify. However, I'm going to follow the convention from classical computing and ignore these factors, hopefully with little resulting confusion.

For all that follows, we are given $v \in \mathbb{C}^n$ in some way and want to output

1. for the quantum case, a copy of the state $\ket{v} = \sum_{i=1}^n \frac{v_i}{\|v\|} \ket{i}$, and
2. for the classical case, the pair $(i,v_i)$ output with probability $\frac{\vert v_i\vert^2}{\|v\|^2}$.

| type | sparse | uniform | integrable | QRAM | streamed |
| --- | --- | --- | --- | --- | --- |
| quantum | $O(s)$ | $O(C\log\frac1\delta)$ | $O(I \log n)$ | $O(\log n)$ depth | $O(1)$ space with 2 passes |
| classical | $O(s)$ | $O(C^2\log\frac1\delta)$ | $O(I\log n)$ | $O(\log n)$ | $O(1)$ space with 1 pass |

Recall that if we want to prepare an arbitrary quantum state, we need at least $\Omega(\sqrt{n})$ time by search lower bounds, so for some settings of the above constants, these protocols are exponentially faster than the naive strategy. Further recall that state preparation and sampling both have easy protocols running in $O(n)$ time.

## $v$ is sparse

We assume that $v$ has at most $s$ nonzero entries and we can access a list of the nonzero entries $((i_1,v_{i_1}),(i_2,v_{i_2}),\ldots,(i_s,v_{i_s}))$. Thus, we have the oracle $a \to (i_a, v_{i_a})$. We can prepare the quantum state and classical sample by preparing the vector $v' \in \mathbb{C}^s$ where $v_a' = v_{i_a}$, and then using the oracle to swap out the index $a$ with $i_a$. This gives $O(s)$ classical and quantum time.

## $v$ is close-to-uniform

We assume that $\max\vert v_i\vert \leq C\frac{\|v\|}{\sqrt{n}}$ and we know $C, \|v\|$. Notice that we don't give a lower bound on the size of entries, but we can't have too many small entries, since this would lower the norm. Also notice that $C \geq 1$.

Quantumly, given the typical oracle $\ket{i}\ket{0} \to \ket{i}\ket{v_i}$ we can prepare the state

Measuring the ancilla and post-selecting on 0 gives $\ket{v}$. This happens with probability $\frac{1}{C^2}$, and with amplitude amplification this means we can get a copy of the state with probability $\geq 1-\delta$ in $O(C\log\frac1\delta)$ time.

Classically, we perform rejection sampling from the uniform distribution: pick an index uniformly at random, and keep it with probability $\frac{v_i^2n}{\|v\|^2C^2}$; otherwise, restart. This outputs the correct distribution and gives a sample in $O(C^2\log\frac1\delta)$ time.

## $v$ is efficiently integrable

We assume that, given $1 \leq a \leq b \leq n$, I can compute $\sqrt{\sum_{i=a}^b |v_i|^2}$ in $O(I)$ time. This assumption and the resulting quantum preparation routine comes from Grover-Rudolph. The quantum algorithm uses one core subroutine: adding an extra qubit, sending $\ket{v^{(k)}} \to \ket{v^{(k+1)}}$, where

All that's necessary is to apply it $O(\log n)$ times and add the phase at the end.
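(An aside before continuing: here is a small classical sketch of the rejection sampler from the close-to-uniform section above. The function name, the toy vector, and the use of numpy are my own choices, not part of the original post.)

```python
# Sketch of the classical rejection sampler from the "close-to-uniform" section:
# pick a uniform index, accept it with probability |v_i|^2 n / (C^2 ||v||^2).
import numpy as np

rng = np.random.default_rng(0)

def sample_close_to_uniform(v, C, norm_v):
    n = len(v)
    while True:
        i = rng.integers(n)                                   # uniform index
        if rng.random() < (abs(v[i])**2 * n) / (C**2 * norm_v**2):
            return i, v[i]                                    # accept, else restart

v = np.array([1.0, -1.0, 2.0, 0.5, 1.5])
norm_v = np.linalg.norm(v)
C = np.max(np.abs(v)) * np.sqrt(len(v)) / norm_v              # smallest valid C
counts = np.bincount([sample_close_to_uniform(v, C, norm_v)[0] for _ in range(20000)],
                     minlength=len(v))
print(counts / counts.sum())        # should approximate |v_i|^2 / ||v||^2
print(v**2 / norm_v**2)
```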
I haven't worked it out, but I think you can run the subroutine efficiently using three calls to the integration oracle, giving $O(I\log n)$ time.

Classically, we can do essentially the same thing: the integration oracle means that we can compute marginal probabilities. Thus, we can sample from the distribution on the first bit, then sample from the distribution on the second bit conditioned on our value of the first bit, and so on. This also gives $O(I\log n)$ time.

## $v$ is stored in a dynamic data structure

We assume that our vector can be stored in a data structure that supports efficient updating of entries. Namely, we use the standard binary search tree data structure (see, for example, Section 2.2.2 of Prakash's thesis). This is a simple data structure with many nice properties, including $O(\log n)$ time updates. If you want to prepare many states corresponding to similar vectors, this is a good option.

There's not much more to say, since the protocol is the same as the integrability protocol. The only difference is that, instead of assuming that we can compute interval sums efficiently, we instead precompute and store all of the integration oracle calls we need for the state preparation procedure in a data structure. The classical runtime is $O(\log n)$, and the quantum circuit takes $O(n)$ gates but only $O(\log n)$ depth. The quantum algorithm is larger because here, we need to query a linear number of memory cells, as opposed to the integrability assumption, where we only needed to run the integration oracle in superposition. While it may seem that the classical algorithm wins definitively here, the small depth leaves potential for this protocol to run in $O(\log n)$ time in practice, matching the classical algorithm.

## $v$ is streamed

We assume that we can receive a stream of the entries of $v$ in order; we wish to produce a state/sample using as little space as possible.

Classically, we can do this with reservoir sampling. The idea is that we maintain a sample $(s, v_s)$ from all of the entries we've seen before, along with their squared norm $\lambda = \sum_{i=1}^k \vert v_i\vert^2$. Then, when we receive a new entry $v_{k+1}$, we swap our sample to $(k+1,v_{k+1})$ with probability $\vert v_{k+1}\vert^2/(\lambda + \vert v_{k+1}\vert^2)$ and update our $\lambda$ to $\lambda + \vert v_{k+1}\vert^2$. After we go through all of $v$'s entries, we get a sample only using $O(1)$ space. (This is a particularly nice algorithm for sampling from a vector, since it has good locality and can be generalized to get $O(k)$ samples in $O(k)$ space and one pass.)

Quantumly, I only know how to prepare a state in one pass with sublinear space if the norm is known. If you know $\|v\|$, then you can prepare $\ket{n}$, and as entries come in, rotate to get $\frac{v_1}{\|v\|}\ket{1} + \sqrt{1-\frac{|v_1|^2}{\|v\|^2}}\ket{n}$, then $\frac{v_1}{\|v\|}\ket{1} + \frac{v_2}{\|v\|}\ket{2} + \sqrt{1-\frac{|v_1|^2+|v_2|^2}{\|v\|^2}}\ket{n}$, and so on. This uses only $O(\log n)$ qubits, which I notate here as $O(1)$ space. You can relax this assumption to just having an estimate $\lambda$ of $\|v\|$ such that $\frac{1}{\text{poly}(n)} \leq \lambda/\|v\| \leq \text{poly}(n)$. Finally, if you like, you can remove the assumption that you know the norm just by requiring two passes instead of one; in the first pass, compute the norm, and in the second pass, prepare the state. But it'd be nice to remove the assumption entirely.
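(Similarly, a small sketch of the weighted reservoir sampler described in the streamed section; the names and the frequency check are mine, not part of the post.)

```python
# Weighted reservoir sampling: one pass, O(1) working memory, and the returned
# pair (i, v_i) is distributed with probability |v_i|^2 / ||v||^2.
import numpy as np

rng = np.random.default_rng(1)

def reservoir_sample(stream):
    sample, lam = None, 0.0
    for i, v_i in stream:                     # one pass over (index, entry) pairs
        w = abs(v_i)**2
        lam += w                              # running squared norm
        if lam > 0 and rng.random() < w / lam:
            sample = (i, v_i)                 # swap in the new entry
    return sample

v = np.array([3.0, -1.0, 0.5, 2.0])
counts = np.bincount([reservoir_sample(enumerate(v))[0] for _ in range(20000)],
                     minlength=len(v))
print(counts / counts.sum())                  # should approximate v_i^2 / ||v||^2
print(v**2 / np.sum(v**2))
```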
So, is it possible to prepare a quantum state corresponding to a generic $v \in \mathbb{C}^n$, given only one pass through it? Thanks to Chunhao Wang and Nai-Hui Chia for telling me about this problem.
{"extraction_info": {"found_math": true, "script_math_tex": 81, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641268253326416, "perplexity": 337.16222548390766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530385.82/warc/CC-MAIN-20190724041048-20190724063048-00394.warc.gz"}
http://math.stackexchange.com/questions/418464/projections-of-multivariate-normal-distribution/459443
# Projections of multivariate normal distribution

Given a random vector X with the multivariate normal distribution F(X), we know that, for two vectors a and b, the projections $A=\sum_j a_j X_j$ and $B=\sum_i b_i X_i$ are univariate normal. I'm interested in the joint distribution of A and B. Is their joint distribution normal? Is the dependence between A and B described only by their correlation? (do they have only linear dependence?) Thank you for any insight. References are highly appreciated as well.

- I found the answer to the above question and I thought it would be nice to share it. So the answer is yes: A and B are jointly normal, and so the relation between them is determined by the correlation. This is due to the properties of the characteristic function of a multivariate normal. – KAT Aug 4 '13 at 11:54
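(A numerical sanity check rather than a proof: the following numpy sketch, with a covariance matrix and projection vectors chosen by me, compares the empirical covariance of $(A, B)$ with the theoretical values $a^\top\Sigma a$, $a^\top\Sigma b$, $b^\top\Sigma b$.)

```python
# For X ~ N(0, Sigma), the projections A = a.X and B = b.X should be jointly
# normal with covariance matrix [[a'Sa, a'Sb], [a'Sb, b'Sb]].
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
a = np.array([1.0, -2.0, 0.5])
b = np.array([0.0, 1.0, 1.0])

X = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
A, B = X @ a, X @ b

print(np.cov(A, B))                                   # empirical covariance of (A, B)
print(np.array([[a @ Sigma @ a, a @ Sigma @ b],
                [b @ Sigma @ a, b @ Sigma @ b]]))     # theoretical covariance
```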
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381906390190125, "perplexity": 187.4540328319709}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500808153.1/warc/CC-MAIN-20140820021328-00361-ip-10-180-136-8.ec2.internal.warc.gz"}
https://math.answers.com/geometry/What_is_the_area_of_a_rectangle_that_is_4cm_long_and_2cm_wide
# What is the area of a rectangle that is 4cm long and 2cm wide?

It is: 4*2 = 8 square cm
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9372237920761108, "perplexity": 3815.3283471548434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534693.28/warc/CC-MAIN-20220520223029-20220521013029-00023.warc.gz"}
https://www.codecademy.com/courses/machine-learning/lessons/logistic-regression/exercises/log-odds
So far, we've learned that the equation for a logistic regression model looks like this:

$ln(\frac{p}{1-p}) = b_{0} + b_{1}x_{1} + b_{2}x_{2} +\cdots + b_{n}x_{n}$

Note that we've replaced y with the letter p because we are going to interpret it as a probability (e.g., the probability of a student passing the exam). The whole left-hand side of this equation is called log-odds because it is the natural logarithm (ln) of odds (p/(1-p)). The right-hand side of this equation looks exactly like regular linear regression!

In order to understand how this link function works, let's dig into the interpretation of log-odds a little more. The odds of an event occurring is:

$Odds = \frac{p}{1-p} = \frac{P(event\ occurring)}{P(event\ not\ occurring)}$

For example, suppose that the probability a student passes an exam is 0.7. That means the probability of failing is 1 - 0.7 = 0.3. Thus, the odds of passing are:

$Odds\ of\ passing = \frac{0.7}{0.3} = 2.\overline{33}$

This means that students are 2.33 times more likely to pass than to fail.

Odds can only be a positive number. When we take the natural log of odds (the log odds), we transform the odds from a positive value to a number between negative and positive infinity, which is exactly what we need! The logit function (log odds) transforms a probability (which is a number between 0 and 1) into a continuous value that can be positive or negative.

### Instructions

1. Suppose that there is a 40% probability of rain today (p = 0.4). Calculate the odds of rain and save it as odds_of_rain. Note that the odds are less than 1 because the probability of rain is less than 0.5. Feel free to print odds_of_rain to see the results.
2. Use the odds that you calculated above to calculate the log odds of rain and save it as log_odds_of_rain. You can calculate the natural log of a value using the numpy.log() function. Note that the log odds are negative because the probability of rain was less than 0.5. Feel free to print log_odds_of_rain to see the results.
3. Suppose that there is a 90% probability that my train to work arrives on-time. Calculate the odds of my train being on-time and save it as odds_on_time. Note that the odds are greater than 1 because the probability is greater than 0.5. Feel free to print odds_on_time to see the results.
4. Use the odds that you calculated above to calculate the log odds of an on-time train and save it as log_odds_on_time. Note that the log odds are positive because the probability of an on-time train was greater than 0.5. Feel free to print log_odds_on_time to see the results.
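(A direct sketch of the four checkpoints above, using the variable names the instructions ask for and numpy as the exercise suggests.)

```python
import numpy as np

p_rain = 0.4
odds_of_rain = p_rain / (1 - p_rain)          # 0.4 / 0.6 ~= 0.667, less than 1
log_odds_of_rain = np.log(odds_of_rain)       # negative, since p < 0.5
print(odds_of_rain, log_odds_of_rain)

p_on_time = 0.9
odds_on_time = p_on_time / (1 - p_on_time)    # 0.9 / 0.1 = 9, greater than 1
log_odds_on_time = np.log(odds_on_time)       # positive, since p > 0.5
print(odds_on_time, log_odds_on_time)
```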
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8463050723075867, "perplexity": 421.82468636894197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00717.warc.gz"}
https://epiga.episciences.org/4944
## Beauville, Arnaud - Limits of the trivial bundle on a curve epiga:4454 - Épijournal de Géométrie Algébrique, November 1, 2018, Volume 2 - https://doi.org/10.46298/epiga.2018.volume2.4454 Limits of the trivial bundle on a curve Authors: Beauville, Arnaud We attempt to describe the rank 2 vector bundles on a curve C which are specializations of the trivial bundle. We get a complete classifications when C is Brill-Noether generic, or when it is hyperelliptic; in both cases all limit vector bundles are decomposable. We give examples of indecomposable limit bundles for some special curves. Volume: Volume 2 Published on: November 1, 2018 Submitted on: April 24, 2018 Keywords: Mathematics - Algebraic Geometry
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913848400115967, "perplexity": 1565.6747372570842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00311.warc.gz"}
https://www.gnu.org/software/gnuastro/manual/html_node/MakeCatalog-general-settings.html
## GNU Astronomy Utilities

#### 7.3.5.2 MakeCatalog general settings

Some of the columns require particular settings (for example the zero point magnitude for measuring magnitudes); the options in this section can be used for such configurations.

-z FLT
--zeropoint=FLT
The zero point magnitude for the input image, see Flux Brightness and magnitude.

-E
--skysubtracted
If the image has already been sky subtracted by another program, then you need to notify MakeCatalog through this option. Note that this is only relevant when the Signal to noise ratio is to be calculated.

-T FLT
--threshold=FLT
For all the columns, only consider pixels that are above a given relative threshold. Symbolizing the value of this option as $T$, the Sky of a pixel at $(i,j)$ as $\mu_{ij}$, and its standard deviation as $\sigma_{ij}$, that pixel will only be used if its value ($B_{ij}$) satisfies this condition: $B_{ij}>\mu_{ij}+T\sigma_{ij}$. The only calculations that will not be affected are the average river values (--riverave), since they are used as a reference. A commented row will be added in the header of the output catalog that will print the given value; since this is a very important issue, it starts with **IMPORTANT**.

NoiseChisel will detect very diffuse signal, which is useful in most cases where the aggregate properties of the detections are desired, since there is signal there (with the desired certainty). However, in some cases, only the properties of the peaks of the objects/clumps are desired, for example in attempting to separate stars from galaxies: the peaks are the major target and the diffuse regions only act to complicate the separation. With this option, MakeCatalog will simply ignore any pixel below the relative threshold.

This option is not mandatory, so if it isn't given (after reading the command-line and all configuration files, see Configuration files), MakeCatalog will still operate. However, if it has a value in any lower-level configuration file and you want to ignore that value for this particular run or in a higher-level configuration file, then set it to NaN, for example --threshold=nan. Gnuastro uses the C library's strtod function to read floats, which is not case-sensitive in reading NaN values. But to be consistent, it is good practice to only use nan.

--nsigmag=FLT
The median standard deviation (from the standard deviation image) will be multiplied by the value given to this option and its magnitude will be reported in the comments of the output catalog. This value is a per-pixel value, not per object/clump, and is not found over an area or aperture, like the common $5\sigma$ values that are commonly reported as a measure of depth or the upper-limit measurements (see Quantifying measurement limits).
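(For intuition only, not Gnuastro source code: a tiny numpy sketch of the --threshold condition above. The array names and toy values are made up.)

```python
# Keep a pixel only when B_ij > mu_ij + T * sigma_ij, as described for --threshold.
import numpy as np

def relative_threshold_mask(image, sky, sky_std, T):
    """Boolean mask of pixels with image > sky + T * sky_std."""
    return image > sky + T * sky_std

rng = np.random.default_rng(1)
img = rng.normal(10.0, 2.0, size=(5, 5))     # stand-in for the input image
sky = np.full((5, 5), 10.0)                  # stand-in for the Sky image
std = np.full((5, 5), 2.0)                   # stand-in for the Sky standard deviation
print(relative_threshold_mask(img, sky, std, T=1.0))
```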
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8802396655082703, "perplexity": 1296.080179493907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823260.52/warc/CC-MAIN-20171019084246-20171019104246-00701.warc.gz"}
https://www.physicsforums.com/threads/feather-falling-on-mars.172656/
# Feather Falling on Mars

1. Jun 3, 2007

### Worzo

Firstly, this isn't my homework question. I was trying to answer another, broader question for a student, and it boiled down to this one. There's quite a subtle point here, I think, but I just can't grasp it.

Consider stable atmospheric conditions on Mars and Earth. A feather is dropped from a great height on both planets. Which planet gives the feather the higher terminal velocity? Data given is:
- Mars gravity = (1/3)g
- Earth atmosphere: 1000mbar
- Mars atmosphere: 10mbar

So terminal velocity goes as the square root of gravitational force and the inverse square root of viscosity. I can't work out how the viscosity changes with temperature and pressure. Gut feeling tells you that the weaker gravity (a third of Earth's) contributes to lowering the terminal velocity. However, doesn't the fact that the pressure is 100 times smaller contribute to the viscosity somehow? I remember proving in kinetic theory that viscosity is independent of pressure (except for high pressures), but does that hold here? I can't help thinking temperature has something to do with it as well. Any explanation/calculation of Terrestrial/Martian atmospheric viscosity would be most appreciated.

2. Jun 4, 2007

3. Jun 4, 2007

### DaveC426913

AFAIK, Mars' atmo is more akin to vacuum than it is to a real atmo.

4. Jun 5, 2007

### Worzo

That's what I thought, but I can't find any expression for how the viscosity changes at low pressure.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8856029510498047, "perplexity": 1435.1089477810795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542323.80/warc/CC-MAIN-20161202170902-00217-ip-10-31-129-80.ec2.internal.warc.gz"}
https://arxiv.org/abs/1203.6580
# Title:Search for events with large missing transverse momentum, jets, and at least two tau leptons in 7 TeV proton-proton collision data with the ATLAS detector Abstract: A search for events with large missing transverse momentum, jets, and at least two tau leptons has been performed using 2 fb^-1 of proton-proton collision data at sqrt(s) = 7 TeV recorded with the ATLAS detector at the Large Hadron Collider. No excess above the Standard Model background expectation is observed and a 95% CL visible cross section upper limit for new phenomena is set. A 95% CL lower limit of 32 TeV is set on the GMSB breaking scale Lambda independent of tan(beta). These limits provide the most stringent tests to date in a large part of the considered parameter space. Comments: 6 pages plus author list (19 pages total), 3 figures, revised author list, matches published PLB version Subjects: High Energy Physics - Experiment (hep-ex) Journal reference: Phys.Lett. B714 (2012) 180-196 DOI: 10.1016/j.physletb.2012.06.055 Report number: CERN-PH-EP-2012-054 Cite as: arXiv:1203.6580 [hep-ex] (or arXiv:1203.6580v2 [hep-ex] for this version) ## Submission history From: Atlas Publications [view email] [v1] Thu, 29 Mar 2012 16:35:14 UTC (247 KB) [v2] Sat, 28 Jul 2012 11:30:28 UTC (243 KB)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9030207991600037, "perplexity": 4526.767488278136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143646.38/warc/CC-MAIN-20200218085715-20200218115715-00351.warc.gz"}
http://math.stackexchange.com/questions/211689/real-valued-2d-fourier-series
# Real-valued 2D Fourier series? For a (well-behaved) one-dimensional function $f: [-\pi, \pi] \rightarrow \mathbb{R}$, we can use the Fourier series expansion to write $$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n\sin(nx) \right)$$ For a function of two variables, Wikipedia lists the formula $$f(x,y) = \sum_{j,k \in \mathbb{Z}} c_{j,k} e^{ijx}e^{iky}$$ In this formula, $f$ is complex-valued. Is there a similar series representation for real-valued functions of two variables? - Substitute $e^{i\omega} = \cos\omega + i\sin\omega$ and $c_{j,k} = a_{j,k} + ib_{j,k}$ in the formula you get from Wikipedia, and look only at the real value of the result. The formula gets a bit unwieldy due to the 4 $\sin\cos$ combinations you get, but it works... –  fgp Oct 12 '12 at 15:06 Yes! And these types of expansions occur in a variety of applications, e.g., solving the heat or wave equation on a rectangle with prescribed boundary and initial data. As a specific example, we can think of the following expansion as a two dimensional Fourier sine series for $f(x,y)$ on $0<x<a$, $0<y<b$: $$f(x,y)=\sum_{n=1}^\infty \sum_{m=1}^\infty c_{nm}\sin\left({n\pi\, x\over a}\right)\sin\left({m\pi\, y\over b}\right), \quad 0<x<a,\ 0<y<b,$$ where the coefficients (obtained from the same type of orthogonality argument as in the 1D case) are given by \begin{align} c_{nm}&={\int_0^b \int_0^a f(x,y)\sin\left({n\pi\, x\over a}\right)\sin\left({m\pi\, y\over b}\right)\,dx\,dy\over \int_0^b \int_0^a \sin^2\left({n\pi\, x\over a}\right)\sin^2\left({m\pi\, y\over b}\right)\,dx\,dy}\\ &={4\over a b}\int_0^b \int_0^a f(x,y)\sin\left({n\pi\, x\over a}\right)\sin\left({m\pi\, y\over b}\right)\,dx\,dy, \quad n,m=1,2,3,\dots \end{align} For example, the picture below shows (left) the surface $$f(x,y)=30x y^2 (1-x)(1-y)\cos(10x)\cos(10y), \quad 0<x<1,\ 0<y<1,$$ and a plot of the two dimensional Fourier sine series (right) of $f(x,y)$ for $n,m,=1,\dots,5$: Finally, keep in mind that we are not limited just to double sums of the form sine-sine. We could have any combination we like so long as they form a complete orthogonal family on the domain under discussion. - This representation seems to be valid only if the values of the function on the boundary of the rectangle are zero. –  Beni Bogosel Nov 13 '13 at 11:59 Yes, that's why I said, "As a specific example..." The sine functions used there are the eigenfunctions obtained when solving the heat equation on a rectangle where zero boundary conditions are specified. –  JohnD Nov 14 '13 at 0:47 What about the case where cosine terms are required? This answer is useless because it does not address the general case. –  user3728501 Dec 21 '13 at 14:19 @JohnD only details about the coefficients. The correct formula is: $$c_{n,m} = \frac{\int_{0}^a \int_0^b f(x,y)sin(\frac{n\pi x}{a})sin(\frac{n\pi y}{b})dxdy}{{\int_{0}^a \int_0^b sin^2(\frac{n\pi x}{a})sin^2(\frac{n\pi y}{b})dxdy}{}}$$ the impression that $c_{n,m}$ is $1$. That's not true. Cheers! - Yes, it was a typo (I had left off the squares in the denominator). Fixed now. 
–  JohnD Mar 6 '13 at 4:48 By Euler's identity $e^{i\varphi}=\cos{\varphi}+i\sin{\varphi}$ trigonometrical Fourier series expansion $$f(x) \sim \dfrac{a_0}{2} + \sum\limits_{n=1}^{\infty} \left( a_n \cos(nx) + b_n\sin(nx) \right)$$ may be easily transformed into exponential form $$f(x) \sim \sum\limits_{n=-\infty}^{\infty}{c_n{e^{inx}}},$$ and vice versa, where $c_k=\dfrac{1}{2\pi}\int\limits_0^{2\pi}f(x){e^{-ikx}} \, dx$ are complex Fourier coefficients. Function $f$ may be real- or complex-valued.
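(A small numerical sketch of the two dimensional sine-series formulas from JohnD's answer above: crude midpoint-rule quadrature in numpy, with the grid sizes, truncation order, and test point chosen arbitrarily by me.)

```python
# Compute c_{nm} for f on (0,a)x(0,b) by a midpoint Riemann sum, then evaluate
# the truncated double sine series at one point and compare with f itself.
import numpy as np

a, b = 1.0, 1.0
Nx = Ny = 200
x = (np.arange(Nx) + 0.5) * a / Nx
y = (np.arange(Ny) + 0.5) * b / Ny
X, Y = np.meshgrid(x, y, indexing='ij')
f = 30 * X * Y**2 * (1 - X) * (1 - Y) * np.cos(10 * X) * np.cos(10 * Y)

def c(n, m):
    integrand = f * np.sin(n * np.pi * X / a) * np.sin(m * np.pi * Y / b)
    return 4.0 / (a * b) * integrand.sum() * (a / Nx) * (b / Ny)

x0, y0 = 0.3, 0.7
series = sum(c(n, m) * np.sin(n * np.pi * x0 / a) * np.sin(m * np.pi * y0 / b)
             for n in range(1, 6) for m in range(1, 6))
exact = 30 * x0 * y0**2 * (1 - x0) * (1 - y0) * np.cos(10 * x0) * np.cos(10 * y0)
print(series, exact)    # the n, m <= 5 truncation should already be fairly close
```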
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9548519253730774, "perplexity": 556.8860676127272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997878518.58/warc/CC-MAIN-20140722025758-00098-ip-10-33-131-23.ec2.internal.warc.gz"}
http://www.researchgate.net/publication/252146865_Interplay_between_magnetism_and_superconductivity_in_EuFe2-xCoxAs2_studied_by_57Fe_and_151Eu_Mssbauer_spectroscopy
Article # Interplay between magnetism and superconductivity in EuFe2-xCoxAs2 studied by 57Fe and 151Eu Mössbauer spectroscopy • ##### P. J. W. Moll Physical review. B, Condensed matter (Impact Factor: 3.66). 01/2011; 84. DOI: 10.1103/PhysRevB.84.174503 ABSTRACT The compound EuFe2-xCoxAs2 was investigated by means of 57Fe and 151Eu Mössbauer spectroscopy versus temperature (4.2-300 K) for x = 0 (parent), x = 0.34-0.39 (superconductor), and x = 0.58 (overdoped). It was found that the spin density wave (SDW) is suppressed by Co substitution; however, it survives in the region of superconductivity, but iron spectra exhibit some nonmagnetic components in the superconducting region. Europium orders magnetically, regardless of the cobalt concentration, with the spin reorientation from the a-axis in the parent compound toward the c-axis with increasing replacement of iron by cobalt. The reorientation takes place close to the a-c plane. Some trivalent europium appears in EuFe2-xCoxAs2 versus substitution due to the chemical pressure induced by Co atoms, and it experiences some transferred hyperfine field from Eu2+. Iron experiences some transferred field due to the europium ordering for substituted samples in the SDW and nonmagnetic state both, while the transferred field is undetectable in the parent compound. Superconductivity coexists with the 4f-europium magnetic order within the same volume. It seems that superconductivity has some filamentary character in EuFe2-xCoxAs2, and it is confined to the nonmagnetic component seen by the iron Mössbauer spectroscopy. 0 Bookmarks · 42 Views • ##### Article: Local structure and hyperfine interactions of (57)Fe in NaFeAs studied by Mössbauer spectroscopy. [Hide abstract] ABSTRACT: Detailed (57)Fe Mössbauer spectroscopy measurements on superconducting NaFeAs powder samples have been performed in the temperature range 13 K ≤ T < 300 K. The (57)Fe spectra recorded in the paramagnetic range (T > TN ≈ 46 K) are discussed supposing that most of the Fe(2+) ions are located in distorted (FeAs4) tetrahedra of NaFeAs phase, while an additional minor (<10%) component of the spectra corresponds to impurity or intergrowth NaFe2As2 phase with a nominal composition near NaFe2As2. Our results reveal that the structural transition (TS ≈ 55 K) has a weak effect on the electronic structure of iron ions, while at T ≤ TN the spectra show a continuous distribution of hyperfine fields HFe. The shape of these spectra is analyzed in terms of two models: (i) an incommensurate spin density wave modulation of iron magnetic structure, (ii) formation of a microdomain structure or phase separation. It is shown that the hyperfine parameters obtained using these two methods have very similar values over the whole temperature range. Analysis of the temperature dependence HFe(T) with the Bean-Rodbell model leads to ζ = 1.16 ± 0.05, suggesting that the magnetic phase transition is first order in nature. A sharp evolution of the VZZ(T) and η(T) parameters of the full Hamiltonian of hyperfine interactions near T ≈ (TN,TS) is interpreted as a manifestation of the anisotropic electron redistribution between the dxz-, dyz- and dxy-orbitals of the iron ions. Journal of Physics Condensed Matter 08/2013; 25(34):346003. · 2.22 Impact Factor • ##### Article: Electron spin resonance in iron pnictides [Hide abstract] ABSTRACT: We report on electron spin resonance studies in Eu based 122-superconductors where the Eu^2+ ions serve as a probe of the normal and superconducting state. 
In polycrystalline Eu0.5K0.5Fe2As2 the spin-lattice relaxation rate 1/T1^ESR obtained from the ESR linewidth exhibits a Korringa-like linear increase with increasing temperature above Tc evidencing a normal Fermi-liquid behavior. Below Tc the spin lattice relaxation rate 1/T1^ESR follows a T^1.5-behavior without any appearance of a coherence peak. In superconducting EuFe2As1.8P0.2 single crystals we find a similar Korringa slope in the normal state and observe anisotropic spectra for measuring with the external field parallel and perpendicular to the c-axis. In addition, we will discuss the ESR properties of selected systems from the 1111 and 11 families. Physical review. B, Condensed matter 09/2012; 86(9). · 3.66 Impact Factor • ##### Article: Interplay between spin density wave and superconductivity in '122' iron pnictides: 57Fe M\ [Hide abstract] ABSTRACT: Iron-based superconductors Ba0.7Rb0.3Fe2As2 and CaFe1.92Co0.08As2 of the '122' family have been investigated by means of the 14.41-keV Moessbauer transition in 57Fe versus temperature ranging from the room temperature till 4.2 K. A comparison is made with the previously investigated parent compounds BaFe2As2 and CaFe2As2. It has been found that Moessbauer spectra of these superconductors are composed of the magnetically split component due to development of spin density wave (SDW) and non-magnetic component surviving even at lowest temperatures. The latter component is responsible for superconductivity. Hence, the superconductivity occurs in the part of the sample despite the sample is single phase. This phenomenon is caused by the slight variation of the dopant concentration across the sample (crystal). 10/2011;
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8510969281196594, "perplexity": 4007.4124816760154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548623.95/warc/CC-MAIN-20141224185908-00013-ip-10-231-17-201.ec2.internal.warc.gz"}
http://mathhelpforum.com/number-theory/42601-hello-all-need-help-number-theory.html
# Thread: hello to all! need help in number theory 1. ## hello to all! need help in number theory Let p be a prime of the form 4k+3. Prove that either {(p-1)/2}!≡1 (mod p) or{(p-1)/2}!≡-1 (mod p) 2. Originally Posted by jen_mojic Let p be a prime of the form 4k+3. Prove that either {(p-1)/2}!≡1 (mod p) or{(p-1)/2}!≡-1 (mod p) By Wilson's theorem we have, $1\cdot 2\cdot 3 \cdot ... \cdot (p-1) \equiv -1(\bmod p)$. Now, $p-1 \equiv -1$, and $p-2\equiv -2$, and so on ... until the middle. Thus, $(-1)^{(p-1)/2} \left[ (\tfrac{p-1}{2})! \right] \equiv -1(\bmod p)$. However, $(-1)^{(p-1)/2} = -1$ since $p=4k+3$. And therefore we have, $\left[(\tfrac{p-1}{2})!\right]^2 \equiv 1(\bmod p) \implies (\tfrac{p-1}{2})! \equiv \pm 1(\bmod p)$. For example, let $p=7$, then, $1\cdot 2\cdot 3 \cdot 4\cdot 5 \cdot 6 \equiv -1(\bmod 7)$ Do the trick above, $1\cdot 2 \cdot 3 \cdot (-3) \cdot (-2)\cdot (-1) \equiv -1(\bmod 7)$ Thus, $(-1)^3 (3!)^2 \equiv -1(\bmod 7)$ And the rest follows.
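(A quick numerical check of the statement for a few small primes $p \equiv 3 \pmod 4$; plain Python, with no proof content.)

```python
# Verify that ((p-1)/2)! is congruent to +1 or -1 mod p for primes p = 4k+3.
from math import factorial

for p in [7, 11, 19, 23, 31, 43, 47, 59]:     # all congruent to 3 mod 4
    r = factorial((p - 1) // 2) % p
    print(p, r, r in (1, p - 1))              # r is always 1 or p-1, i.e. +-1 mod p
```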
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919956922531128, "perplexity": 3085.402535804386}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720000.45/warc/CC-MAIN-20161020183840-00377-ip-10-171-6-4.ec2.internal.warc.gz"}
https://nbviewer.ipython.org/github/gpeyre/numerical-tours/blob/master/matlab/sparsity_2_cs_images.ipynb
# Compressed Sensing of Images¶ Important: Please read the installation page for details about how to install the toolboxes. $\newcommand{\dotp}[2]{\langle #1, #2 \rangle}$ $\newcommand{\enscond}[2]{\lbrace #1, #2 \rbrace}$ $\newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} }$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\umax}[1]{\underset{#1}{\max}\;}$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\uargmin}[1]{\underset{#1}{argmin}\;}$ $\newcommand{\norm}[1]{\|#1\|}$ $\newcommand{\abs}[1]{\left|#1\right|}$ $\newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. }$ $\newcommand{\pa}[1]{\left(#1\right)}$ $\newcommand{\diag}[1]{{diag}\left( #1 \right)}$ $\newcommand{\qandq}{\quad\text{and}\quad}$ $\newcommand{\qwhereq}{\quad\text{where}\quad}$ $\newcommand{\qifq}{ \quad \text{if} \quad }$ $\newcommand{\qarrq}{ \quad \Longrightarrow \quad }$ $\newcommand{\ZZ}{\mathbb{Z}}$ $\newcommand{\CC}{\mathbb{C}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\EE}{\mathbb{E}}$ $\newcommand{\Zz}{\mathcal{Z}}$ $\newcommand{\Ww}{\mathcal{W}}$ $\newcommand{\Vv}{\mathcal{V}}$ $\newcommand{\Nn}{\mathcal{N}}$ $\newcommand{\NN}{\mathcal{N}}$ $\newcommand{\Hh}{\mathcal{H}}$ $\newcommand{\Bb}{\mathcal{B}}$ $\newcommand{\Ee}{\mathcal{E}}$ $\newcommand{\Cc}{\mathcal{C}}$ $\newcommand{\Gg}{\mathcal{G}}$ $\newcommand{\Ss}{\mathcal{S}}$ $\newcommand{\Pp}{\mathcal{P}}$ $\newcommand{\Ff}{\mathcal{F}}$ $\newcommand{\Xx}{\mathcal{X}}$ $\newcommand{\Mm}{\mathcal{M}}$ $\newcommand{\Ii}{\mathcal{I}}$ $\newcommand{\Dd}{\mathcal{D}}$ $\newcommand{\Ll}{\mathcal{L}}$ $\newcommand{\Tt}{\mathcal{T}}$ $\newcommand{\si}{\sigma}$ $\newcommand{\al}{\alpha}$ $\newcommand{\la}{\lambda}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\Ga}{\Gamma}$ $\newcommand{\La}{\Lambda}$ $\newcommand{\si}{\sigma}$ $\newcommand{\Si}{\Sigma}$ $\newcommand{\be}{\beta}$ $\newcommand{\de}{\delta}$ $\newcommand{\De}{\Delta}$ $\newcommand{\phi}{\varphi}$ $\newcommand{\th}{\theta}$ $\newcommand{\om}{\omega}$ $\newcommand{\Om}{\Omega}$ This tour explores compressed sensing of natural images, using different sparsity priors over a wavelet basis. In [2]: addpath('toolbox_signal') ## Low Pass Linear Measures¶ We first make use of $P$ low pass linear measurements to remove the low frequency content of the image. Natural images are not only sparse over a wavelet domain. They also exhibit a fast decay of the coefficient through the scale. The coarse (low pass) wavelets caries much of the image energy. It thus make sense to measure directly the low pass coefficients. We load an image $f \in \RR^{n^2}$ of $n \times n$ pixels. In [3]: name = 'boat'; n = 256; f = rescale(f); Shortcuts for the wavelet transform $\{\dotp{f}{\psi_m}\}_m$. We only compute up to a scale $J$ so that only $k_0$ sub-bands are transformed. In [4]: k0 = 2; J = log2(n)-k0; Wav = @(f)perform_wavelet_transf(f,J,+1); WavI = @(x)perform_wavelet_transf(x,J,-1); Compute the wavelet transform. In [5]: fw = Wav(f); Display the coefficients. In [6]: clf; plot_wavelet(fw, J); Exercise 1 Compute an approximation |fLow| using the $P=2^{2J}=(n/k_0)^2$ low pass coefficients. In [7]: exo1() In [8]: %% Insert your code here. ## Randomized Orthogonal Measurements¶ We consider a compressed sensing operator that corresponds to randomized orthogonal projections. Extract the high pass wavelet coefficients, $x_0 = \{ \dotp{f}{\psi_m} \}_{m \in I_0}$. In [9]: A = ones(n,n); A(1:2^J,1:2^J) = 0; I0 = find(A==1); x0 = fw(I0); Number of coefficients. 
In [10]: N = length(x0); Number $P_0 = 2^{2J}=(n/k_0)^2$ of low pass measurements. In [11]: P0 = (n/2^k0)^2; Number of CS measurements. In [12]: P = 4 * P0; Generate random permutation operators $S_1,S_2 : \RR^N \rightarrow \RR^N$ so that $S_k(x)_i = x_{\sigma_k(i)}$ where $\sigma_k \in \Sigma_N$ is a random permutation of $\{1,\ldots,N\}$. In [13]: sigma1 = randperm(N)'; sigma2 = randperm(N)'; S1 = @(x)x(sigma1); S2 = @(x)x(sigma2); The adjoint (and also inverse) operators $S_1^*,S_2^*$ (denoted |S1S,S2S|) corresponds to the inverse permutation $\sigma_k^*$ such that $\sigma_k^* \circ \sigma_k(i)=i$. In [14]: sigma1S = 1:N; sigma1S(sigma1) = 1:N; sigma2S = 1:N; sigma2S(sigma2) = 1:N; S1S = @(x)x(sigma1S); S2S = @(x)x(sigma2S); We consider a CS operator $\Phi : \RR^N \rightarrow \RR^P$ that corresponds to a projection on randomized atoms $$(\Phi x)_i = \dotp{x}{ \phi_{\sigma_2(i)}}$$ where $\phi_i$ is a scrambled orthogonal basis $$\phi_i(x) = c_i( \sigma_1(x) )$$ where $\{ c_i \}_i$ is the orthogonal DCT basis. This can be rewritten in compact operator form as $$\Phi x = ( S_2 \circ C \circ S_1 (x) ) \downarrow_P$$ where $S_1,S_2$ are the permutation operators, and $\downarrow_P$ selects the $P$ first entries of a vector. In [15]: downarrow = @(x)x(1:P); Phi = @(x)downarrow(S2(dct(S1(x)))); The adjoint operator is $$\Phi^* x = S_1^* \circ C^* \circ S_2^* (x\uparrow_P)$$ where $\uparrow_P$ append $N-P$ zeros at the end of a vector, and $C^*$ is the inverse DCT transform. In [16]: uparrow = @(x)[x; zeros(N-P,1)]; PhiS = @(x)S1S(idct(S2S(uparrow(x)))); Perform the CS (noiseless) measurements. In [17]: y = Phi(x0); Exercise 2 Reconstruct an image using the pseudo inverse coefficients $\Phi^+ y = \Phi^* y$. In [18]: exo2() In [19]: %% Insert your code here. ## Compressed Sensing Recovery using Douglas Rachford Scheme¶ We consider the minimum $\ell^1$ recovery from the measurements $y = \Phi x_0 \in \RR^P$ $$\umin{\Phi x = y} \normu{x}.$$ This can be written as $$\umin{ x } F(x) + G(x) \qwhereq \choice{ F(x) = i_{\Cc}(x), \\ G(x) = \normu{x}. }$$ where $\Cc = \enscond{x}{\Phi x =y}$. One can solve this problem using the Douglas-Rachford iterations $$\tilde x_{k+1} = \pa{1-\frac{\mu}{2}} \tilde x_k + \frac{\mu}{2} \text{rPox}_{\gamma G}( \text{rProx}_{\gamma F}(\tilde x_k) ) \qandq x_{k+1} = \text{Prox}_{\gamma F}(\tilde x_{k+1},)$$ We have use the following definition for the proximal and reversed-proximal mappings: $$\text{rProx}_{\gamma F}(x) = 2\text{Prox}_{\gamma F}(x)-x$$ $$\text{Prox}_{\gamma F}(x) = \uargmin{y} \frac{1}{2}\norm{x-y}^2 + \ga F(y).$$ One can show that for any value of $\gamma>0$, any $0 < \mu < 2$, and any $\tilde x_0$, $x_k \rightarrow x^\star$ which is a solution of the minimization of $F+G$. Exercise 3 Implement the proximal and reversed-proximal mappings of $F$ (the orthogonal projector on $\Cc$ and $G$ (soft thresholding). In Matlab, use inline function with the |@| operator. In [20]: exo3() In [21]: %% Insert your code here. Value for the $0 < \mu < 2$ and $\gamma>0$ parameters. You can use other values, this might speed up the convergence. In [22]: mu = 1; gamma = 1; Exercise 4 Implement the DR iterative algorithm. Keep track of the evolution of the $\ell^1$ norm $G(x_k)$. In [23]: exo4() In [24]: %% Insert your code here. Exercise 5 Display the image reconstructed using the $P_0$ linear and $P$ CS measurements. The total number of used measurements is thus $P+P_0$. In [25]: exo5() In [26]: %% Insert your code here. 
## Compressed Sensing Reconstruction using Block Sparsity¶ In order to enhance the CS reconstruction, it is possible to use more advanced priors than plain $\ell^1$. One can for instance use a block $\ell^1$ norm $$G(x) = \sum_i \norm{x_{B_i}}$$ where $(B_i)_i$ is a disjoint segmentation of the index set $\{1,\ldots,N\}$, where $x_{B} = \{ x_i \}_{i \in B} \in \RR^{|B|}$ extracts the coefficients within $B$, and $\norm{x_B}$ is the $\ell^2$ norm. The proximal operator of this block $\ell^1$ norm is a block thresholding $$\forall \, m \in B_i, \quad \text{Prox}_{\ga G}(x)_i = \max(0, 1-\ga/\norm{x_{B_i}}) x_i.$$ We use uniform blocks of size $w \times w$. In [27]: w = 4; Blocks position and offset in the image domain. In [28]: v = 1:w:n; dv = 0:w-1; [dX,dY,X,Y] = ndgrid(dv,dv,v,v); q = size(X,3); dX = reshape(dX, [w*w q*q]); dY = reshape(dY, [w*w q*q]); X = reshape(X, [w*w q*q]); Y = reshape(Y, [w*w q*q]); Remove the block which fails outside the image. In [29]: I = find( sum(X+dX>n | Y+dY>n) ); X(:,I) = []; Y(:,I) = []; dX(:,I) = []; dY(:,I) = []; Compute the indexes of the block in $\{1,\ldots,N\}$, i.e. not in image space but over the CS coefficients space. In [30]: U = zeros(n,n); U(I0) = 1:N; Ind = X+dX + (Y+dY-1)*n; I = U(Ind); Remove the indexes that corresponds to low pass wavelet coefficients. In [31]: I(:,sum(I==0)>0) = []; A block is defined as $B_i = \{ I_{k,i} \}_{k=1}^{w^2}$. Define the energy. In [32]: G = @(x)sum( sqrt(sum(x(I).^2)) ); Just for check : display in coefficient space the block structure. In [33]: [A,tmp] = meshgrid( randperm(size(I,2)) , ones(w*w,1)); x = zeros(N,1); x(I) = A; Z = zeros(n,n); Z(I0) = x; clf; imageplot(Z); colormap jet(256); Exercise 6 define the proximal operator $\text{Prox}_{\ga G}$ of $G$, and its reversed proximal mapping. In [34]: exo6() In [35]: %% Insert your code here. Exercise 7 Implement the DR iterative algorithm. Keep track of the evolution of $G(x_k)$. In [36]: exo7() In [37]: %% Insert your code here. Exercise 8 Display the image reconstructed using the $P_0$ linear and $P$ CS measurements. In [38]: exo8() In [39]: %% Insert your code here.
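Since the notebook leaves the Douglas-Rachford iterations as exercises, here is a compact standalone sketch of the same scheme in Python rather than MATLAB, and with a small dense Gaussian $\Phi$ standing in for the scrambled-DCT operator; it is an illustration of the recipe above, not the notebook's intended solution.

```python
# Douglas-Rachford for min ||x||_1 subject to Phi x = y, following the update
# rule described in the "Douglas Rachford Scheme" section above.
import numpy as np

rng = np.random.default_rng(0)
N, P, k = 200, 80, 10
Phi = rng.normal(size=(P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)
y = Phi @ x0

# Prox of F = indicator of {x : Phi x = y} is the projection onto that affine set;
# prox of G = ||.||_1 is soft thresholding.
PhiPinv = Phi.T @ np.linalg.inv(Phi @ Phi.T)
proxF  = lambda x: x + PhiPinv @ (y - Phi @ x)
proxG  = lambda x, g: np.sign(x) * np.maximum(np.abs(x) - g, 0.0)
rproxF = lambda x: 2 * proxF(x) - x
rproxG = lambda x, g: 2 * proxG(x, g) - x

mu, gamma = 1.0, 1.0
tx = np.zeros(N)
for _ in range(500):
    tx = (1 - mu / 2) * tx + (mu / 2) * rproxG(rproxF(tx), gamma)
    x = proxF(tx)
print(np.linalg.norm(x - x0) / np.linalg.norm(x0))   # typically small: x recovers x0
```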
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9153195023536682, "perplexity": 1190.6043974070385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00523.warc.gz"}
https://clay6.com/qa/115511/in-a-mass-spectrometer-used-for-measuring-the-masses-of-ions-the-ions-are-i
# In a mass spectrometer used for measuring the masses of ions, the ions are initially accelerated by an electric potential V and then made to describe semicircular paths of radius R using a magnetic field B. If V and B are kept constant, the ratio $\frac{Charge\; on\; the ion}{mass\; of\; the\; ion}$ will be proportional to :- ( A ) $\frac{1}{R}$ ( B ) $R^2$ ( D ) $\frac{1}{R^2}$
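A sketch of the standard argument, assuming the ion starts from rest, is accelerated through the potential $V$, and is then bent into a circle of radius $R$ by the field $B$: energy conservation gives $qV = \frac{1}{2}mv^2$, while the magnetic force supplies the centripetal force, $qvB = \frac{mv^2}{R}$, so $v = \frac{qBR}{m}$. Substituting the second relation into the first gives $qV = \frac{q^2B^2R^2}{2m}$, hence $\frac{q}{m} = \frac{2V}{B^2R^2}$. With $V$ and $B$ held constant, the charge-to-mass ratio is therefore proportional to $\frac{1}{R^2}$, which matches the $\frac{1}{R^2}$ choice (D).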
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9767476916313171, "perplexity": 486.08228623233646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400208095.31/warc/CC-MAIN-20200922224013-20200923014013-00237.warc.gz"}
http://physics.stackexchange.com/questions/8289/matrix-solution-of-an-equivalent-resistance-circuit-problem
# Matrix solution of an equivalent resistance circuit problem Start with a set of points $x_1, x_2, \ldots$ that are connected by wires with some resistance. Represent the resistance by a conductance matrix (conductance being one over the resistance), where $\mathbf{C}_{ij}$ is the conductance between points $i$ and $j$, if the point are connected by a wire, otherwise the $\mathbf{C}_{ij}=0$. Can one solve for the equivalent resistance between two points by some matrix transform of $\mathbf{C}$? EDIT The comments bring up some interesting points - and suggest an alternate phrasing: Can you compute the resistance distance for a graph when the resistances are not all unit values using matrix operations? - Neat question :-) I suspect that there may be something like this, since you can use the Fourier transform (essentially an infinite-dimensional linear transformation) to solve the problem in xkcd.com/356. – David Z Apr 8 '11 at 20:58 It should probably be pointed out that $C_{ij}$ is measured when all the other resistors are absent, while the sought-for equivalent resistance $R_{ij}$ is measure when all the resistors are present. Or do you have something else in mind? – Qmechanic Apr 8 '11 at 21:18 @David: you can use Fourier transform for that because there the graph is lattice and the resistance has translational symmetry. In general the underlying graph will have no such structure. It might not even make sense to talk about embedding into $k$-dim space (which we often take for granted). – Marek Apr 8 '11 at 21:55 I think it's misleading to use the word "matrix" for this table of numbers because there is no natural linear structure on this space, as far as I can see. So the formulae to invert the conductances to resistances won't be a natural linear algebra formula - it won't be a "function" of the matrix, in particular, it won't be the inverse matrix, I guess. The most striking deviation from the "matrix logic" is that the entries $C_{ii}$ are either zero or infinite. – Luboš Motl Apr 9 '11 at 4:40 A more natural question is to first ask if there is known a general algorithm to find the equivalent reesistance for two points, given such a network. There is atleast an analogous thing for an arbitrary network of equal resistances mathworld.wolfram.com/ResistanceDistance.html – user1708 Apr 9 '11 at 4:54 Write $U_i$ for the potential at the site $i$ and $I_i$ for the external current flowing into the site $i$. Then continuity equation gives us $I_i = \sum_{j \neq i} C_{ij} (U_i - U_j)$ which can be rewritten as $I_i = \sum_j A_{ij} U_j$ with $$A_{ij} = \begin{cases} \sum_{k \neq i} C_{ij} & i = j \\ -C_{ij} & i \neq j \end{cases}$$ Now one can proceed directly to solve for $U_i$, given external current flows. But it turns out that thanks to special properties of the matrix $A$ (notice that sum entries of each row gives zero) more can be said. It turns out (read the paper for details) one can express equivalent resistance between points $k$ and $l$ as $$R_{kl} = {{\rm det}A^{(kl)} \over {\rm det} A^{(l)}}$$ where the indexed matrices are obtained by removing the said rows and columns from the matrix $A$. 
Last remark (not related directly to your question, but it would be a shame not to mention it now) is that those determinants can be interpreted naturally as spanning tree polynomials in $C_{ij}$ on the given graph $G$ (with or without the $(kl)$ edge), and this in turn can be computed directly from the partition function of the $q \to 0$ limit of the $q$-state Potts model on the said graph $G$ with weights on the edges related to their resistances.
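To make the determinant formula concrete, here is a small NumPy sketch (my own illustration, not part of the original thread; the function name and the test networks are invented for the example). It builds the weighted Laplacian $A$ from the conductance matrix and evaluates $R_{kl} = \det A^{(kl)} / \det A^{(l)}$.

```python
import numpy as np

def effective_resistance(C, k, l):
    """Equivalent resistance between nodes k and l of a resistor network.

    C is the symmetric conductance matrix (C[i, j] = 1/R_ij, 0 if no wire).
    Uses R_kl = det A^(kl) / det A^(l), where A is the weighted Laplacian
    and A^(...) means deleting the listed rows and columns.
    """
    C = np.asarray(C, dtype=float)
    A = np.diag(C.sum(axis=1)) - C                      # weighted Laplacian
    keep_l = [i for i in range(len(A)) if i != l]
    keep_kl = [i for i in keep_l if i != k]
    det_l = np.linalg.det(A[np.ix_(keep_l, keep_l)])
    det_kl = np.linalg.det(A[np.ix_(keep_kl, keep_kl)]) if keep_kl else 1.0
    return det_kl / det_l

# 1 ohm and 2 ohm in parallel between nodes 0 and 1: expect 2/3 ohm
C_par = np.array([[0.0, 1.5],
                  [1.5, 0.0]])
print(effective_resistance(C_par, 0, 1))                # 0.666...

# 1 ohm and 2 ohm in series, 0 -- 1 -- 2: expect 3 ohm between 0 and 2
C_ser = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.5],
                  [0.0, 0.5, 0.0]])
print(effective_resistance(C_ser, 0, 2))                # 3.0
```

For larger networks the same numbers can also be read off the Moore–Penrose pseudoinverse $L^{+}$ of the Laplacian via $R_{kl} = L^{+}_{kk} + L^{+}_{ll} - 2L^{+}_{kl}$, which avoids forming determinants explicitly.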
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9262612462043762, "perplexity": 260.6825851656038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860113541.87/warc/CC-MAIN-20160428161513-00097-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.freemathhelp.com/forum/threads/93320-Chain-rule-partial-derivative?p=381970
# Thread: Chain rule partial derivative

1. ## Chain rule partial derivative

(1 pt) Suppose that $\,x(s,\, t)\, =\, -4s^2\, -\, 2t^2,\,$ and that $\,y\,$ is a function of $\,(s,\, t)\,$ with $\, y(1,\, 1)\, =\, 1\,$ and $\, \dfrac{\partial y}{\partial t}\, (1,\, 1)\, =\, -2.$ Suppose that $\, u\, =\, xy,\,$ and that $\, v\,$ is a function of $\, x,\, y\,$ with $\, \dfrac{\partial v}{\partial y}\, (-6,\, 1)\, =\,4.$ Now suppose that $\, f(s,\,t)\, =\, u(x(s,\, t),\, y(s,\, t))\,$ and $\, g(s,\, t)\, =\, v(x(s,\, t),\, y(s,\, t)).\,$ You are given: . . . . .$\dfrac{\partial f}{\partial s}\, (1,\, 1)\, =\, -32,\,$. . .$\dfrac{\partial f}{\partial t}\, (1,\, 1)\, =\, 8,\,$. . .$\dfrac{\partial g}{\partial s}\, (1,\, 1)\, =\, -16.$ The value of $\, \dfrac{\partial g}{\partial t}\, (1,\, 1)\,$ must be:

dv/dt = dv/dx*dx/dt + dv/dy*dy/dt
dx/dt = -4t -> evaluate at (1,1) = -4
dv/dt = -4 dv/dx + 4(-2)
dv/dt = -4 dv/dx - 8

How can I find the missing dv/dx in order to get a value for dv/dt? Thanks!
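One way to pin down the missing piece (this is my own sketch, not part of the original thread, so please double-check the arithmetic): since $u = xy$ we know $\dfrac{\partial u}{\partial x} = y$ and $\dfrac{\partial u}{\partial y} = x$, and at $(s,t) = (1,1)$ we have $x = -6$, $y = 1$, $\dfrac{\partial x}{\partial s} = -8$. The chain rule for $f$ in the $s$ direction gives

$-32 \,=\, \dfrac{\partial f}{\partial s}(1,1) \,=\, y\,\dfrac{\partial x}{\partial s} + x\,\dfrac{\partial y}{\partial s} \,=\, -8 \,-\, 6\,\dfrac{\partial y}{\partial s}(1,1), \quad\text{so}\quad \dfrac{\partial y}{\partial s}(1,1) = 4.$

Feeding that into the chain rule for $g$ in the $s$ direction,

$-16 \,=\, \dfrac{\partial g}{\partial s}(1,1) \,=\, \dfrac{\partial v}{\partial x}(-6,1)\cdot(-8) + 4\cdot 4, \quad\text{so}\quad \dfrac{\partial v}{\partial x}(-6,1) = 4,$

which is exactly the value needed in the last line of the working above: $\dfrac{\partial g}{\partial t}(1,1) = -4\cdot 4 - 8 = -24.$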
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9967904090881348, "perplexity": 1214.2520243835459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863967.46/warc/CC-MAIN-20180521063331-20180521083331-00292.warc.gz"}
http://tmdag.com/vopraytracer-pt5/
# Houdini VOP raytracer part 5 Posted on October 2, 2013

# Specular Highlight

A specular highlight is a bright spot of light that appears on shiny objects when illuminated. The term specular means that light is perfectly reflected in a mirror-like way from the light source to the viewer. We are going to cover two specular reflection models.

## Phong reflection model

$K{\tiny spec} = \big\|R\big\| \big\|V\big\| \cos^n \beta =(\hat{R} \cdot \hat{V})^n$

where $\hat{R}$ is the normalised mirror reflection of the light vector off the surface, and $\hat{V}$ is the normalised viewpoint vector. The number $^n$ is called the Phong exponent, and is a user-chosen value that controls the apparent smoothness of the surface. When calculating $(\hat{R} \cdot \hat{V})$ we will again get negative values, like we had in the diffuse calculation, so we have to clamp it: $max(0,(\hat{R} \cdot \hat{V}))$. You can see in the example above that we also get a specular highlight on the back of our sphere. There are a few ways to fix that issue: for example, we can multiply the sample by the shadow that we have already calculated (shadowed area == 0, non-shadowed == 1), or make another dot product test with the normal vector.

Below is a Phong model representation in Houdini nodes. First, calculate the reflection vector $\hat{R}$. As the law of reflection states, the direction of the incoming light (the incident ray) and the direction of the outgoing reflected light (the reflected ray) make the same angle with respect to the surface normal. In this situation the reflect VEX node comes in handy. It takes as input the vector to be reflected – in our case the vector of light towards the surface (it is very important to check your vector direction) – and the normalised surface normal vector. As output we get the reflected vector $R$, which should be normalised ($\hat{R}$). Vector $\hat{V}$ is the normalised Houdini ("I") vector towards the eye (camera). After calculating the dot product between those two vectors, $(\hat{R} \cdot \hat{V})$, we need to make sure that we clamp negative values. I use a clamp VEX node instead of maxing with zero, as I know that the dot product of normalised vectors won't exceed a value of 1. The last part is to use a power function to control the exponent and a simple multiplication for the overall intensity.

## Phong-Blinn reflection model

$K{\tiny spec} = \big\|N\big\| \big\|H\big\| \cos^n \beta =(\hat{N} \cdot \hat{H})^n$

where $\hat{N}$ is the normalised smooth surface normal, and $\hat{H}$ is the half-angle direction (the direction vector midway between L, the vector to the light, and V, the viewpoint vector). The first problem we find here is that our halfway vector $\hat{H}$ will jump as soon as the angle between L and V is larger than 180°. Here is an example with non-clamped values for a light rotating by 360°.

H – half-angle direction between L and V
N – smooth surface normal
R – mirror reflection of the light vector
L – light vector towards the surface
V – viewpoint vector (eye/camera)

Here you can see both specular models with clamped values. As you may have noticed, the Blinn reflection model gives us a wider specular highlight, as the angle between R and V changes more aggressively than the one between N and H. We can compensate for the difference by adjusting the exponent. Maxing (or clamping, in this example) the dot product is very important, as we would get an undesirable effect when raising negative values to the exponent. First let's calculate the $\hat{H}$ vector between the viewer and light-source vectors.
$H = \frac{L + V} {|L + V|}$

The halfway vector ${H}$ equals the sum of the vectors ${L}$ and ${V}$ divided by the length of their sum. In many internet examples you will see the calculation of the halfway vector presented like this:

$H = \frac{L + V} {2}$

This is not the mathematically correct way of calculating the ${H}$ vector, but in our case the vectors ${L}$ and ${V}$ are normalised (their length is equal to 1.0), so the sum of their lengths will equal 2. The next step is a dot product calculation with the normalised surface normal. The rest of the nodes are exactly the same as in the Phong model. To get rid of the jumping $\hat{H}$ vector, we can add a second dot product calculation for the negated halfway vector.

## Mantra shader version of specular model

Mantra has a built-in specular VEX node with a few specular models. To create your own model, we need an Illuminance Loop VEX node that will iterate through all active lights in the scene file. To get the $L$ vector, you can create a Global Variables VEX node (the Global Variables VEX node is different inside and outside an illuminance loop). The $L$ vector provided by the global variables is a "Direction From Surface to Light", so we need to negate it (reverse its direction). The global $I$ vector (eye) is a "Direction From Eye to Surface" (the equivalent of our $V$ – viewpoint vector), and it also needs to be negated for a proper dot product calculation. This setup should produce exactly the same effect as the built-in Phong model from the specular VEX node.
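For readers without Houdini at hand, here is a small NumPy sketch of the two formulas above (my own illustration, not the author's VOP network; the convention that N, L and V are unit vectors pointing away from the surface is an assumption of the sketch).

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_specular(N, L, V, exponent):
    """Phong: mirror-reflect L about N, then dot the reflection with V."""
    R = normalize(2.0 * np.dot(N, L) * N - L)
    return max(0.0, np.dot(R, V)) ** exponent

def blinn_specular(N, L, V, exponent):
    """Blinn: dot the surface normal with the normalised half-angle vector."""
    H = normalize(L + V)
    return max(0.0, np.dot(N, H)) ** exponent

N = normalize(np.array([0.0, 1.0, 0.0]))      # surface normal
L = normalize(np.array([1.0, 1.0, 0.0]))      # surface -> light
V = normalize(np.array([-1.0, 1.0, 0.0]))     # surface -> eye (mirror setup)
print(phong_specular(N, L, V, 30))            # 1.0, perfect mirror alignment
print(blinn_specular(N, L, V, 30))            # 1.0 as well
```

With the same exponent the Blinn term falls off more slowly away from the mirror direction, which matches the wider highlight mentioned above.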
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 29, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114871978759766, "perplexity": 1977.2671143443538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463611569.86/warc/CC-MAIN-20170528220125-20170529000125-00367.warc.gz"}
https://efinancemanagement.com/economics/law-of-diminishing-marginal-utility
# Law of Diminishing Marginal Utility

## What is the Law of Diminishing Marginal Utility?

The law of diminishing marginal utility is an economic concept that helps to explain human buying behavior. As per this law, the amount of satisfaction from consuming every additional unit of a good or service drops as we increase the total consumption. Or, we can say that as consumption increases, the additional or marginal utility goes down with each additional unit.

In economics, the term utility refers to satisfaction or happiness. Total utility is also an economic term; it is the total satisfaction from all the units of a product or service consumed. Marginal utility is the change (increase or decrease) in the total utility after consuming an extra unit of the good or service.

## Explanation of Law of Diminishing Marginal Utility

In general, consuming the first unit of a commodity gives us the highest enjoyment or happiness. But as we consume more of the same commodity, after some time, we don't feel the urge to consume more of it. This is what the law of diminishing marginal utility is all about. The first unit of a product gives the highest level of utility, but the marginal utility drops with each subsequent intake of the commodity. Moreover, when consumers compare the marginal utility from spending the same dollars on various commodities, this is called the law of equi-marginal utility, which is an extension of the law of diminishing marginal utility.

We can also use a graph to explain the law of diminishing marginal utility. In the graph below, the X-axis shows the number of units of a commodity that a consumer consumes, and the Y-axis shows the marginal utility of each unit. The graph below shows the total utility and the marginal utility curves. The graph clearly shows that the total utility is maximum when the marginal utility is zero. This is because the MU is the slope of the total utility curve. After zero MU, the total utility starts to drop, and MU goes negative. A negative marginal utility implies that consuming more units will lead to dissatisfaction for the consumer.

The law of diminishing marginal utility also has a direct relation with the concept of diminishing prices. As the marginal utility of a good goes down, a consumer will be willing to pay a lower price for the product or service. For example, a consumer is willing to pay $20 for the first burger, but since they are no longer as hungry after the first burger, they will be willing to pay a lower amount for the second burger.

## Example of Law of Diminishing Marginal Utility

Let us understand the law of diminishing MU with the help of a simple example. Mr. A is very hungry, and he goes to a pizza store where he buys a very large pizza with 6 slices. The first slice of pizza gives him the maximum satisfaction, say 10. However, with the subsequent slices, Mr. A's satisfaction keeps falling as his stomach fills up. Or, we can say that the marginal utility for Mr. A diminishes as he consumes more pizza slices. The following table shows Mr. A's total utility and marginal utility.

## Law of Diminishing Marginal Utility: Assumptions

The law of diminishing marginal utility makes the following assumptions:

• Consumers need to behave rationally to maximize their utility with their limited income. It means that a consumer always makes sound decisions.
• Consumers need to continuously consume each extra unit of a commodity. It means that there shouldn't be a pause between consuming one unit and the next.
This is because if there is a gap between the consumption of two units, then the marginal utility may not drop. For example, in the pizza example above, if Mr. A eats the third slice after a break or when he feels hungry again, the marginal utility will increase in this case.
• It is crucial that each unit of a product is standardized. This implies that the size, quality, and volume of each unit should be the same. In case of a deviation, the marginal utility may change. In the pizza example, if the second slice is smaller than the first one, then it is possible that Mr. A would get the same level of satisfaction as from the first.
• One can easily measure utility. Also, a consumer can state their satisfaction level in absolute values, such as 1, 2, 3, etc.
• A consumer must consume the product in a reasonable quantity. For example, if a thirsty person drinks water with a spoon, then every additional spoon will increase the satisfaction level.
• The marginal utility of money remains constant. Usually, after spending money on the first unit, consumers are left with less money. The remaining money is generally dearer to the consumer. This would increase the MU of money for the consumer. But the law assumes there is no change in the MU of money.
• The income of the consumer and the price of the commodity don't change.
• There is no change in the taste and fashion of the consumer.

## Exceptions

It is important to note that the law of diminishing marginal utility does not always hold. There are a few scenarios when this law doesn't hold, and these are:

• In the case of addictions or hobbies, this law doesn't hold. For example, a person who loves to paint may not witness a drop in marginal utility after a new painting. Similarly, for an alcoholic, an extra glass of alcohol may not decrease the marginal utility.
• Items that are rare or valuable are also an exception to this law. For example, if someone loves to collect a limited-edition watch, then they could continue collecting such items indefinitely without any drop in the marginal utility.

Apart from these two exceptions, there is criticism against this law as well. The criticism is that the assumptions of the law of diminishing marginal utility may not always hold. For instance, a consumer may not always make a rational decision, and there can also be cases when there is a considerable gap between the consumption of two units of a commodity.

## Final Words

Despite the exceptions and criticism, the law of diminishing marginal utility is a very popular economics concept. Economists and experts widely use this law to explain why consumers get less satisfaction with every additional unit. Moreover, companies also use this concept to increase the marginal utility of their products and services for consumers. This, in turn, helps companies to increase their sales.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8717380166053772, "perplexity": 679.1355748435956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00410.warc.gz"}
https://arxiv-export-lb.library.cornell.edu/abs/2110.11922v1
gr-qc (what is this?) # Title: A singularity theorem for evaporating black holes Abstract: The classical singularity theorems of General Relativity rely on energy conditions that are easily violated by quantum fields. Here, we provide motivation for an energy condition obeyed in semiclassical gravity: the smeared null energy condition (SNEC), a proposed bound on the weighted average of the null energy along a finite portion of a null geodesic. Using SNEC as an assumption we proceed to prove a singularity theorem. This theorem extends the Penrose singularity theorem to semiclassical gravity and has interesting applications to evaporating black holes. Comments: Contribution to the Proceedings of the 16th Marcel Grossmann Meeting (MG16), 9 pages, 2 figures. arXiv admin note: text overlap with arXiv:2012.11569 Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th) Cite as: arXiv:2110.11922 [gr-qc] (or arXiv:2110.11922v1 [gr-qc] for this version) ## Submission history From: Eleni-Alexandra Kontou [view email] [v1] Fri, 22 Oct 2021 17:06:43 GMT (915kb,D) Link back to: arXiv, form interface, contact.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8307042717933655, "perplexity": 3098.9484318134614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304515.74/warc/CC-MAIN-20220124054039-20220124084039-00198.warc.gz"}
http://tex.stackexchange.com/questions/2132/how-to-define-a-command-that-takes-more-than-9-arguments?answertab=votes
# How to define a command that takes more than 9 arguments

I have a mathematical transformation that takes 16 parameters (grouped into 3+8+5) and would like to make a LaTeX command for it, so that I can easily change the notation for it if the need arises. As far as I know, both \def and \newcommand take a maximum of 9 arguments, is there any (recommended) way to extend this? -

Perhaps you might show us the detail of what is wanted. This sounds like a question where the best answer will be to think carefully about the input you really require. – Joseph Wright Aug 21 '10 at 6:43
I edited the question to make it clear the parameters are not programmatic, but rather, an unavoidable part of the maths that I'm using. – Simon Aug 21 '10 at 7:02
I wonder if there's a magic solution involving Currying. – Seamus Feb 21 '13 at 16:19

You are going to have to parse the arguments some at a time and store them into temporary registers or macros. For example

\newcommand\foo[9]{%
  \def\tempa{#1}%
  \def\tempb{#2}%
  \def\tempc{#3}%
  \def\tempd{#4}%
  \def\tempe{#5}%
  \def\tempf{#6}%
  \def\tempg{#7}%
  \def\temph{#8}%
  \def\tempi{#9}%
  \foocontinued
}
\newcommand\foocontinued[7]{%
  % Do whatever you want with your 9+7 arguments here.
}

-

Thanks TH - that's the same solution as supplied in the "black TeX magic" link provided by mindcorrosive. I think that I'll use the xargs package, since it will make my code clearer and I like the simple default arguments. – Simon Aug 21 '10 at 6:55

There's the xargs package, and there's also some black TeX magic. As for myself, being conditioned in Python, I prefer the key-value parameter syntax provided by the keyval/xkeyval packages. On an unrelated note, if I find myself needing more than 9 parameters, that usually means that my macro/def/code organization is not very good, and I'd try to improve that first. But of course, there are legitimate situations where 9 parameters are perfectly okay --- especially if you try to build a definition with a lot of knobs and tweaks. -

Thanks, I don't know how my googling did not turn up the first option you gave. The 16 parameters define a nonlinear transformation - they're not options in the macro. – Simon Aug 21 '10 at 6:51
Actually, xargs does not allow more than 9 arguments - it only gives a neat interface for optional arguments. I'll have to use the TeX hack. – Simon Aug 21 '10 at 7:10
That's correct. Until you clarified what you need so many parameters for, I assumed it's for a macro, and you'd use the keyval interface. But of course in that case it's better with plain TeX. – Martin Tapankov Aug 21 '10 at 7:15

In a response to How to use variables inside a command when generating a table? I mention how the stringstrings package has a \getargs command that will parse large numbers of arguments that are passed within a single { }. To recap that reply,

\documentclass{article}
\usepackage{stringstrings}
\begin{document}
\getargs{1 2 3 4 5 6 7 8 9 10 11 12 FinalArgument}
There are \narg~arguments. The thirteenth is \argxiii
\end{document}

The result of this example is:

There are 13 arguments. The thirteenth is FinalArgument

EDIT: A much more efficient version of \getargs is available in the readarray package and called \getargsC (in deference to David Carlisle's help). Thus, the same task can be accomplished more quickly with

\documentclass{article}
\begin{document}
\getargsC{1 2 3 4 5 6 7 8 9 10 11 12 FinalArgument}
There are \narg~arguments.
The thirteenth is \argxiii
\end{document}

-

Since it's a different technique, I also present the following: local macro definitions.

\documentclass{article}
\def\NineteenArgs#1#2#3#4#5#6#7#8#9{%
  \def\ArgsTenAndFurther##1##2##3##4##5##6##7##8##9{%
    \def\ArgNineteen####1{%
      ####1##9##8##7##6##5##4##3##2##1#9#8#7#6#5#4#3#2#1%
    }%
    \ArgNineteen%
  }%
  \ArgsTenAndFurther%
}
\begin{document}
%1234567890123456789
\NineteenArgs abcdefghijklmnopqrs
\end{document}

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8253761529922485, "perplexity": 1292.9858410082097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988458.74/warc/CC-MAIN-20150728002308-00095-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.lessonplanet.com/teachers/expanded-form-e-4th-6th
Expanded Form [E] In this expanded form worksheet, students write a set of 12 numbers in expanded form; all numbers are 3 digits and answers are included on page 2.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9472760558128357, "perplexity": 2249.574457949731}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689779.81/warc/CC-MAIN-20170923213057-20170923233057-00303.warc.gz"}
http://www.physicsgre.com/viewtopic.php?f=19&t=5588
## A discussion problem of thermodynamics from University Physics

Himanshu_Shukla Posts: 5 Joined: Sat Jul 19, 2014 10:42 am

### A discussion problem of thermodynamics from University Physics

Q19.17. The prevailing winds on the Hawaiian island of Kauai blow from the northeast. The winds cool as they go up the slope of Mt. Waialeale (elevation 1523 m), causing water vapor to condense and rain to fall. There is much more precipitation at the summit than at the base of the mountain. In fact, Mt. Waialeale is the rainiest spot on earth, averaging 11.7 m of rainfall a year. But what makes the winds cool?

Q19.18. Applying the same considerations as in Question 19.17, explain why the island of Niihau, a few kilometers to the southwest of Kauai, is almost a desert and farms there need to be irrigated.

blighter Posts: 256 Joined: Thu Jan 26, 2012 6:30 pm

### Re: A discussion problem of thermodynamics from University Physics

Himanshu_Shukla wrote:Q19.17. The prevailing winds on the Hawaiian island of Kauai blow from the northeast. The winds cool as they go up the slope of Mt. Waialeale (elevation 1523 m), causing water vapor to condense and rain to fall. There is much more precipitation at the summit than at the base of the mountain. In fact, Mt. Waialeale is the rainiest spot on earth, averaging 11.7 m of rainfall a year. But what makes the winds cool?

Q19.18. Applying the same considerations as in Question 19.17, explain why the island of Niihau, a few kilometers to the southwest of Kauai, is almost a desert and farms there need to be irrigated.

Chiron Posts: 9 Joined: Tue Sep 01, 2015 10:37 am

### Re: A discussion problem of thermodynamics from University Physics

I know that this is an older post, but in the hopes that others may also read this I'm posting my take on these questions.

I believe that for the first question one way to look at this is that the hot air will rise. However, as it rises the pressure decreases. Thus, treating the gas as ideal and using the equation $PV = N k_B T$, we see that if the volume of the air stays relatively constant and the pressure decreases, then the temperature must decrease as well.

For the second problem, at least one way to look at it is that much of the moisture has already been expelled. However, using the same principle as before, as the air goes down in elevation we see that the pressure increases, thus causing the temperature to increase. Therefore, the water vapor will not condense, and you will find a lack of rain.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 1, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8644367456436157, "perplexity": 1545.469791247763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00540-ip-10-171-10-70.ec2.internal.warc.gz"}
https://arxiv.org/abs/1703.05358
astro-ph.SR (what is this?) # Title: Unstable standard candles. Periodic light curve modulation in fundamental mode classical Cepheids Authors: R. Smolec Abstract: We report the discovery of periodic modulation of pulsation in 51 fundamental mode classical Cepheids of the Magellanic Clouds observed by the Optical Gravitational Lensing Experiment. Although the overall incidence rate is very low, about 1 per cent in each of the Magellanic Clouds, in the case of the SMC and pulsation periods between 12 and 16d the incidence rate is nearly 40 per cent. On the other hand, in the LMC the highest incidence rate is 5 per cent for pulsation periods between 8 and 14d, and the overall amplitude of the effect is smaller. It indicates that the phenomenon is metallicity dependent. Typical modulation periods are between 70 and 300d. In nearly all stars the mean brightness is modulated, which, in principle, may influence the use of classical Cepheids for distance determination. Fortunately, the modulation of mean brightness does not exceed 0.01 mag in all but one star. Also, the effect averages out in typical observations spanning a long time base. Consequently, the effect of modulation on the determination of the distance moduli is negligible. The relative modulation amplitude of the fundamental mode is also low and, with one exception, it does not exceed 6 per cent. The origin of the modulation is unknown. We draw a hypothesis that the modulation is caused by the 2:1 resonance between the fundamental mode and the second overtone that shapes the famous Hertzsprung bump progression. Comments: 13 pages, 14 figures, accepted for publication in MNRAS Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Cosmology and Nongalactic Astrophysics (astro-ph.CO) Journal reference: MNRAS, 468(4): 4299-4310 (2017) DOI: 10.1093/mnras/stx679 Cite as: arXiv:1703.05358 [astro-ph.SR] (or arXiv:1703.05358v1 [astro-ph.SR] for this version)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067738056182861, "perplexity": 1771.5290377947747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320491.13/warc/CC-MAIN-20170625115717-20170625135717-00110.warc.gz"}
http://physics.stackexchange.com/questions/4147/covariant-description-of-light-scattering-at-a-fastly-rotating-cylinder/22591
# Covariant Description of Light Scattering at a Rapidly Rotating Cylinder

Let us consider the following Gedankenexperiment: A cylinder rotates symmetrically around the $z$ axis with angular velocity $\Omega$ and a plane wave with $\mathbf{E}\text{, }\mathbf{B} \propto e^{\mathrm{i}\left(kx - \omega t \right)}$ gets scattered by it. We assume to know the isotropic permittivity $\epsilon(\omega)$ and permeability $\mu(\omega)$ of the cylinder's material at rest. Furthermore, the cylinder is infinitely long in $z$-direction. The static problem ($\Omega = 0$) can be treated in terms of Mie Theory - here, however, one will need a covariant description of the system for very fast rotations (which are assumed to be possible) causing nontrivial transformations of $\epsilon$ and $\mu$. Hence my question:

### What is the scattering response to a plane wave on a rapidly rotating cylinder?

-

What is "infinite" on that cylinder? – Georg Jan 29 '11 at 13:35
Thank you @Georg for pointing out the misleading formulation. I mean infinitely in $z$-direction. I will change it in a second :) Greets – Robert Filter Jan 29 '11 at 13:38
@Carl: You might consider that there are still some things in classical electrodynamics which are somehow basic but not standard homework problems. To my mind, the covariant description of electrodynamics in media belongs to this class. Greets – Robert Filter Jan 30 '11 at 12:24
Robert, it might be useful to begin with the case for light impinging on a moving half-infinite medium (i.e. an infinite plane dividing space into two different materials). That case solves trivially (just boost the case for non-moving material), and can be summed up (I think) to give a limiting case for the rotating cylinder (in the limit of small wavelength). But it's been 30 years since I took E&M and it was never my "best" subject. – Carl Brannen Jan 30 '11 at 23:51
@Carl: Thank you for the hint. The difference to the reflection problem at a half-space is the rotational character of the system. One attempt to solve the problem is to go into a co-rotating coordinate system and transform the plane wave accordingly - in this case I am not sure if such a framework is physically correct. The other way would be to just covariantly transform the medium - this is much more general since we would learn about the special relativistic relation of $\epsilon$ and $\mu$. Greets – Robert Filter Jan 31 '11 at 9:13

First of all, I don't quite understand the following phrase: "The static problem (Ω=0) can be treated in terms of Mie Theory". The Mie theory is for diffraction on a homogeneous sphere, not a cylinder. The complete solution of the problem of diffraction of electromagnetic waves on an infinite homogeneous cylinder was obtained in J. R. Wait, Can. Journ. of Phys. 33, 189 (1955) (or you may find the outline of Wait's solution for a cylindrical wave in http://arxiv.org/abs/physics/0405091 , Section III). This solution is rather complex, so I suspect your problem can only be solved numerically, as it seems significantly more complex. Wait's problem is a special case of your problem, so the solution of the latter problem cannot be simpler than Wait's solution. In particular, it seems advisable to expand your plane wave into cylindrical waves, following Wait. It seems that the material equations for the rotating cylinder can be obtained following http://arxiv.org/abs/1104.0574 (Am. J. Phys. 78, 1181 (2010)).
However, the cylinder will not be homogeneous (the material properties will depend on the distance from the axis and may be anisotropic). I suspect the problem can be solved using numerical solution of an ordinary differential equation for the parameters of the cylindrical waves. -

Can you at least solve the problem analytically for some special cases where still $\Omega\neq 0$? – Alexey Bobrick Mar 18 '12 at 15:10
Probably. For example, for a perfectly conducting cylinder, the radiation will not penetrate significantly into the cylinder, so the problem would be pretty much equivalent to that for a homogeneous cylinder. This case may look relatively trivial though. Anyway, I am afraid I don't have much time or motivation to solve this problem. For example, I am not enthusiastic about studying the Am. J. Phys. article trying to determine the electric properties of the rotating cylinder. With all due respect, the author of the question may be in a better position to do that. – akhmeteli Mar 18 '12 at 17:19
Thank you @akmeteli for your input. I however think that the determination of the properties of the rotating cylinder is at the core of this problem - how do $\epsilon$ and $\mu$ transform? Greets – Robert Filter Apr 22 '12 at 10:06
@Robert Filter: I agree. However, this issue is discussed, e.g., in the Am. J. Phys. article I cited. I am not sure though that it would be possible to find an exact solution of the diffraction problem for the inhomogeneous cylinder (or the exact solution can be too complex to be useful). – akhmeteli Apr 22 '12 at 16:07

Look here for some details: Some remarks on scattering by a rotating dielectric cylinder. Also articles that cite them. -
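As a small, self-contained illustration of the "expand the plane wave into cylindrical waves" step suggested above (my own sketch, not part of the thread; it only checks the Jacobi–Anger expansion numerically and says nothing about the full scattering problem):

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind J_n

# plane wave e^{i k x} at a point (r, phi), with x = r cos(phi)
k, r, phi = 2.0, 1.3, 0.7
exact = np.exp(1j * k * r * np.cos(phi))

# Jacobi-Anger expansion: e^{i k r cos(phi)} = sum_n i^n J_n(k r) e^{i n phi}
N = 30
series = sum((1j) ** n * jv(n, k * r) * np.exp(1j * n * phi)
             for n in range(-N, N + 1))

print(abs(exact - series))     # ~1e-16, the truncated series converges fast
```

The scattering calculation then amounts to matching such cylindrical modes across the cylinder boundary, which is where the transformed $\epsilon$ and $\mu$ of the rotating medium would enter.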
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.863249659538269, "perplexity": 329.8814125801043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398471441.74/warc/CC-MAIN-20151124205431-00340-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathhelpforum.com/discrete-math/140159-subscripts-p.html
# Math Help - Subscripts with a P?

1. ## Subscripts with a P?

I'm really sorry if this doesn't belong here. I just registered and I'm new. I don't know if it belongs here because I haven't seen it before, so I don't know what classification of math it falls under. The problem is, 'By how much does 6P3 exceed 6P2?'

2. Originally Posted by DPooch I'm really sorry if this doesn't belong here. I just registered and I'm new. I don't know if it belongs here because I haven't seen it before, so I don't know what classification of math it falls under. The problem is, 'By how much does 6P3 exceed 6P2?' $^nP_k = \frac{n!}{(n-k)!}$ Therefore $^6P_3-^6P_2 = \frac{6!}{(6-3)!}- \frac{6!}{(6-2)!}$ Can you finish it?

3. Originally Posted by pickslides $^nP_k = \frac{n!}{(n-k)!}$ Therefore $^6P_3-^6P_2 = \frac{6!}{(6-3)!}- \frac{6!}{(6-2)!}$ Can you finish it? 720/6 - 720/24 = 120 - 30 = 90? Thanks for the help. Though, what is this called and what section should it be in?

4. Originally Posted by DPooch Though, what is this called and what section should it be in? It's fine where it is but more correctly could be put in "Basic Statistics and Probability" or even "Discrete Mathematics" Moderator edit: Moved to Discrete Maths.
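For reference, $^nP_k$ counts permutations (ordered selections of $k$ items from $n$), and the arithmetic above is easy to check in a couple of lines of Python (my own addition, not part of the original thread; math.perm needs Python 3.8 or newer):

```python
from math import perm, factorial

print(perm(6, 3), perm(6, 2))              # 120 30
print(perm(6, 3) - perm(6, 2))             # 90
print(factorial(6) // factorial(6 - 3))    # 120, i.e. 6!/(6-3)!
```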
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9017917513847351, "perplexity": 1746.3041087340548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548655.55/warc/CC-MAIN-20141224185908-00081-ip-10-231-17-201.ec2.internal.warc.gz"}
https://regularize.wordpress.com/2019/07/
In my previous post I illustrated why it is not possible to compute the Jordan canonical form numerically (i.e. in floating point numbers). The simple reason: For every matrix ${A}$ and every ${\epsilon>0}$ there is a matrix ${A_{\epsilon}}$ which differs from ${A}$ by at most ${\epsilon}$ (e.g. in every entry – but all norms for matrices are equivalent, so this does not really play a role) such that ${A_{\epsilon}}$ is diagonalizable. So why should you bother about computing the Jordan canonical form anyway? Or even learning or teaching it? Well, the prime application of the Jordan canonical form is to calculate solutions of linear systems of ODEs. The equation

$\displaystyle y'(t) = Ay(t),\quad y(0) = y_{0}$

with matrix ${A\in {\mathbb R}^{n\times n}}$ and initial value ${y_{0}\in{\mathbb R}^{n}}$ (both could also be complex). This system has a unique solution which can be given explicitly with the help of the matrix exponential as

$\displaystyle y(t) = \exp(At)y_{0}$

where the matrix exponential is

$\displaystyle \exp(At) = \sum_{k=0}^{\infty}\frac{A^{k}t^{k}}{k!}.$

It is not always simple to work out the matrix exponential by hand. The straightforward way would be to calculate all the powers of ${A}$, weight them by ${1/k!}$ and sum the series. This may be a challenge, even for simple matrices. My favorite example is the matrix

$\displaystyle A = \begin{bmatrix} 0 & 1\\ 1 & 1 \end{bmatrix}.$

Its first powers are

$\displaystyle A^{2} = \begin{bmatrix} 1 & 1\\ 1 & 2 \end{bmatrix},\quad A^{3} = \begin{bmatrix} 1 & 2\\ 2 & 3 \end{bmatrix}$

$\displaystyle A^{4} = \begin{bmatrix} 2 & 3\\ 3 & 5 \end{bmatrix},\quad A^{5} = \begin{bmatrix} 3 & 5\\ 5 & 8 \end{bmatrix}.$

You may notice that the Fibonacci numbers appear (and this is pretty clear on a second thought). So, finding an explicit form for ${\exp(A)}$ leads us to finding an explicit form for the ${k}$-th Fibonacci number (which is possible, but I will not treat this here). Another way is diagonalization: If ${A}$ is diagonalizable, i.e. there is an invertible matrix ${S}$ and a diagonal matrix ${D}$ such that

$\displaystyle S^{-1}AS = D\quad\text{or, equivalently}\quad A = SDS^{-1},$

you see that

$\displaystyle \exp(At) = S\exp(Dt)S^{-1}$

and the matrix exponential of a diagonal matrix is simply the exponential function applied to the diagonal entries. But not all matrices are diagonalizable! The solution that is usually presented in the classroom is to use the Jordan canonical form instead and to compute the matrix exponential of Jordan blocks (using that you can split a Jordan block ${J = D+N}$ into the sum of a diagonal matrix ${D}$ and a nilpotent matrix ${N}$ and since ${D}$ and ${N}$ commute one can calculate ${\exp(J) = \exp(D)\exp(N)}$ and both matrix exponentials are quite easy to compute). But in light of the fact that there are diagonalizable matrices arbitrarily close to any matrix, one may ask: What about replacing a non-diagonalizable matrix ${A}$ with a diagonalizable one (with a small error) and then using this one? Let's try this on a simple example: We consider

$\displaystyle A = \begin{bmatrix} -1 & 1\\ 0 & -1 \end{bmatrix}$

which is not diagonalizable.
The linear initial value problem

$\displaystyle y' = Ay,\quad y(0) = y_{0}$

has the solution

$\displaystyle y(t) = \exp( \begin{bmatrix} -t & t\\ 0 & -t \end{bmatrix}) y_{0}$

and the matrix exponential is

$\displaystyle \begin{array}{rcl} \exp( \begin{bmatrix} -t & t\\ 0 & -t \end{bmatrix}) & = &\exp(\begin{bmatrix} -t & 0\\ 0 & -t \end{bmatrix})\exp(\begin{bmatrix} 0 & t\\ 0 & 0 \end{bmatrix})\\& = &\begin{bmatrix} \exp(-t) & 0\\ 0 & \exp(-t) \end{bmatrix}\begin{bmatrix} 1 & t\\ 0 & 1 \end{bmatrix}\\ &=& \begin{bmatrix} \exp(-t) & t\exp(-t)\\ 0 & \exp(-t) \end{bmatrix}. \end{array}$

So we get the solution

$\displaystyle y(t) = \begin{bmatrix} e^{-t}(y^{0}_{1} + ty^{0}_{2})\\ e^{-t}y^{0}_{2} \end{bmatrix}.$

Let us take a close-by matrix which is diagonalizable. For some small ${\epsilon}$ we choose

$\displaystyle A_{\epsilon} = \begin{bmatrix} -1 & 1\\ 0 & -1+\epsilon \end{bmatrix}.$

Since ${A_{\epsilon}}$ is upper triangular, it has its eigenvalues on the diagonal. Since ${\epsilon\neq 0}$, there are two distinct eigenvalues and hence, ${A_{\epsilon}}$ is diagonalizable. Indeed, with

$\displaystyle S = \begin{bmatrix} 1 & 1\\ 0 & \epsilon \end{bmatrix},\quad S^{-1}= \begin{bmatrix} 1 & -\tfrac1\epsilon\\ 0 & \tfrac1\epsilon \end{bmatrix}$

we get

$\displaystyle A_{\epsilon} = S \begin{bmatrix} -1 & 0 \\ 0 & -1+\epsilon \end{bmatrix}S^{-1}.$

The matrix exponential of ${A_{\epsilon}t}$ is

$\displaystyle \begin{array}{rcl} \exp(A_{\epsilon}t) &=& S\exp( \begin{bmatrix} -t & 0\\ 0 & -t(1-\epsilon) \end{bmatrix} )S^{-1}\\ &=& \begin{bmatrix} e^{-t} & \tfrac{e^{-(1-\epsilon)t} - e^{-t}}{\epsilon}\\ 0 & e^{-(1-\epsilon)t} \end{bmatrix}. \end{array}$

Hence, the solution of ${y' = A_{\epsilon}y}$, ${y(0) = y_{0}}$ is

$\displaystyle y(t) = \begin{bmatrix} e^{-t}y^{0}_{1} + \tfrac{e^{-(1-\epsilon)t} - e^{-t}}{\epsilon}y^{0}_{2}\\ e^{-(1-\epsilon)t}y^{0}_{2} \end{bmatrix}.$

How is this related to the solution of ${y'=Ay}$? How far is it away? Of course, the lower right entry of ${\exp(A_{\epsilon}t)}$ converges to ${e^{-t}}$ for ${\epsilon \rightarrow 0}$, but what about the upper right entry? Note that the entry

$\displaystyle \tfrac{e^{-(1-\epsilon)t} - e^{-t}}{\epsilon}$

is nothing else than the (negative) difference quotient for the derivative of the function ${f(a) = e^{-at}}$ at ${a=1}$. Hence

$\displaystyle \tfrac{e^{-(1-\epsilon)t} - e^{-t}}{\epsilon} \stackrel{\epsilon\rightarrow 0}{\longrightarrow} -f'(1) = te^{-t}$

and we get

$\displaystyle \exp(A_{\epsilon}t) \stackrel{\epsilon\rightarrow 0}{\longrightarrow} \begin{bmatrix} e^{-t} & te^{-t}\\ 0 & e^{-t} \end{bmatrix} = \exp(At)$

as expected. It turns out that a fairly big $\epsilon$ is already enough to get a quite good approximation and even the correct asymptotics: The blue curve is the first component of the exact solution (initialized with the second standard basis vector), the red one corresponds to $\epsilon = 0.1$, and the yellow one (pretty close to the blue one) is for $\epsilon = 0.01$.
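A quick numerical check of this convergence (my own sketch, not part of the original post; it uses SciPy's expm):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [ 0.0, -1.0]])                 # not diagonalizable
t = 2.0
exact = expm(A * t)                          # equals [[e^-t, t e^-t], [0, e^-t]]

for eps in (0.1, 0.01, 0.001):
    A_eps = np.array([[-1.0, 1.0],
                      [ 0.0, -1.0 + eps]])   # diagonalizable perturbation
    err = np.max(np.abs(expm(A_eps * t) - exact))
    print(eps, err)                          # error shrinks roughly like eps
```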
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 59, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9354787468910217, "perplexity": 180.30906425503275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887046.62/warc/CC-MAIN-20200705055259-20200705085259-00112.warc.gz"}
https://www.mankier.com/1/texsis
# texsis man page

TeXsis — TeX macros for Physicists

## Synopsis

`texsis [ filename ]`

## Description

TeXsis is a collection of TeX macros for typesetting physics documents such as papers and preprints, conference proceedings, books, theses, referee reports, letters, and memos. TeXsis macros provide automatic numbering of equations, automatic numbering and formatting of references, double column formatting, macros for making tables and figures, with or without captions, including tables with horizontal and vertical rules. TeXsis supports a wide variety of type sizes and a number of specialized document formats, and it even includes macros for making form letters for job applications or letters of recommendation.

TeXsis is an extension of "plain" TeX, so anything you know how to do in plain TeX you can do in TeXsis. TeXsis macro instructions are simply abbreviations for often used combinations of control sequences used to typeset physics documents. For more information about plain TeX see the man pages for tex(1), and/or The TeXbook, by D.E. Knuth.

TeXsis is stored as a pre-loaded format so that it loads quickly (see the man pages for initex(1), and/or "preloaded formats" in The TeXbook). To run TeXsis simply give the command texsis in place of the tex command, i.e. texsis [ filename ] where filename.tex is the name of a file containing TeX and/or TeXsis \controlsequences. TeXsis is initially in plain TeX mode, i.e. 10pt type and singlespaced, but the control sequence \texsis selects 12pt type, doublespacing, and enables other useful features. Alternatively, \paper turns on these features and sets things up to typeset a paper, \thesis does the same for typesetting a thesis, \letter is used to produce a letter using macros similar to those listed in the back of The TeXbook, \memo gives a setup for producing memoranda, and so on.

A manual which describes all of the TeXsis macro instructions is available. It is written in TeXsis, so it serves as its own example of how to write a document with TeXsis. The source code is also heavily commented, so it is possible to extract useful macros from the source code and modify them to suit your own purposes.

Provisions are made for local customization of TeXsis. In particular, the file TXSmods.tex, if it exists, is read from the current directory or from the path TEXINPUTS whenever TeXsis is started. You can therefore put your own custom macros for a given project in a directory and they will automatically be loaded when TeXsis is run from that directory.

## Installation

There is an appendix to the printed manual containing detailed installation instructions, but they are also provided in a form which can be processed by plain TeX, in the file Install.tex.

## Diagnostics

TeXsis informational messages are written to the terminal and the log file beginning with `% '. Warning and error messages begin with `> '.

## Files

The source files for TeXsis and the TeXsis manual are usually installed in the same place the rest of TeX is kept. Although this may vary from installation to installation, it will generally include a root directory named texmf. Common examples are /usr/share/texmf/, /usr/lib/teTeX/texmf, or /usr/local/lib/texmf. Filenames here are relative to this texmf root directory.

web2c/texsis.fmt
tex/texsis/TXS*.tex TeXsis source code.
tex/texsis/*.txs "Style" files which can be read in at run time for special document formats.
doc/texsis/TXS*.doc Source for the printed TeXsis manual (written in TeXsis).
tex/texsis/TXSsite.tex Local site customization instructions (this is read only once, when the format file is created).
tex/texsis/TXSpatch.tex Run time patch file (like a system TeXsis.rc file, it is read every time TeXsis is run).
TXSmods.tex Run time init file (this is read every time TeXsis is run from the current directory, or from the search path in TEXINPUTS).

## Restrictions

Please note that TeXsis is designed to be completely compatible with plain TeX. As a result it cannot be compatible with LaTeX. Having the full manual written in TeXsis can cause a problem if you don't have a version of TeXsis already running. To get around this you can run Manual.tex through plain TeX and it will load the TeXsis files before processing the manual. This takes longer, but not by much.

## Bugs

Please report bugs (or suggestions for improvements) to [email protected]. Patches to correct small problems or make small improvements are available in the file TXSpatch.tex (If that file doesn't exist then there are no current patches.)

## See Also

initex(1), tex(1), virtex(1)

Donald E. Knuth, The TeXbook; Michael Doob, A Gentle Introduction to TeX.

## Authors

Eric Myers <[email protected]> Department of Physics University of Michigan Ann Arbor, Michigan USA

and

Frank E. Paige <[email protected]> Physics Department Brookhaven National Laboratory Upton, New York 11973 USA

## Version

Revision Number: 2.18/beta3 Release Date: 16 May 2000
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9670374393463135, "perplexity": 4796.015632242637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719033.33/warc/CC-MAIN-20161020183839-00564-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.math.uni-potsdam.de/professuren/graphentheorie/team/dr-siegfried-beckus/
# Dr. rer. nat. Siegfried Beckus

Contact
Room: 2.09.2.15
Phone: +49 331 977 2748

## Research interests

• spectral theory of Schrödinger operators on graphs with aperiodic ordered potentials
• dynamical systems
• Delone sets
• graph limits and graphings
• aperiodic tilings
• operator algebras
• fields of C*-algebras (especially C*-algebras induced by groupoids)

## PhD thesis

Spectral approximation of aperiodic Schrödinger operators
Friedrich-Schiller Universität Jena, October 2016

## Diploma thesis

Generalized Bloch Theory for Quasicrystals
Friedrich-Schiller Universität Jena, February 2012

## Publications

2019 | Hölder Continuity of the Spectra for Aperiodic Hamiltonians | Siegfried Beckus, Jean Bellissard, Horia Cornean
Journal: Annales Henri Poincaré
Link to publication, Link to preprint

### Hölder Continuity of the Spectra for Aperiodic Hamiltonians

#### Authors: Siegfried Beckus, Jean Bellissard, Horia Cornean (2019)

We study the spectral location of strongly pattern equivariant Hamiltonians arising through configurations on a colored lattice. Roughly speaking, two configurations are "close to each other" if, up to a translation, they "almost coincide" on a large fixed ball. The larger this ball is, the more similar they are, and this induces a metric on the space of the corresponding dynamical systems. Our main result states that the map which sends a given configuration to the spectrum of its associated Hamiltonian is Hölder (even Lipschitz) continuous in the usual Hausdorff metric. Specifically, the spectral distance of two Hamiltonians is estimated by the distance of the corresponding dynamical systems.

Journal: Annales Henri Poincaré

2019 | Corrigendum to “Spectral continuity for aperiodic quantum systems I. General theory” | Siegfried Beckus, Jean Bellissard, Giuseppe De Nittis
Journal: Journal of Functional Analysis
Volume: 277
Pages: 3351-3353
Link to publication, Link to preprint

### Corrigendum to “Spectral continuity for aperiodic quantum systems I. General theory”

#### Authors: Siegfried Beckus, Jean Bellissard, Giuseppe De Nittis (2019)

A correct statement of Theorem 4 in [1] is provided. The change does not affect the main results.

Journal: Journal of Functional Analysis
Volume: 277
Pages: 3351-3353

2018 | Spectral continuity for aperiodic quantum systems I. General theory | Siegfried Beckus, Jean Bellissard, Giuseppe De Nittis
Journal: Journal of Functional Analysis
Volume: 275
Pages: 2917-2977
Link to publication, Link to preprint

### Spectral continuity for aperiodic quantum systems I. General theory

#### Authors: Siegfried Beckus, Jean Bellissard, Giuseppe De Nittis (2018)

How does the spectrum of a Schrödinger operator vary if the corresponding geometry and dynamics change? Is it possible to define approximations of the spectrum of such operators by defining approximations of the underlying structures? In this work a positive answer is provided using the rather general setting of groupoid C*-algebras. A characterization of the convergence of the spectra by the convergence of the underlying structures is proved. In order to do so, the concept of continuous field of groupoids is slightly extended by adding continuous fields of cocycles. With this at hand, magnetic Schrödinger operators on dynamical systems or Delone systems fall into this unified setting.
Various approximations used in computational physics, like the periodic or the finite cluster approximations, are expressed through the tautological groupoid, which provides a universal model for fields of groupoids. The use of the Hausdorff topology turns out to be fundamental in understanding why and how these approximations work.

Journal: Journal of Functional Analysis, Volume 275, Pages 2917-2977

### Delone dynamical systems and spectral convergence

#### Authors: Siegfried Beckus, Felix Pogorzelski (2018)

In the realm of Delone sets in locally compact, second countable Hausdorff groups, we develop a dynamical systems approach in order to study the continuity behavior of measured quantities arising from point sets. A special focus is both on the autocorrelation, as well as on the density of states for random bounded operators. It is shown that for uniquely ergodic limit systems, the latter measures behave continuously with respect to the Chabauty–Fell convergence of hulls. In the special situation of Euclidean spaces, our results complement recent developments in describing spectra as topological limits: we show that the measured quantities under consideration can be approximated via periodic analogs.

Journal: Ergodic Theory and Dynamical Systems

### Shnol-type theorem for the Agmon ground state

#### Authors: Siegfried Beckus, Yehuda Pinchover (2018)

Let H be a Schrödinger operator defined on a noncompact Riemannian manifold Ω, and let W ∈ L^∞(Ω;ℝ). Suppose that the operator H + W is critical in Ω, and let φ be the corresponding Agmon ground state. We prove that if u is a generalized eigenfunction of H satisfying |u| ≤ φ in Ω, then the corresponding eigenvalue is in the spectrum of H. The conclusion also holds true if for some K ⋐ Ω the operator H admits a positive solution in Ω' = Ω ∖ K, and |u| ≤ ψ in Ω', where ψ is a positive solution of minimal growth in a neighborhood of infinity in Ω. Under natural assumptions, this result holds true also in the context of infinite graphs, and Dirichlet forms.

Journal: Journal of Spectral Theory

### Note on spectra of non-selfadjoint operators over dynamical systems

#### Authors: Siegfried Beckus, Daniel Lenz, Marko Lindner, Christian Seifert (2018)

We consider equivariant continuous families of discrete one-dimensional operators over arbitrary dynamical systems. We introduce the concept of a pseudo-ergodic element of a dynamical system. We then show that all operators associated to pseudo-ergodic elements have the same spectrum and that this spectrum agrees with their essential spectrum. As a consequence we obtain that the spectrum is constant and agrees with the essential spectrum for all elements in the dynamical system if minimality holds.
Journal: Proceedings of the Edinburgh Mathematical Society, Volume 61, Pages 371-386

### On the spectrum of operator families on discrete groups over minimal dynamical systems

#### Authors: Siegfried Beckus, Daniel Lenz, Marko Lindner, Christian Seifert (2017)

It is well known that, given an equivariant and continuous (in a suitable sense) family of selfadjoint operators in a Hilbert space over a minimal dynamical system, the spectrum of all operators from that family coincides. As shown recently similar results also hold for suitable families of non-selfadjoint operators in ℓ^p(ℤ). Here, we generalize this to a large class of bounded linear operator families on Banach-space valued ℓ^p-spaces over countable discrete groups. We also provide equality of the pseudospectra for operators in such a family. A main tool for our analysis are techniques from limit operator theory.

Journal: Mathematische Zeitschrift, Volume 287, Pages 993-1007

### Continuity of the Spectrum of a Field of Self-Adjoint Operators

#### Authors: Siegfried Beckus, Jean Bellissard (2016)

Given a family of self-adjoint operators (A_t)_{t ∈ T} indexed by a parameter t in some topological space T, necessary and sufficient conditions are given for the spectrum σ(A_t) to be Vietoris continuous with respect to t. Equivalently the boundaries and the gap edges are continuous in t. If (T,d) is a complete metric space with metric d, these conditions are extended to guarantee Hölder continuity of the spectral boundaries and of the spectral gap edges. As a corollary, an upper bound is provided for the size of closing gaps.

Journal: Annales Henri Poincaré, Volume 17, Pages 3425-3442

### Spectrum of Lebesgue Measure Zero for Jacobi Matrices of Quasicrystals

#### Authors: Siegfried Beckus, Felix Pogorzelski (2013)

We study one-dimensional random Jacobi operators corresponding to strictly ergodic dynamical systems. We characterize the spectrum of these operators via non-uniformity of the transfer matrices and vanishing of the Lyapunov exponent. For aperiodic, minimal subshifts satisfying the so-called Boshernitzan condition this gives that the spectrum is supported on a Cantor set with Lebesgue measure zero. This generalizes earlier results for Schrödinger operators.

Journal: Mathematical Physics, Analysis and Geometry, Volume 16, Pages 289-308

## CV

University of Potsdam since 10/2018 Post-Doc with Prof. Dr. Matthias Keller Israel Institute of Technology (Technion), Haifa 10/2016-09/2018 Postdoctoral Fellowship with Prof. Dr. Yehuda Pinchover and Prof. Dr. 
Ram Band Georgia Institute of Technology, Atlanta, USA 02-04/2014 Visiting Student/Research Collaborator Friedrich Schiller University Jena 03/2012-09/2016 PhD Student ## Education PhD 10/2016, Friedrich Schiller University Jena Diploma 02/2012, Friedrich Schiller University Jena ## Grants and Projects DFG Project "Periodic approximations of Schrödinger operators associated with quasicrystals" since 12/2018 (PostDoc position for 2 year and travel money) Scholarship of the DAAD Program to participate at congresses, International Workshop on Operator Theory and its Applications, Lisbon, Portugal Scholarship for "Research in Pairs" at the Mathematisches Forschungsinstitut Oberwolfach, Germany, 01/2018 Postdoctoral Fellowship Israel Institute of Technology (Technion), Haifa, Israel, 10/2016 - 09/2018 Scholarship of the DAAD Program to participate at congresses, XVIII International Congress on Mathematical Physics, Santiago de Chile, Chile Funding for the PhD Seminar "Förderung von interdisziplinären Arbeitsgruppen und Nachwuchsnetzwerken" funded by the Graduierten-Akademie in Jena ## Selected Talks at international conferences • 07/2019 International Workshop on Operator Theory and its Applications (IWOTA 2019), Instituto Superior Técnico, Lisbon (Portugal): Hunting the spectra via the underlying dynamics • 05/2019 8th Miniworkshop on Operator Theoretic Aspects of Ergodic Theory, Leipzig (Germany): Hunting the spectra via the underlying dynamics • 01/2018 Hardy-type inequalities and elliptic PDEs, Midreshet Sde Boker (Israel), Poster: Spectral Approximation of Schrödinger Operators • 10/2017 Workshop "Spectral Structures and Topological Methods in Mathematical Quasicrystals", MFO Oberwolfach (Germany): Spectral stability of Schrödinger operators in the Hausdorff metric • 07/2017 Analysis and Geometry on Graphs and Manifolds, Universität-Potsdam (Germany): Shnol type Theorem for the Agmon ground state • 05/2017 Israel Mathematical Union 2017, Acre (Israel): The space of Delone dynamical systems and related objects • 01/2017 Workshop on Mathematical Physics, Weizmann Institute of Science, Rehovot (Israel), Poster: Spectral Approximation of Schrödinger Operators • 06/2016 Thematic School "Transversal Aspects of Tilings", Oleron (France): Continuity of the spectra associated with Schrödinger operators • 09/2015 CMO-BIRS, Workshop on "Spectral properties of quasicrystals via analysis, dynamics and geometric measure theory", Oaxaca (Mexiko): Spectral approximation of Schrödinger operators: continuity of the spectrum • 07/2015 Young researcher symposium, Pontificia Universidad Catolica de Chile, Santiago de Chile (Chile): Spectral study of Schrödinger operators with aperiodic ordered potential in one-dimensional systems • 06/2015 Workshop on "Time-frequency analysis and aperiodic order", Norwegian University of Science and Technology, Trondheim (Norway): An approximation theorem for the spectrum of Schrödinger operators related to quasicrystals ## Selected Talks in seminars and colloquia • 05/2019 Justus-Liebig-Universität Gießen (Germany): Hunting the spectra via the underlying dynamics • 07/2018 Technische Universität München (Germany): When do the spectra of self-adjoint operators converge? • 10/2017 Pontificia Universidad Catolica de Chile, Santiago (Chile): Shnol type Theorem for the Agmon ground state • 10/2017 Hebrew University of Jerusalem (Israel): When do the spectra of self-adjoint operators converge? 
• 08/2017 University of Oslo (Norway): Spectral approximation via an approach from C*-algebras • 07/2017 Friedrich-Alexander Universität Erlangen-Nürnberg (Germany): The space of Delone dynamical systems and its application • 07/2017 RWTH Aachen (Germany): Shnol type Theorem for the Agmon ground state • 09/2016 Aalborg University (Danemark): Continuous variation of the spectra: A characterization and a tool • 07/2016 Universität Bielefeld (Germany): Hölder-continuous behavior of the spectra associated with self-adjoint operators • 05/2015 Technische Universität Chemnitz (Germany): Schrödinger operators on quasicrystals • 04/2015 Israel Institute of Technology (Technion), Haifa (Israel): The role of Gähler-Anderson-Putnam graphs in the view of Schrödinger operators • 04/2014 University of Alabama at Birmingham (USA): Gähler-Anderson-Putnam graphs of 1-dimensional Delone sets of finite local complexity • 01/2014 Technische Universität Hamburg-Harburg (Germany): Wannier transformation for Schrödinger operators with aperiodic potential ## (Co-)Supervision Franziska Sieron (Master thesis), The density of periodic configurations in strongly irreducible subshifts of finite type (joint with Prof. Dr. Daniel Lenz), 2016 Daniel Sell (Master thesis), Topological groupoids and Matuis spatial realization theorem (joint with Prof. Dr. Daniel Lenz), 2015 Franziska Sieron (Bachelor thesis), The balanced property of primitive substitutions (joint with Prof. Dr. Daniel Lenz), 2014 ## (Co-)Organized Scientific meetings Euler-Lecture, Universität Potsdam, 05/2019 PhD seminar, Friedrich-Schiller-Universität Jena, 03/2013 – 09/2016 Colloquium: "Job opportunities for mathematicians", Friedrich-Schiller-Universität Jena, 2013 – 2015 PhD symposium at the TU Chemnitz, within the Fall school Dirichlet forms, operator theory and mathematical physics, 02/2013
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8024067282676697, "perplexity": 3332.2777075973504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347425148.64/warc/CC-MAIN-20200602130925-20200602160925-00077.warc.gz"}
http://math.stackexchange.com/questions/117510/security-analysis-of-a-matrix-multiplication-protocol/117700
# Security analysis of a matrix multiplication protocol Suppose Alice would like to obtain the product of two $m\times m$ matrices i.e. $A$ and $B.$ Alice has $A,$ whereas Bob has $B.$ Since Alice does not want to reveal $A$ to Bob, she chooses a $m\times m$ random invertible matrix $R.$ She sends $RA$ to Bob over a secure channel. Bob obtains $RA,$ and calculates $RAB,$ and sends it to Alice over a secure channel. Alice obtains $AB$ by inverting $R$ i.e. $R^{-1}RAB$. $R$ is only utilized once. Any ideas on how to proceed with the security analysis of the above protocol? Specifically is H(A|RA) = H(A) ? - oops sorry about that. – user996522 Mar 7 '12 at 13:04 Relevant: Check out this paper cs.bgu.ac.il/~kobbi/papers/oge_tcc_camera2.pdf and the forward/backward citations. There are existing works on secure multiparty computation, and secure function evaluations. – user2468 Mar 7 '12 at 13:31 What is this supposed to achieve compared to Bob simply sending $B$ to Alice over the secure channel? – Henning Makholm Mar 8 '12 at 0:13 Its a primitive that i need as i have an idea for securely solving a linear equation which depends on the security of this. – user996522 Mar 8 '12 at 0:26 Crossposted to crypto.SE as crypto.stackexchange.com/questions/2023/… – Ilmari Karonen Mar 8 '12 at 9:05 If A is invertible (over a fixed finite field), then this protocol is information-theoretically secure. To see this, first note that, for any $A$, the ciphertext $RA$ is uniformly distributed. Furthermore, the value of $RA$ is independent of $A$. Therefore, for any prior $P$ over messages, we have $\Pr[A|RA] = \Pr[A \wedge RA] / \Pr[RA] = \Pr[A]\Pr[RA] / \Pr[RA] = \Pr[A]$. @user996522: The key is that R is chosen uniformly at random. By analogy, consider picking a point in $[0,1)$. Let A be an arbitrary point in the interval, and let let R by a uniform random value in $[0,1)$. Then $A+R$ is uniformly random over the interval $[A, A+1)$, and so the fractional part is uniform over $[0,1)$. – Jeremy Hurwitz Jul 3 '12 at 4:53
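A quick way to sanity-check the unblinding step and the uniformity claim in the accepted answer is to run the protocol over a small prime field. The sketch below is illustrative only: the helper names, the modulus and the matrix size are arbitrary choices of mine, not part of the original question (Python 3.8+ for the modular inverse via `pow`).

```python
# Sketch of the blinding protocol over GF(p); all names and sizes are ad hoc.
import random

P = 101  # small prime modulus for the demo

def mat_mul(X, Y, p=P):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def mat_inv(X, p=P):
    """Invert X over GF(p) by Gauss-Jordan elimination."""
    n = len(X)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(X)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] % p), None)
        if pivot is None:
            raise ValueError("matrix is singular mod p")
        A[col], A[pivot] = A[pivot], A[col]
        inv = pow(A[col][col], -1, p)          # modular inverse of the pivot
        A[col] = [x * inv % p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % p for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def random_invertible(n, p=P):
    """Rejection-sample a uniform element of GL(n, p)."""
    while True:
        R = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
        try:
            mat_inv(R, p)
            return R
        except ValueError:
            continue

n = 3
A = [[random.randrange(P) for _ in range(n)] for _ in range(n)]  # Alice's secret
B = [[random.randrange(P) for _ in range(n)] for _ in range(n)]  # Bob's secret

R = random_invertible(n)          # Alice's one-time blinding matrix
RA = mat_mul(R, A)                # sent to Bob; uniform over GL(n, p) if A is invertible
RAB = mat_mul(RA, B)              # Bob's reply
AB = mat_mul(mat_inv(R), RAB)     # Alice unblinds: R^{-1} (R A B) = A B

assert AB == mat_mul(A, B)
```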
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9354709982872009, "perplexity": 301.05631334390836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160918.28/warc/CC-MAIN-20160205193920-00090-ip-10-236-182-209.ec2.internal.warc.gz"}
https://planetmath.org/juxtapositionofautomata
# juxtaposition of automata

Let $A=(S_{1},\Sigma_{1},\delta_{1},I_{1},F_{1})$ and $B=(S_{2},\Sigma_{2},\delta_{2},I_{2},F_{2})$ be two automata. We define the juxtaposition of $A$ and $B$, written $AB$, as the sextuple $(S,\Sigma,\delta,I,F,\epsilon)$, as follows:

1. $S:=S_{1}\,\dot{\cup}\,S_{2}$, where $\dot{\cup}$ denotes disjoint union,
2. $\Sigma:=(\Sigma_{1}\cup\Sigma_{2})\,\dot{\cup}\,\{\epsilon\}$,
3. $\delta:S\times\Sigma\to P(S)$ given by
   • $\delta(s,\epsilon):=I_{2}$ if $s\in F_{1}$, and $\delta(s,\epsilon):=\{s\}$ otherwise,
   • $\delta|(S_{1}\times\Sigma_{1}):=\delta_{1}$,
   • $\delta|(S_{2}\times\Sigma_{2}):=\delta_{2}$, and
   • $\delta(s,\alpha):=\varnothing$ otherwise (where $\alpha\neq\epsilon$).
4. $I:=I_{1}$,
5. $F:=F_{2}$.

Because $S_{1}$ and $S_{2}$ are considered as disjoint subsets of $S$, $I\cap F=\varnothing$. Also, from the definition above, we see that $AB$ is an automaton with $\epsilon$-transitions (http://planetmath.org/AutomatonWithEpsilonTransitions).

The way $AB$ works is as follows: a word $c=ab$, where $a\in\Sigma_{1}^{*}$ and $b\in\Sigma_{2}^{*}$, is fed into $AB$. $AB$ first reads $a$ as if it were read by $A$, via transition function $\delta_{1}$. If $a$ is accepted by $A$, then one of its accepting states will be used as the initial state for $B$ when it reads $b$. The word $c$ is accepted by $AB$ when $b$ is accepted by $B$.

Visually, the state diagram $G_{A_{1}A_{2}}$ of $A_{1}A_{2}$ combines the state diagram $G_{A_{1}}$ of $A_{1}$ with the state diagram $G_{A_{2}}$ of $A_{2}$ by adding an edge from each final node of $A_{1}$ to each of the start nodes of $A_{2}$ with label $\epsilon$ (the $\epsilon$-transition).

###### Proposition 1.

$L(AB)=L(A)L(B)$

###### Proof.

Suppose $c=ab$ is a word such that $a\in\Sigma_{1}^{*}$ and $b\in\Sigma_{2}^{*}$. If $c\in L(AB)$, then $\delta(q,a\epsilon b)\cap F\neq\varnothing$ for some $q\in I=I_{1}$. Since $\delta(q,a\epsilon b)\cap F_{2}=\delta(q,a\epsilon b)\cap F\neq\varnothing$ and $b\in\Sigma_{2}^{*}$, we have, by the definition of $\delta$, that $\delta(q,a\epsilon b)=\delta(\delta(q,a\epsilon),b)=\delta_{2}(\delta(q,a\epsilon),b)$, which shows that $b\in L(B)$ and $\delta(q,a\epsilon)\cap I_{2}\neq\varnothing$. But $\delta(q,a\epsilon)=\delta(\delta(q,a),\epsilon)$, by the definition of $\delta$ again, so we also have $\delta(q,a)\cap F_{1}\neq\varnothing$, which implies that $\delta(q,a)=\delta_{1}(q,a)$. As a result, $a\in L(A)$.

Conversely, if $a\in L(A)$ and $b\in L(B)$, then for some $q\in I=I_{1}$, $\delta(q,a)=\delta_{1}(q,a)$, which has non-empty intersection with $F_{1}$. This means that $\delta(q,a\epsilon)=\delta(\delta(q,a),\epsilon)=I_{2}$, and finally $\delta(q,a\epsilon b)=\delta(\delta(q,a\epsilon),b)=\delta(I_{2},b)$, which has non-empty intersection with $F_{2}=F$ by assumption. This shows that $a\epsilon b\in L((AB)_{\epsilon})$, or $ab\in L(AB)$. ∎
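The construction is easy to prototype. The following sketch (the representation and helper names are my own, not from the entry) builds the juxtaposed transition function exactly as in items 1–5 above and accepts words of the form $a\epsilon b$ by plain subset simulation.

```python
# Automata are dicts: states, alphabet, delta (dict keyed by (state, symbol)),
# initial, final.  EPS plays the role of the extra symbol epsilon.
EPS = "eps"

def juxtapose(A, B):
    """Juxtaposition AB: eps-edges from final states of A to initial states of B
    (state sets are assumed disjoint, as in the definition)."""
    delta = {}
    delta.update(A["delta"])
    delta.update(B["delta"])
    for s in A["states"] | B["states"]:
        delta[(s, EPS)] = set(B["initial"]) if s in A["final"] else {s}
    return {"states": A["states"] | B["states"],
            "alphabet": A["alphabet"] | B["alphabet"] | {EPS},
            "delta": delta,
            "initial": set(A["initial"]),
            "final": set(B["final"])}

def run(M, word):
    """Subset simulation; undefined transitions give the empty set."""
    current = set(M["initial"])
    for sym in word:
        current = set().union(*(M["delta"].get((s, sym), set()) for s in current))
    return bool(current & M["final"])

# A accepts any number of 'a's, B accepts a single 'b'; AB accepts "a...a eps b".
A = {"states": {"q0"}, "alphabet": {"a"},
     "delta": {("q0", "a"): {"q0"}}, "initial": {"q0"}, "final": {"q0"}}
B = {"states": {"p0", "p1"}, "alphabet": {"b"},
     "delta": {("p0", "b"): {"p1"}}, "initial": {"p0"}, "final": {"p1"}}

AB = juxtapose(A, B)
print(run(AB, ["a", "a", EPS, "b"]))   # True
print(run(AB, ["a", EPS]))             # False: must end in a final state of B
```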
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 80, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9901582598686218, "perplexity": 209.98621327038796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00737.warc.gz"}
http://mathoverflow.net/questions/70397/turing-machines-that-read-the-entire-program-tape?sort=oldest
# Turing machines that read the entire program tape Consider a two tape universal Turing machine with a one-way-infinite, read-only program tape with a head that can only move right, as well as a work tape. The work tape is initialized to all zeros and the program tape is initialized randomly, with each cell being filled from a uniform distribution over the possible symbols. What are the possibilities for the probability that the head on the program tape will move infinitely far to the right in the limit? Obviously, this will depend on the specifics of the Turing machine, but it must always be in the range $[0,1-\Omega)$, where $\Omega$ is Chaitin's constant for the TM. Since this TM is universal, $\Omega$ must be in the range $(0,1)$, so the probability must always be in $[0,1)$. Is this entire range, or at least a set dense in this range and including zero, possible? - Related question: mathoverflow.net/questions/64773/… –  Joel David Hamkins Jul 15 '11 at 18:52 As far as I can see, if you consider a single TM, then you get only one specific probability, not a dense set, whereas if you let the TM vary then $\Omega$ will vary also, and the set of probabilities will contain all rational numbers in $[0,1]$ (and some other numbers too). If you fix the number of symbols but let the TM vary, it's not so clear that you'll get all the rationals, but you'll still get a dense set. EDIT to take into account the revision of the question: Given a universal TM, you can make trivial modifications that maintain universality but change the probability $p$ of going infinitely far to the right. For example, modify your original machine $M$ to an $M'$ that works like this: If the first symbol $x$ on the program tape is 0, then halt immediately; otherwise, move one step to the right and work like $M$ on the program minus the initial symbol $x$ (and, just to guarantee universality, if the computation halts, go back to $x$, erase it, and move $M$'s answer one step to the left so that it's located where answers should be). That modification decreases the probability $p$. You can increase $p$ by having an initial 0 in the program trigger a race to the right by $M'$ --- it just keeps marching to the right regardless of what symbols it sees. You can achieve some control over the amount by which $p$ increases or decreases by having the modification $M'$ begin by checking more than just one symbol at the beginning of the program. As far as I can tell, such modifications, carried out with enough care (which I don't have time for just now) should give you a dense set of $p$'s. EDIT to add some details: Given a universal TM $M$ with tape alphabet $A$, and given a subinterval of $[0,1]$, choose an integer $n$ so large that your given interval includes one of the form $[k/|A|^n,(k+1)/|A|^n]$. Let $S$ be a set of $k$ words of length $n$ over $A$, and let $w$ be another such word that is not in $S$. Modify $M$ to $M'$ that works as follows. If the first $n$ symbols on the tape are a word from $S$, then march to the right forever, ignoring everything else. If they are the word $w$, then simulate $M$ on the remainder of the tape (the part after $w$), moving any final answer into the right location, as in my previous edit. Finally, if the word consisting of the tape's first $n$ letters is neither in $w$ nor in $S$, then halt immediately. 
Then the probability that $M'$ moves infinitely to the right will be at least $k/|A|^n$ (the probability that the initial $n$-word on the tape is in $S$) and at most $(k+1)/|A|^n$ (the probability that this $n$-word is either $w$ or in $S$) and therefore within the originally given interval. - My question wasn't very well stated; I will revise it. –  Declan Freeman Jul 15 '11 at 4:04 Andreas considered the interpretation of your question where we fix the program and then vary the input. Let me now consider the dual version of the question, where we fix the infinite random input and vary the program. Surprisingly, there is something interesting to say. The concept of asymptotic density provides a natural way to measure the size or density of a collection of Turing machine programs. Given a set $P$ of Turing machine programs, one considers the proportion of all $n$-state programs that are in $P$, as $n$ goes to infinity. This limit, when it exists, is called the asymptotic density or probability of the set $P$, and a set with asymptotic density $1$ will contain more than 99% of all $n$-state programs, when $n$ is large enough, as close to 100% as desired. What I claim is that for your computational model, almost every program leads to a finite computation. Theorem. For any fixed infinite input (on the read-only tape), the set of Turing machine programs that complete their computation in finitely many steps has asymptotic density $1$. In other words, for fixed input, almost every program stops in finite time. The proof follows from the main result of my article: J. D. Hamkins and A. Miasnikov, The halting problem is decidable on a set of asymptotic probability one, Notre Dame J. Formal Logic 47, 2006. http://arxiv.org/abs/math/0504351. The argument depends on the convention in the one-way infinite tape context that computation stops should the head attempt to move off the end of the tape. The idea has also come up on a few other MO quesstions: What are the limits of non-halting? and Solving NP problems in (usually) polynomial time? in which it is explained that the theme of the result is the black-hole phenomenon in undecidability problems, the phenomenon by which the difficulty of an undecidable or infeasible problem is confined to a very small region, outside of which it is easy. The main result of our paper is to show that the classical halting problem admits a black hole. In other words, there is a computable procedure to correctly decide almost every instance of the classical halting problem, with asymptotic probability one. The proof method is to observe that on fixed infinite input, a random Turing machine operates something like a random walk, up to the point where it begins to repeat states. And because of Polya's recurrence theorem, it follows that with probability as close as you like to one, the work tape head will return to the starting position and fall off the tape before repeating a state. My point now is that the same observation applies to your problem. For any particular fixed infinite input, the work tape head will fall off for almost all programs. Thus, almost every program sees only finitely much of the input before stopping. - Theorem 3 in the linked paper is exactly the claim that for any fixed input the asymptotic probability one behavior of a Turing machine (in this one-way infinite tape model) is that the head falls off the tape. –  Joel David Hamkins Jul 15 '11 at 18:36
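The random-walk heuristic behind the second answer can be illustrated with a toy Monte Carlo: a symmetric walk on the non-negative integers, started at 0, "falls off the tape" when it steps below 0, and the fraction that does so within a generous step budget is close to 1. This is only a cartoon of the Polya-recurrence argument, not a simulation of the actual Turing-machine model or of the asymptotic-density theorem.

```python
# Cartoon of the "head falls off the one-way tape" heuristic.
import random

def falls_off(max_steps=10_000):
    pos = 0
    for _ in range(max_steps):
        pos += random.choice((-1, 1))
        if pos < 0:          # stepped off the left end of the tape
            return True
    return False

trials = 2_000
print(sum(falls_off() for _ in range(trials)) / trials)  # close to 1
```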
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8728349804878235, "perplexity": 235.37465747085383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163056120/warc/CC-MAIN-20131204131736-00010-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/the-circle-has-returned-help-me-please.561726/
# THE CIRCLE HAS RETURNED (Help me please)

1. Dec 20, 2011

### Plutonium88

1. The problem statement, all variables and given/known data

PICTURE: http://imageshack.us/photo/my-images/21/circls.png/

A force of gravity acts upon a ball on top of a circle. The ball rolls down the curve of the circle until a CERTAIN POINT. At this CERTAIN POINT the ball detaches from the circle and travels until it hits the ground. What is the distance between the top of the CIRCLE and the point at which it detaches...

GIVEN INFORMATION: Mo = mass of ball, D = diameter of circle - There is NO FRICTION - Energy is conserved (only conservative forces acting -- gravity)

2. Relevant equations

Centripetal force = inward force. Conservation of energy.

This problem is similar to BANKED CURVE!!!!!!!! (this is the only way that i know of an object can be 'forced' towards the center, which is due to the horizontal component of normal force)

3. The attempt at a solution

First i started by solving for forces on a banked curve...

Fnety = FNSinθ - mg = 0 Therefore FN = mg/Sinθ

Fnetx = FNCosθ Fnetx = MgTanθ

Now i used the horizontal component (FNET X) and made it equal to FC because FC is the horizontal inward force.

Fc = Fnetx (Simplify) Vo=√[DgTanθ/2]

Now i'm using the conservation of energy and comparing its initial moment at the top where it has a velocity of 0. And comparing it to the "CERTAIN POINT" at which it detaches.

ET1 = mgD Et2 = mg(d-x) + 1/2mVo^2 Et1 = Et2 (SIMPLIFY) X = DTanθ/4

Now i'm left with a problem... The angle... I tried to solve for it by creating a right-angled triangle with a radius line midway through the circle, and a radius line going toward the object... but yea no luck.. Any ideas anyone? My teacher for some reason seems to tell me i don't need an angle... Which leads me to think that it's not on a banked curve and i'm supposed to have an angle but it must... C.c

SOME ONE HELP ME!!! i've had this problem for WEEKS but i'll never give it UP!!!! I believe my solution is correct, due to the fact that the units match up. HOWEVER i don't know how to solve for theta... THE ANGLE the projectile detaches at!!!! :( -Can some one give me some hints on how i can get this angle...

2. Dec 20, 2011

### Staff: Mentor

My advice: Scrap the idea of using the banked curve solution to solve this problem. Instead, apply Newton's 2nd law to the ball. What's the condition that tells you when the ball just starts to lose contact?

3. Dec 20, 2011

### Plutonium88

I would have to say... When the ball's horizontal velocity is more than the ball's vertical velocity... The normal force on the object would also have to be zero at the point it detaches as well.. And is it impossible to find the angle cause i dont have enough info :(?

Last edited: Dec 20, 2011

4. Dec 20, 2011

### Staff: Mentor

I don't think so. That's the one. Express that mathematically. You have all the info needed to find the angle.

5. Dec 20, 2011

### Plutonium88

omg. i think i love you if the way i'm thinking is correct... Okay so when i did the Y components on the banked curve for the force. I know FN is = 0 as you confirmed for me So, Fnety = FNSinθ - FG = 0 therefore FN = Mg/Sinθ So since FN is 0 0=mg/sinθ θ=mg/sin :)?

6. Dec 20, 2011

### Plutonium88

pooh.. when i plug this in units don't match up.. I get a newton instead of meters. .. So if i do FN as components.. the only force acting on the ball is the conservative force of gravity... But how can this help me solve the angle?

7. Dec 20, 2011

### Staff: Mentor

OK. Nah. 
Consider forces in the radial direction. Apply Newton's 2nd law. (What's the acceleration in the radial direction?) 8. Dec 20, 2011 ### Plutonium88 okay so heres my next attempt then... And I'll let the banked curve idea go, as long as you promise to tell me why? or after i solved this explain how to do it using the banked curve? So i have my forces set as FN in the x direction, and FG in the y direction... I solved for Velocity at that point by FC = FN mv^2/r = 0 V=sqrt [D/2m] But if i do this, do i consider the speed at the top of the circle also which is V=sqrt[dg/2] Last edited: Dec 20, 2011 9. Dec 20, 2011 ### Plutonium88 wait. i see i just use this velocity into the energy haha one sec.. let me write this up 10. Dec 20, 2011 ### Staff: Mentor The only connection between the two is that they both involve centripetal acceleration. I don't quite understand your x and y axes. Two forces act on the ball: The normal force and the weight. Which way do they act? That's not true. Consider force components in the radial direction. (There are only two forces. What are their radial components?) 11. Dec 22, 2011 ### Plutonium88 Okay my bad i had to study for a math test.. so can you correct me plz if im wrong. So if consider the force components in the radial direction.. I have Force of gravity FG, which is in the y component and a normal force in the x component.. Now is this normal force on an angle in the radial direction? and if it is... this is the same problem as the banked curve, how am i to solve for this dang angle. http://imageshack.us/photo/my-images/14/circlesk.png/ i gotta pass out.. but ill get ur feedback tmw hopefully.. uhh and also.. Does the fact that there are two normal forces? the normal force from the circle and the normal force from the Ball itself in opposite directions? 12. Dec 22, 2011 ### Plutonium88 Also my force diagram... http://postimage.org/image/7g18q7iip/ [Broken] correct me if im wrong here too. Last edited by a moderator: May 5, 2017 13. Dec 22, 2011 ### Staff: Mentor The force of gravity is vertical, but the normal force is not generally horizontal. The normal force is in the radial direction. It's not the same problem. Once again: Apply Newton's 2nd law in the radial direction. All you care about are the forces acting on the ball. Only one normal force acts on the ball; it pushes the ball radially outward. 14. Dec 22, 2011 ### Staff: Mentor Only two forces act on the ball. Get rid of everything else. Also: Indicate the angle of the ball's position, measured from the vertical. You'll need that angle when finding the radial components of the forces. (And when you set up the equations as I suggest you'll end up solving for that angle.) Last edited by a moderator: May 5, 2017 15. Dec 23, 2011 ### Plutonium88 OKAY FINALLY CHRISTMAS BREAK. Now i can invest all my time into solving this damned problem... http://imageshack.us/photo/my-images/835/newbitmapimage3o.png/ Okay so here are my ideas... just let me know if i can't use them :O!(also your help is much appreciated through all of this. I'm sorry its taking me so long.. i just can't seem to grasp this for some reason :() okay so fnety= fncosa - mg = 0 therefore fn = mg/cosa Fnetx = fnsina fnetx= mgtana (sina/cosa = tana) fnetx = mAx mAx = mgtana Ax = Gtana Now since i know my Ay = g i thought i could make a triangle using the ax and ay vectors.. but that didn't seem to work, i'm wondering why i can't do that? 
anyways, with the diagram, i noticed there is some type of relation between the x and the forces, but i still don't quite see how to relate it? a force with a length? :(? 16. Dec 24, 2011 ### Staff: Mentor The diagram is OK except that you drew the point of contact at 90° from the vertical. Instead, imagine the point of contact at angle 'a' from the vertical. The acceleration isn't zero in the y-direction. It's sliding down the sphere, accelerating as it goes. Again, you're getting hung up by comparing it to the 'banked road' problem. One more time: Consider forces in the radial direction.
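For reference, the radial-direction route the mentor keeps pointing at settles the problem in a few lines. This is the standard textbook derivation (written here with r = D/2 and x the vertical drop below the top), not a quote from the thread:

```latex
\begin{align*}
  \text{radial direction } (N = 0 \text{ at detachment}):\quad
      & mg\cos a = \frac{mv^{2}}{r} \\
  \text{energy conservation from the top}:\quad
      & mgr(1-\cos a) = \tfrac{1}{2}mv^{2} \\
  \Rightarrow\quad & \cos a = 2(1-\cos a)
      \;\Rightarrow\; \cos a = \tfrac{2}{3} \\
  \text{drop below the top}:\quad
      & x = r(1-\cos a) = \frac{r}{3} = \frac{D}{6}
\end{align*}
```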
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9276984930038452, "perplexity": 1221.7638912097518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887253.36/warc/CC-MAIN-20180118111417-20180118131417-00259.warc.gz"}
https://www.bodhiai.in/blogs/angles-their-measumentsin-degree-radians-area-of-sector-156/
BodhiAI Nov. 11, 2019

#### Angles & their Measurements (in Degrees, Radians), Area of Sector

Angle and Its Measurement

An angle is generated by rotating a line segment about any of its end points from some initial position to some terminal position. The measure of an angle is the amount of rotation.

Important: If the rotation is in the anticlockwise sense, the angle measure is positive, and if the rotation is in the clockwise sense, the angle measure is negative.

Types of Systems of Measuring Angles

A. Sexagesimal System/English Measure or British System In the sexagesimal system of measurement, the units of measurement are degrees, minutes and seconds. Degree- A right angle is divided into 90 equal parts and each part is called a degree. One degree is denoted by 1°. Minute- A degree is divided into 60 equal parts and each part is called a minute. One minute is denoted by 1'. Second- A minute is divided into 60 equal parts and each part is called a second. One second is denoted by 1".

Important: Sexagesimal system of angles 1 right angle = 90 degrees (90°) 1 degree = 60 minutes (60') 1 minute = 60 seconds (60")

B. Centesimal System or French System In the centesimal system of measurement, the units of measurement are grades, minutes and seconds. Grade- A right angle is divided into 100 equal parts and each part is called a grade. One grade is denoted by 1g. Minute- A grade is divided into 100 equal parts and each part is called a minute. One minute is denoted by 1'. Second- A minute is divided into 100 equal parts and each part is called a second. One second is denoted by 1".

Important: Centesimal system of angles 1 right angle = 100 grades (100g) 1 grade = 100 minutes (100') 1 minute = 100 seconds (100")

C. Circular Measure In the circular system of measurement, the unit of measurement is the radian. A radian is the angle subtended at the centre of a circle by an arc equal in length to the radius of the circle.

Arc Length The length l of an arc AB of a circle of radius r subtending an angle θ at the centre of the circle is l = rθ. So, the length of one full rotation around the circle (the circumference 2πr) will subtend an angle of 360°, such that 2π radians = 360°.

Important: The radian is a constant angle, i.e. it does not depend upon the radius of the circle from which it is derived.

Problem Solving Trick If D, G and C are respectively the measures of an angle in degrees, grades and radians, then D/90 = G/100 = 2C/π.

The Constant Number π The length of the circumference of a circle always bears a constant ratio to its diameter. Thus the ratio (circumference) : (diameter) is the same for all circles. This constant ratio is always denoted by the Greek letter π, so that π is a real number.
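A short numerical companion to the relations above (the function names are ad hoc, chosen for the demo):

```python
# Degrees, grades (gons) and radians, plus the arc-length relation l = r*theta.
import math

def deg_to_rad(d):   return d * math.pi / 180.0      # 180 deg = pi rad
def grad_to_rad(g):  return g * math.pi / 200.0      # 200 grad = pi rad
def rad_to_deg(c):   return c * 180.0 / math.pi

# D/90 = G/100 = 2C/pi for one and the same angle:
D, G = 45.0, 50.0                  # 45 deg and 50 grad are both half a right angle
assert math.isclose(deg_to_rad(D), grad_to_rad(G)) and math.isclose(D / 90, G / 100)
assert math.isclose(2 * deg_to_rad(D) / math.pi, D / 90)

# Arc length: an arc of length r subtends exactly 1 radian, and a full turn
# (circumference 2*pi*r) subtends 2*pi rad = 360 deg.
r, theta = 3.0, 1.0                # radius 3, angle 1 rad
print(r * theta)                   # l = r * theta = 3.0
print(rad_to_deg(2 * math.pi))     # 360.0
```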
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9422662854194641, "perplexity": 913.1664074248893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594391.21/warc/CC-MAIN-20200119093733-20200119121733-00387.warc.gz"}
http://quant.stackexchange.com/questions/8575/do-some-option-pricing-models-allow-for-misspecification-and-what-does-it-mean
Do some option pricing models allow for misspecification and what does it mean?

This is to some extent a theoretical question and maybe we can work together to produce some input and output. Diverse option pricing models are reported to be misspecified in various studies. One example is the paper of Bakshi et al. (1997) called "Empirical performance of alternative option pricing models". The authors come to this conclusion by estimating the implied volatilities for a full dataset, and then re-estimating these implied volatilities for six subsets based on the moneyness-maturity categories. They find differences between the values of the implied volatilities. I quote: "if each candidate option pricing model were correctly specified, the six sets of option prices, formed across either moneyness or maturity, should not have resulted in different implied parameter volatility values nor should the “implied-parameter matrix” treatment have led to any performance improvement."

My first question is, what does misspecified actually mean? Isn't this difference between the implied parameters due to the presence of the volatility smile; in that case, one should say that the models are not misspecified, but that this is a result due to the data.

Secondly, how do some models allow for this misspecification? If, for instance, a specific model is misspecified during a particular period, it is imaginable that it produces a smaller pricing error during a different sub-period. One example I heard is the GARCH option pricing model; a constant GARCH model is nested within the GARCH framework, so that it allows for misspecification. I don't entirely understand this concept, so maybe someone can help me out? Thank you.

- You are saying that "Diverse option pricing models are reported to be misspecified in various studies." Then you ask what it means...I am confused. And I feel you need to supply a lot more information to have readers understand what is under discussion: How do the authors "estimate" the implied volatilities (from what data/model), re-estimate by changing what, what are those "6 sets of option prices". I am afraid the reader cannot extrapolate from the little information in the quote provided. –  Matt Wolf Jul 26 '13 at 14:28
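A toy version of the bucket test described in the question makes the terminology concrete: if option prices come from a market with a volatility skew, backing out a constant-volatility Black–Scholes "implied parameter" separately per moneyness bucket returns different values in each bucket, which is exactly the symptom the quoted passage calls misspecification. All numbers below are invented for illustration and have nothing to do with the Bakshi et al. dataset.

```python
# Bucket-wise implied vols from prices generated with a strike-dependent vol.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-4, 5.0)

S, T, r = 100.0, 0.5, 0.01
strikes = np.arange(70, 131, 5)
true_vol = 0.20 + 0.002 * (100.0 - strikes) / 5.0        # a simple linear skew
prices = [bs_call(S, K, T, r, v) for K, v in zip(strikes, true_vol)]

# "Implied parameter" per moneyness bucket = average implied vol in the bucket.
for lo, hi in [(70, 90), (90, 110), (110, 130)]:
    ivs = [implied_vol(p, S, K, T, r)
           for K, p in zip(strikes, prices) if lo <= K < hi]
    print(f"bucket K in [{lo},{hi}): mean implied vol = {np.mean(ivs):.4f}")
```

A correctly specified model would reproduce (essentially) the same parameter in every bucket; the spread across buckets is the diagnostic used in the quoted study.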
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9161821007728577, "perplexity": 917.11963745157}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098468.93/warc/CC-MAIN-20150627031818-00219-ip-10-179-60-89.ec2.internal.warc.gz"}
https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.112.161303
# Synopsis: Sterile Neutrino as Dark Matter Candidate

A dark matter particle in the form of a noninteracting neutrino could explain the recent detection of an x-ray emission line from galaxy clusters.

A hypothetical neutrino that does not interact through the weak force could be the source of a recently detected x-ray emission line coming from galaxy clusters. However, previous models using this so-called “sterile” neutrino as a form of dark matter were not able to satisfy constraints from cosmological observations. Now, writing in Physical Review Letters, Kevork Abazajian of the University of California, Irvine, shows that a sterile neutrino with a mass of $7$ kilo-electron-volts (keV) could be a viable dark matter candidate that both explains the new x-ray data and solves some long-standing problems in galaxy structure formation.

Cosmologists have long considered neutrinos as possible dark matter particles. However, because of their small mass (less than about $1$ eV), conventional neutrinos are too fast, or “hot,” to form the dense dark matter structures needed to hold galaxies and galaxy clusters together. By contrast, sterile neutrinos, which result from certain neutrino theories, can have larger masses and could have been naturally produced in the big bang by neutrino flavor mixing. The problem has been that sterile neutrinos should decay, producing an x-ray signal that no one has observed—until maybe now. Earlier in 2014, an analysis of galaxy cluster data revealed an x-ray emission line, which is consistent with the decay of a $7$-keV sterile neutrino. Normally, dark matter with this mass would be too “warm” to match galaxy data. However, Abazajian showed that the sterile neutrinos could have a “cooler” momentum distribution if they were produced through resonantly enhanced neutrino flavor mixing (the MSW effect). When Abazajian plugged this neutrino into a cosmological model, he found it could explain both the small number of Milky Way satellite galaxies and their central densities, which have eluded the currently favored cold dark matter model.

– Michael Schirber
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8959102630615234, "perplexity": 1451.5617077112222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583680452.20/warc/CC-MAIN-20190119180834-20190119202834-00570.warc.gz"}
http://qatestingblog.com/types-of-xpath/
# Types of XPath

1. Absolute XPath

An absolute XPath starts from the root of the HTML page. Absolute XPaths are not advisable most of the time, for the following reasons:

1. Absolute XPaths are lengthier and hence less readable
2. Absolute XPaths are brittle: minor structural changes to the web page break them

An absolute XPath should be used only when a relative XPath cannot be constructed (highly unlikely). Absolute XPaths tend to break as the page content or structure changes, so they are not recommended in Selenium.

Syntax: Absolute XPaths start with  /html

Example: /html/body/div[1]/div/div[2]/form/div[2]/input

2. Relative XPath

A relative XPath locates an element with respect to a known element; the element of your choice is referred to relative to that known element.

Syntax: Relative XPaths start with two forward slashes ‘//’.

Example: //div[@id=’divUsername’]/input

Note: Absolute XPaths are faster than relative XPaths

3. Exact XPath

Locating elements using their attributes, values and the inner text of the elements.
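A minimal, hypothetical page makes the contrast concrete. The HTML and the ids below are invented for the demo, and lxml is used only as a convenient XPath engine (any XPath evaluator, including Selenium's locators, behaves the same way):

```python
from lxml import html

page = html.fromstring("""
<html><body>
  <div id="divUsername"><input name="user" value="alice"/></div>
  <div class="row"><span>Sign in</span></div>
</body></html>""")

absolute = page.xpath("/html/body/div[1]/input")           # breaks if layout shifts
relative = page.xpath("//div[@id='divUsername']/input")    # anchored on a known id
exact    = page.xpath("//span[text()='Sign in']")          # match by inner text

print([e.get("name") for e in absolute],
      [e.get("name") for e in relative],
      [e.text for e in exact])
# ['user'] ['user'] ['Sign in']
```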
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.921970009803772, "perplexity": 3419.575900724945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578534596.13/warc/CC-MAIN-20190422035654-20190422061654-00169.warc.gz"}
https://www.arxiv-vanity.com/papers/1904.12115/
# Direct capture cross section of 9Be(n,γ)10Be

Peter Mohr Diakonie-Klinikum, Schwäbisch Hall D-74523, Germany Institute for Nuclear Research (Atomki), Debrecen H-4001, Hungary

April 12, 2022

###### Abstract

The cross section of the 9Be(n,γ)10Be reaction was calculated in the direct capture model. All parameters of the calculations were adjusted to properties of the 9Be + n system at thermal energies. The calculated cross section at thermonuclear energies shows the expected behavior of s-wave capture at low energies, but increases towards higher energies as typical for p-wave capture. Excellent agreement between new experimental data in the astrophysically relevant energy region and the present calculation is found.

## I Introduction

In a recent study the 9Be(n,γ)10Be reaction was investigated at thermal and stellar energies Wallner et al. (2019). The main aim of that study was the measurement of the cross section at energies in the keV region which is essential to determine the astrophysical reaction rate at the high temperatures which can be found during core-collapse supernova explosions. Here the 9Be(n,γ)10Be reaction may play an important role in the so-called α-process under neutron-rich conditions Wallner et al. (2019). In general, the formation of 12C from nucleons and α particles is hindered by the gaps of stable nuclei at masses A = 5 and A = 8 which have to be bypassed by three-particle reactions. Depending on the α and neutron densities in the astrophysical environment, the triple-alpha process may be supplemented by the ααn or αnn reactions which both proceed via 9Be, either directly produced in ααn or indirectly in αnn and the subsequent 6He(α,n)9Be reaction. Then 12C can be formed from the 9Be(α,n)12C reaction; however, 9Be can also be detracted from the 12C formation by either the 9Be(n,γ)10Be or 9Be(γ,n)8Be reactions (the latter becoming only relevant at high temperatures). The neutron-rich bypasses to the triple-alpha process occur in the α-process in core-collapse supernovae. The onset of the α-process is discussed in detail in Woosley and Hoffman (1992), and further information on the relevance of the different three-body processes is given in Görres et al. (1995); Bartlett et al. (2006).

Experimental data for the 9Be(n,γ)10Be reaction in the keV region are very sparse. The resonance properties of the lowest resonance in 9Be(n,γ)10Be have been studied by Kitazawa et al. Kitazawa et al. (1994), and three data points with relatively large error bars are provided by Shibata in an unpublished thesis made in the same group Shibata (1992). This gap is filled now by the new experimental data of Wallner et al. Wallner et al. (2019).

A very brief theoretical analysis of the new experimental data in the direct capture model is also given in Wallner et al. (2019), and it is concluded that the p-wave contribution had to be scaled down by about 30% to fit the new experimental data. It is the scope of the present study to provide a more detailed analysis of the direct capture process in the 9Be(n,γ)10Be reaction. It will be shown that the new data in the keV region can be well described if the parameters of the calculation are carefully chosen to reproduce the well-known properties of 9Be + n at thermal energies (i.e., without any additional adjustment of parameters to the new data in the keV region). Furthermore, the contribution of low-lying resonances is re-analyzed, leading to a slightly different reaction rate at very high temperatures. 
Obviously, there is no major change in the astrophysical reaction rate at lower temperatures because finally the calculated p-wave contributions in Wallner et al. (2019) (adjusted to fit the new data in the keV region) and in this study (which fit the keV data without adjustment) are practically identical.

## II The direct capture model

### II.1 Basic considerations

As long as the level density in the compound nucleus (10Be in the present case) is low, resonances play only a minor role, and the capture cross section is dominated by the direct capture (DC) process. Often this is the case for light nuclei, but DC may also be dominant for neutron-rich nuclei, in particular with closed neutron shells, where the low Q-value of neutron capture corresponds to relatively small excitation energies and thus low level densities in the compound nucleus. As a nice example, DC was experimentally confirmed for the 48Ca(n,γ)49Ca reaction Beer et al. (1996), and it was possible to describe the cross section in the keV region after adjustment of the parameters to thermal properties of the 48Ca + n system.

The full DC formalism is given by Kim et al. Kim et al. (1987) and also listed in Beer et al. (1996); Mohr et al. (1998). Basic considerations on DC have already been provided by Lane and Lynn more than 50 years ago Lane and Lynn (1960a, b). The chosen model in Wallner et al. (2019) is based on Mengoni et al. (1995) which contains the same underlying physics with a focus on direct p-wave capture. Here I briefly repeat only the essential features of the DC model; for details, see Beer et al. (1996); Mohr et al. (1998); Mengoni et al. (1995); Kim et al. (1987).

The DC cross sections σ_f^DC scale with the square of the overlap integrals

I = ∫ dr u(r) O^{E1/M1} χ(r)   (1)

where O^{E1/M1} is the electric or magnetic dipole operator; E2 transitions are much weaker than E1 transitions for the light nucleus 10Be and can be neglected for the DC calculations. The u(r) and χ(r) are the bound state wave function and scattering state wave function. These wave functions are calculated from the two-body Schrödinger equation using a nuclear potential without imaginary part because the damping of the wave function in the entrance channel by the small DC cross sections is typically very small Krausmann et al. (1996). Finally, the DC cross section has to be normalized with the spectroscopic factor (C²S)_f to obtain the capture cross section to a final state f:

σ_{γ,f} = (C²S)_f σ_f^DC .   (2)

The total capture cross section is obtained by the sum over all final states f:

σ_γ = ∑_f σ_{γ,f} .   (3)

An essential ingredient for the DC model is the nuclear potential for the calculation of the wave functions u(r) and χ(r). In the present work, a folding potential was used:

V(r) = λ V_F(r)   (4)

with the strength parameter λ of the order of unity. For details of the folding potential, see Beer et al. (1996); Mohr et al. (1998). The advantage of the folding potential is that only one parameter, namely the strength λ, has to be adjusted which reduces the available parameter space significantly (compared to the widely used Woods-Saxon potentials with three parameters).

### II.2 Adjustment of the potential

For the calculation of bound state wave functions u(r), the potential strength λ is adjusted to the binding energy of the respective state to ensure the correct asymptotic shape of u(r). Thus, the only parameter λ of the potential is fixed for each final state f, and all wave functions can be calculated without further adjustment of parameters (see Table 1). 
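A schematic numerical version of Eqs. (1) and (2) may help to see what enters the calculation. The radial functions below are toy stand-ins (an exponential bound-state tail and a sine-like scattering wave with arbitrarily chosen parameters), not the folding-potential solutions of this paper; only the structure of the overlap integral and of the C²S normalization is illustrated.

```python
# Toy quadrature for a DC-like matrix element, Eqs. (1)-(2).
import numpy as np

r = np.linspace(1e-3, 60.0, 3000)         # radial grid in fm
dr = r[1] - r[0]
kappa = 0.55                              # fm^-1, toy bound-state decay constant
k = 0.05                                  # fm^-1, toy scattering wave number

u_f  = np.exp(-kappa * r)                 # bound-state tail u(r)
chi  = np.sin(k * r)                      # scattering wave chi(r), s-wave-like
O_E1 = r                                  # radial part of an E1-type operator

I = float(np.sum(u_f * O_E1 * chi) * dr)  # Eq. (1): I = int dr u(r) O chi(r)
C2S = 0.5                                 # toy spectroscopic factor
sigma_rel = C2S * I**2                    # Eq. (2), up to kinematic prefactors

print(I, sigma_rel)
```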
The scattering wave function χ(r) for the s-wave with angular momentum L = 0 has to reproduce the thermal scattering length. From the bound coherent and incoherent scattering lengths of Sears (1992) it turns out that the free scattering lengths a₊ and a₋ for the two channel spins are almost identical, and thus for simplicity a weighted average was used for all scattering s-waves instead of a₊ and a₋. Note that a₊ and a₋ result from the coupling of the neutron spin s = 1/2, the spin J = 3/2 of the 9Be ground state, and the angular momentum L = 0. The resulting very minor variations of the potential strength λ (within about 1%) do practically not affect the calculated DC cross sections.

The adjustment of the potential strength λ for the scattering p-wave is more complicated because the thermal scattering lengths are related to s-wave scattering only. As an alternative, the same procedure as for the bound states was applied. Parameters were determined by adjustment to all bound and quasi-bound states in 10Be where L = 1 transfer was clearly assigned in the 9Be(d,p)10Be or 9Be(α,3He)10Be reactions Tilley et al. (2004). From the average of all states one finds a significantly lower strength λ for the p-wave, compared to the s-wave. Similar to the s-wave, the same value of λ was used for both channel spins.

### ii.3 Adjustment of spectroscopic factors

Spectroscopic factors are required for neutron transfer to the p, s, and d shells. As the potential is well-constrained for the incoming s-wave at thermal energies, spectroscopic factors can be derived from the thermal neutron capture cross section of 9Be using Eq. (2). The thermal neutron capture cross section has been determined in several experiments, and the results are in excellent agreement. I adopt the weighted average of the values of Firestone and Revay (2016), Conneely et al. (1986), and the new experiment Wallner et al. (2019). The branching ratios to the individual final states in 10Be are also taken from the recent experiment by Firestone and Revay Firestone and Revay (2016).

For the bound states populated by s-wave capture, contributions of the transfers to the p3/2 and p1/2 shells have to be added. However, this can be simplified because the s-wave capture scales approximately with the summed spectroscopic strength for any combination of p3/2 and p1/2 transfer. As long as a proper adjustment to the capture cross section is made at thermal energies, the s-wave capture in the keV region must also be reproduced. Therefore, an effective spectroscopic factor is listed in Table 1 which takes into account only the transfer to the p3/2 shell; contributions of the p1/2 transfer are neglected.

The adjustment of the effective spectroscopic factors to the thermal capture cross section is fortunately possible also for the 1⁻ state at 5.96 MeV because a weak M1 transition to this state was detected in Firestone and Revay (2016). Only for the 2⁻ state at 6.26 MeV an adjustment from the thermal capture cross section is not possible because no primary γ-ray could be detected. Consequently, the spectroscopic factor for this state had to be fixed in a different way. For that purpose a procedure was used which relates the thermal scattering lengths to the spectroscopic factors of subthreshold s-wave states Mohr et al. (1997). As the adjusted spectroscopic factor for the neighboring 1⁻ state from Eq. (2) is about 35% lower than from the procedure of Mohr et al. (1997), the same reduction factor was applied for the unknown spectroscopic factor of the 2⁻ state (see Table 1). This value is roughly consistent with the estimate which can be derived, with huge uncertainties, from a weak secondary γ-ray in thermal neutron capture after correction for feeding Firestone and Revay (2016).
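In practice, this adjustment of the effective spectroscopic factors is a per-state rescaling of the calculated DC cross section to the measured thermal cross section, following Eq. (2). A schematic sketch is given below; all numbers are placeholders rather than the adopted thermal cross section, branchings, or Table 1 values (only the roughly 11% branching to the first excited state mentioned in the text is echoed here).

```python
# Schematic only: effective spectroscopic factors from the thermal capture
# cross section via Eq. (2):  sigma_gamma,f = (C^2 S)_f * sigma_DC,f.
# All numbers are placeholders, not the adopted values of Table 1.
sigma_thermal = 8.3                      # adopted thermal cross section (mb), placeholder
branching = {"0+ g.s.":      0.68,       # thermal branching ratios (placeholders;
             "2+ 3.37 MeV":  0.11,       #  ~11% to the first excited state as in the text)
             "2+ 5.96 MeV":  0.21}
sigma_dc  = {"0+ g.s.":      9.0,        # calculated DC cross sections at thermal
             "2+ 3.37 MeV":  0.5,        #  energy (mb), placeholders
             "2+ 5.96 MeV":  2.0}

c2s_eff = {s: branching[s] * sigma_thermal / sigma_dc[s] for s in branching}
for state, value in c2s_eff.items():
    print(f"{state:12s}  C2S_eff = {value:.2f}")
```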
A comparison of the effective spectroscopic factors in Table 1 to spectroscopic factors from transfer reactions like 9Be(d,p)10Be is not straightforward. First, the effective spectroscopic factors of this study are calculated for the transfer to the p3/2 shell only, which simplifies the present calculations (see discussion above) but complicates the comparison to data from transfer reactions. Second, spectroscopic factors from transfer depend on the chosen parameters of the underlying calculations of the reaction cross sections Mukhamedzhanov et al. (2008), which are typically based on the distorted wave Born approximation (DWBA). This is reflected by wide variations of the spectroscopic factors from (d,p), (α,3He), and (7Li,6Li) transfer. In some cases there is even disagreement on the transferred angular momentum L. The generally poor agreement of the spectroscopic factors from different transfer reactions is explicitly stated in the compilation of Tilley et al. Tilley et al. (2004). Third, the two levels around 5.96 MeV in 10Be cannot be resolved easily in transfer experiments. Therefore, I restrict myself here to listing the adopted spectroscopic factors from different transfer reactions in Table 1 (as compiled in the ENSDF database ENS or given in Tilley et al. Tilley et al. (2004)). The only noticeable peculiarity is the deviation for the first excited state in 10Be between the huge spectroscopic factor from the thermal (n,γ) cross section and the values from different transfer reactions. The thermal branching to this 2⁺ state at 3.37 MeV is moderate with about 11%, but well-defined Firestone and Revay (2016), and thus its spectroscopic factor is well-constrained in the present approach. A more detailed discussion of spectroscopic factors is omitted because of the significant uncertainties of the values from the different transfer reactions.

## Iii Results and discussion

After the adjustment of the potential in Sec. II.2 and of the spectroscopic factors in Sec. II.3, all parameters for the DC calculations are now completely fixed. The DC cross sections for s-wave and p-wave capture can now be calculated without any further adjustment of parameters. The results are shown in Fig. 1. As usual, s-wave capture decreases with energy roughly as 1/v, whereas p-wave capture increases roughly with √E. A transition from the 1/v to the √E behavior is found at several tens of keV. This is a typical result for light nuclei at the upper end of the p-shell like 12C Ohsaki et al. (1994) and in the sd-shell (e.g., 16O Igashira et al. (1995); Mohr et al. (2016) and Mg Mohr et al. (1998)).

Important ingredients of the DC calculations like wave functions and overlaps are further illustrated in Fig. 2. Both bound state wave functions (shown in the upper part a on a logarithmic scale and in the middle part b on a linear scale) of the ground state and the 2⁺ excited state at 5.96 MeV correspond to L = 1 and thus mainly differ in the exterior, which is determined by the binding energies of both states. Contrary to this, the 1⁻ state at 5.96 MeV has L = 0 and one node in the interior. In the exterior, the 2⁺ and 1⁻ wave functions show the same slope because of the almost identical binding energies. The resulting integrand of the overlap integral in Eq. (1) is shown in the lower part c of Fig. 2 for a chosen energy in the keV region. Obviously, the main contributions for the capture to the ground state come from the nuclear interior and surface at relatively small radii (a few fm). Because of the smaller binding energies of the 2⁺ and 1⁻ final states, the main contributions for the transitions to these states appear in the nuclear exterior at radii beyond about 10 fm.
Nevertheless, for all transitions noticeable cancellation effects are found between the positive and the negative areas of the integrands in Fig. 2 (part c). A similar observation has already been made in an earlier study of direct neutron capture at thermal energies Lynn et al. (1987), which is based on the model described in Raman et al. (1985).

The DC calculation of s-wave and p-wave capture is complemented by the contributions of the four lowest known resonances, which correspond to the states in 10Be at 7.371 MeV, 7.542 MeV, 9.27 MeV, and 9.56 MeV. The properties of the resonances are listed in Table 2. For the calculation of the resonance cross sections the approximation Γ ≈ Γ_n was used because it is known that Γ_γ ≪ Γ_n for these states Tilley et al. (2004). The radiation width of the lowest resonance was determined experimentally by Kitazawa et al. Kitazawa et al. (1994). This resonance decays by E1 transitions to the first excited state in 10Be (corresponding to a noticeable strength of 31 mW.u.) and to the second excited state (corresponding to 124 mW.u.). If one assumes the same average Weisskopf units for the E1 transitions in the decay of the next resonance, the 2⁺ state at 7.542 MeV, one ends up with a smaller radiation width because E1 transitions can only lead to odd-parity states around 6 MeV and thus correspond to relatively low transition energies. Because of the high transition energy of the E2 transition to the ground state, almost the same radiation width for the E2 transition can be estimated using a typical strength of about 5 W.u. for E2 transitions in this mass region Endt (1993). This leads to an overall radiation width which is significantly lower than assumed by Wallner et al., who use the same radiation width as for the lowest resonance. Assuming the same strengths of 75 mW.u. for E1 and 5 W.u. for E2 transitions, the 9.27 MeV resonance has only a tiny radiation width (of the order of meV), which results from the E2 transition to the 2⁻ state at 6.26 MeV. Additional γ-transitions may occur to the levels in 10Be above the neutron threshold with larger strength (e.g., for the M1 transition to the state at 7.371 MeV); however, the final state of this transition decays preferentially by neutron emission and thus does not contribute to 10Be production. A large radiation width is found for the state at 9.56 MeV because of strong E2 transitions to low-lying states in 10Be. However, this resonance is located almost 3 MeV above the neutron threshold and thus contributes to the astrophysical reaction rate only at very high temperatures.

Interference effects between the resonances are not taken into account in the present study because no experimental information is available. However, it can be estimated that interference effects will be minor because the dominating d-wave resonance does not interfere with the dominating p-wave DC contributions. For completeness it has to be noted that the two d-wave resonances contain a significant amount of the total d-wave strength. As these resonances are taken into account explicitly, an additional calculation of the d-wave contribution of the DC cross section would double-count this strength, and thus the d-wave contribution of the DC cross section is intentionally omitted. The folding potential for the s-waves contains two bound states close below the neutron threshold (see Table 1). Assuming the same potential for the d-wave automatically leads to the appearance of d-wave resonances at low energies, which are the theoretical counterparts of the experimentally observed d-wave resonances (see Table 2).
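The resonance contributions just described enter the calculation through a cross section of single-level Breit-Wigner form. The sketch below is purely illustrative: the widths and spin values are placeholders, not the adopted numbers of Table 2, and the approximation Γ ≈ Γ_n discussed above is built in through the total width.

```python
# Schematic single-level Breit-Wigner capture cross section for n + 9Be.
# The widths and spin below are placeholders, not the values of Table 2.
import numpy as np

HBARC = 197.327            # MeV fm
AMU   = 931.494            # MeV
MU    = AMU * 9.0 / 10.0   # reduced mass of n + 9Be (approximate)

def sigma_bw(E, E_r, Gamma_n, Gamma_g, J, I=1.5):
    """Breit-Wigner capture cross section in barn; E, E_r, widths in MeV (CM system)."""
    k2 = 2.0 * MU * E / HBARC ** 2              # wave number squared, fm^-2
    g = (2 * J + 1) / (2.0 * (2 * I + 1))       # spin statistical factor for n + 9Be
    Gamma = Gamma_n + Gamma_g                   # total width, dominated by Gamma_n
    bw = Gamma_n * Gamma_g / ((E - E_r) ** 2 + 0.25 * Gamma ** 2)
    return np.pi / k2 * g * bw * 0.01           # 1 barn = 100 fm^2

# peak cross section of the lowest resonance (E_r = 0.559 MeV); placeholder widths and spin
print(sigma_bw(0.559, 0.559, 0.2, 1.6e-7, J=3))
```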
Overall, the agreement between the calculated total cross section and the new experimental data Wallner et al. (2019) is very good, with a small χ² per data point. The dominating contribution to χ² comes from the upper data point at 473 keV, for which Wallner et al. (2019) report an energy-averaged cross section. The calculated cross section at exactly 473 keV is 6.97 μb. Averaging the calculated cross section over the experimental energy distribution of the neutrons (see Fig. 3 of Wallner et al. (2019)) leads to 7.18 μb, which deviates by only 1.2σ from the experimental result. The increase from 6.97 μb to 7.18 μb results from the higher calculated cross sections at the upper end of the experimental neutron energy interval. As a consequence, χ² per point approaches 1.0 in this case. Including the Shibata points Shibata (1992) reduces the deviation per point even further. It has to be repeated that the present calculation has been made completely independently, without any adjustment to the new experimental data points in the keV region.

## Iv Astrophysical reaction rate

The astrophysical reaction rate N_A⟨σv⟩ was calculated by numerical integration of the cross sections in Sec. III. A narrow energy grid from 1 to 4000 keV was used to cover the full temperature range shown in Fig. 3. Because of the relatively high first excited state in 9Be, no stellar enhancement factor was used (as also suggested in the KADoNiS database Dillmann et al. (2006)). The result is shown in Fig. 3. At low temperatures below a few keV this energy grid is not sufficient. Therefore, the calculation of the cross section was repeated in 10 eV steps from 10 eV to 50 keV. With these settings a constant rate for the s-wave capture was found down to the lowest temperatures in Fig. 3, which confirms that the numerical treatment is stable.

The s-wave capture dominates the low-temperature region, whereas at higher temperatures p-wave capture becomes the major contributor. At even higher temperatures the resonance contributions become comparable to the p-wave capture; they result mainly from the lowest resonance at 559 keV. As expected, the present rate is in very good agreement with the rate by Wallner et al. Wallner et al. (2019) because their DC calculation was adjusted to their new experimental data (whereas the present calculation reproduces the new experimental data without adjustment). The only significant difference appears at relatively high temperatures and results from the lower resonance strength of the lowest resonance in the present study (see Table 2 and discussion in Sec. III). At the highest temperature in Fig. 3 the present rate becomes similar to the Wallner rate again because the lower strength of the lowest resonance is compensated by the additional resonances at higher energies, which were not taken into account in Wallner et al. (2019).

Fig. 3 also includes the recommended rate of the KADoNiS database Dillmann et al. (2006) (version 1.0), which was derived from preliminary data of Wallner et al. and thus can be recommended for astrophysical calculations. The REACLIB database Cyburt et al. (2010) also recommends the KADoNiS rate. However, STARLIB Sallaska et al. (2013) contains a theoretical rate which is based on the statistical model. This theoretical rate exceeds the recommended rate by far at low temperatures and shows a completely different temperature dependence (see Fig. 3). Such a discrepancy is not very surprising because the statistical model is inappropriate for such light nuclei.
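The numerical integration mentioned above is the standard Maxwellian average of the cross section. A minimal sketch of such an integration over a tabulated cross section is given below; the placeholder cross-section shape only mimics the 1/v-plus-√E behavior of the calculated s- and p-wave components and is not the actual result of this work. As in the text, the lowest temperatures would require a finer grid at small energies for the 1/v part.

```python
# Minimal sketch of the Maxwellian-averaged rate
#   N_A<sigma v>(T) = N_A * sqrt(8/(pi*mu)) * (kT)^(-3/2) * Int sigma(E) E exp(-E/kT) dE
# from a cross section tabulated on an energy grid.  Placeholder shape only.
import numpy as np

N_A   = 6.02214076e23      # 1/mol
K_MEV = 8.617333e-2        # MeV per T9
MEV   = 1.602176634e-6     # erg per MeV
AMU_G = 1.66053907e-24     # g
MU_G  = AMU_G * 9.0 / 10.0 # reduced mass of n + 9Be (approximate), in g

E = np.linspace(1e-3, 4.0, 40000)                             # MeV (1 keV ... 4 MeV)
sigma_cm2 = 1e-24 * 1e-5 * (0.1 / np.sqrt(E) + np.sqrt(E))    # placeholder shape, cm^2

def rate(T9):
    """Maxwellian-averaged N_A<sigma v> in cm^3 s^-1 mol^-1."""
    kT = K_MEV * T9                                  # MeV
    integrand = sigma_cm2 * E * np.exp(-E / kT)      # cm^2 MeV
    integral = np.sum(integrand) * (E[1] - E[0])     # cm^2 MeV^2 (rectangle rule)
    pref = np.sqrt(8.0 / (np.pi * MU_G)) * (kT * MEV) ** -1.5 * MEV ** 2
    return N_A * pref * integral

for T9 in (0.1, 1.0, 5.0):
    print(T9, rate(T9))
```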
A comparison of the new capture data to different libraries for neutron cross sections was already given in Wallner et al. (2019) and is omitted here.

The astrophysical reaction rate N_A⟨σv⟩ was fitted using the same parametrization as in Eq. (7) of Wallner et al. (2019):

N_A⟨σv⟩ [cm³ s⁻¹ mol⁻¹] = a₀ (1.0 + a₁ T₉^(1/2) + a₂ T₉ + a₃ T₉^(3/2) + a₄ T₉² + a₅ T₉^(5/2)) + a₆ T₉^(−3/2) exp(−b₀/T₉)   (5)

The parameters aᵢ and b₀ are listed in Table 3. The deviation of the fitted rate is below 1% over the full temperature range.

## V Conclusions

The cross section of the 9Be(n,γ)10Be reaction was calculated in the direct capture model. All parameters of the calculations could be adjusted to thermal properties of the 9Be + n system, and therefore the calculation of the capture cross sections in the astrophysically relevant keV region is completely free of any adjustments. The calculated cross sections agree very well with the recently published experimental results by Wallner et al. Wallner et al. (2019) and also with earlier unpublished data by Shibata Shibata (1992). The astrophysical reaction rate of the KADoNiS database, which is based on a preliminary analysis of the Wallner et al. data, is essentially confirmed. REACLIB also recommends the KADoNiS rate. However, the reaction rate of STARLIB should not be used because it is based on a statistical model calculation which overestimates the experimental data significantly.

###### Acknowledgements.

I thank A. Wallner for encouraging discussions. This work was supported by NKFIH (K108459 and K120666).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9480420351028442, "perplexity": 846.686144325915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00454.warc.gz"}
https://www.physicsforums.com/threads/abstract-algebra.183314/
# Abstract algebra

1. Sep 6, 2007

### quasar987

1. The problem statement, all variables and given/known data

I am asked to show that if E is a semi-group and if (i) there is a left identity in E, (ii) there is a left inverse to every element of E, then E is a group.

3. The attempt at a solution

Well I can't seem to find the solution, but it's very easy if one of the two "left" above is replaced by a "right". For instance, if we replace the existence of a left inverse condition by the existence of a right inverse, then we find that the left identity is also a right identity like so: Let a,b be in E. Then ab=a(eb)=(ae)b ==> a=ae (by multiplying by bˉ¹ from the right). So e is a right identity also. Then it follows that every right inverse is also a left inverse: aaˉ¹=e ==> (aaˉ¹)a=ea ==> a(aˉ¹a)=a ==> (aˉ¹a)=e.

So, does anyone know for a fact that this question contains or does not contain a typo?

2. Sep 6, 2007

### quasar987

No, this only means that ae=ae.

3. Sep 6, 2007

### Hurkyl, Staff Emeritus

Well, I can prove that all of the left inverses of e are, in fact, equal to e. So I'm making progress.

4. Sep 6, 2007

### Hurkyl, Staff Emeritus

Also, x * (left inverse of x) = (left identity). I think that the group structure follows from these facts. So I have at least as much confidence in the original problem as I do that I didn't make a mistake. (I'm not saying how much confidence that is.)

5. Sep 6, 2007

### Dick

I'm having problems with this as well, but then I'm tired. I would suggest, though, that if you really think it's wrong you start trying to construct a counterexample. If you can't construct a counterexample then the effort may teach you what you need to do.

Last edited: Sep 6, 2007

6. Sep 6, 2007

### d_leet

I know that I've done this proof before, and as I recall you need to somehow use the two facts in conjunction, and remember that if y is the left inverse of x, then y also has a left inverse, say z... I remember having to use this somehow, but my efforts on this tonight are not going well.

Edit: With a fair amount of work I managed to prove that every left inverse is also a right inverse, and from that I think it follows a little more easily that the left identity is also a right identity.

Edit 2: It actually follows almost trivially from the fact that every left inverse is also a right inverse that the left identity is also the right identity.

Last edited: Sep 6, 2007

7. Sep 6, 2007

### Hurkyl, Staff Emeritus

Basically, I tried writing lots of expressions that could be simplified in multiple ways to derive new properties. For example, I first wondered how left inverses of the identity (and their left inverses, etc.) behaved, then I started worrying about how inverses of general elements behaved. Incidentally, I did get started by searching for a counterexample; I decided to let 0 be the identity, 1 a left inverse of 0, 2 a left inverse of 1, and so forth, then I tried to compute how multiplication had to behave.

8. Sep 7, 2007

### Timo

I think I got it. According to my calculation, the statement is true as stated. But since I'm not a mathematician and did in fact not even know the terms left-inverse and left-identity before tackling the problem (I found left-inverse in an algebra book, didn't find left-identity) I need some sanity-check: I assumed the two conditions mean that: $$\exists \, 1_L \in E : \forall a \in E : 1_L \, a = a$$ (i) and $$\forall \, a \in E: \, \exists \, a_L \in E: a_L \, a = 1_L$$ (ii). Is this translation of the two conditions correct?
Last edited: Sep 7, 2007

9. Sep 7, 2007

### matt grime

It is, though it is always preferable not to write things in logical forms like that since they are unnecessarily opaque.

10. Sep 7, 2007

### Mr.Brown

i guess this is pretty easy: i would go like this: From knowing E is a semi-group you have associativity! And i know: e*a = a, for e being a left identity, and a^-1 * a = e, where a^-1 is a left inverse. Now i multiply a from the left and get: a*a^(-1)*a = a. From using associativity i get: (a*a^(-1))*a = a*(a^(-1)*a) = a (1). Hence, by using both assumptions, the first part of (1) implies that a*a^(-1) = e -> every left inverse is a right inverse if associativity holds! The second part implies, since we assumed a^(-1)*a = e, that every left identity is a right identity. QED

11. Sep 7, 2007

### Timo

I don't completely understand what you said Mr. Brown. Most notably, I don't get the step which seems to imply that a*e=a.

12. Sep 7, 2007

### quasar987

You went a little too fast. Multiplying from the left by a gives a*a^(-1)*a = ae, but ae is not known to be a, for e is only a left identity.

13. Sep 7, 2007

### quasar987

Let me try something here. (Assuming the problem is stated correctly.) If I show that (aˉ¹)ˉ¹=a, then this will mean that e=(aˉ¹)ˉ¹(aˉ¹)=aaˉ¹, meaning left inverses are also right inverses. Let's begin the random manipulations :)

(aˉ¹)ˉ¹(aˉ¹)=e ==> (aˉ¹)ˉ¹(aˉ¹)a=ea ==> (aˉ¹)ˉ¹e=a ==> aˉ¹(aˉ¹)ˉ¹e=aˉ¹a ==> aˉ¹(aˉ¹)ˉ¹e=e ==> aˉ¹(aˉ¹)ˉ¹=e ==> aˉ¹=((aˉ¹)ˉ¹)ˉ¹.

If every element can be seen as the left inverse of another, then I have succeeded. But is this implied? Gotta go.

14. Sep 7, 2007

### Timo

ae=a strikes back. I don't think it's a good idea to label the left-inverse and left-identity $$a^{-1}$$ and e/1. That nomenclature imho invites stupid mistakes, because you usually label real inverses and identities with these symbols. It might differ from person to person, but I chose different names for exactly the reason that I screwed up too many steps otherwise.

Last edited: Sep 7, 2007

15. Sep 7, 2007

### Hurkyl, Staff Emeritus

How'd you manage that? You have neither proven that e is a right identity, nor that e has a unique left inverse, nor anything else I've noticed that would allow you to conclude that.

16. Sep 7, 2007

### quasar987

I accepted w/o proof that

17. Sep 7, 2007

### learningphysics

I was stuck trying to figure out this same problem recently. I looked it up online... I can't find the link right now. But the trick is to first prove: if x*x = x, then x = e, for any element x. This is simple:

x*x = x
x^-1 * x * x = x^-1 * x (left multiply both sides by x^-1)
e * x = e
x = e

Then you can show that every left inverse is a right inverse:

x*x^-1 = x * (e * x^-1)
= x * (x^-1 * x) * x^-1 (write out e as x^-1*x)
= (x * x^-1) * (x * x^-1)

So using the previous result we know that x * x^-1 = e, so the right inverse part is proven.

So to prove e is a right identity:

x * e = x * (x^-1 * x) = (x * x^-1) * x = e * x = x

Then you can also prove that e is the unique left identity and unique right identity.

18. Sep 7, 2007

### quasar987

Cheers!
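The counterexample hunt that Dick and Hurkyl mention can be automated for very small sets. The sketch below (not from the thread; purely illustrative) enumerates all binary operations on {0, ..., n-1}, keeps the associative ones that have a left identity and left inverses, and checks whether any of them fails to be a group. For n = 2 and n = 3 none does, in line with the result proved above; larger n quickly becomes infeasible by brute force.

```python
# Exhaustive check on small Cayley tables: does any associative operation on
# {0,...,n-1} with a left identity and left inverses fail to be a group?
# (Feasible only for very small n; n = 3 already means 3**9 tables.)
from itertools import product

def is_associative(t, n):
    return all(t[t[a][b]][c] == t[a][t[b][c]]
               for a in range(n) for b in range(n) for c in range(n))

def left_identities(t, n):
    return [e for e in range(n) if all(t[e][a] == a for a in range(n))]

def has_left_inverses(t, n, e):
    return all(any(t[b][a] == e for b in range(n)) for a in range(n))

def is_group(t, n, e):
    right_id = all(t[a][e] == a for a in range(n))
    two_sided_inv = all(any(t[a][b] == e and t[b][a] == e for b in range(n))
                        for a in range(n))
    return right_id and two_sided_inv

def counterexample(n):
    for flat in product(range(n), repeat=n * n):
        t = [flat[i * n:(i + 1) * n] for i in range(n)]   # n x n Cayley table
        if not is_associative(t, n):
            continue
        for e in left_identities(t, n):
            if has_left_inverses(t, n, e) and not is_group(t, n, e):
                return t
    return None

print(counterexample(2), counterexample(3))   # None None: no counterexample exists
```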
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9578505158424377, "perplexity": 1234.4240856204976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718840.18/warc/CC-MAIN-20161020183838-00353-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.groundai.com/project/robin-problems-with-indefinite-linear-part-and-competition-phenomena/
# Robin problems with indefinite linear part and competition phenomena

N.S. Papageorgiou, V.D. Rădulescu and D.D. Repovš

###### Abstract

We consider a parametric semilinear Robin problem driven by the Laplacian plus an indefinite potential. The reaction term involves competing nonlinearities. More precisely, it is the sum of a parametric sublinear (concave) term and a superlinear (convex) term. The superlinearity is not expressed via the Ambrosetti-Rabinowitz condition. Instead, a more general hypothesis is used. We prove a bifurcation-type theorem describing the set of positive solutions as the parameter λ varies. We also show the existence of a minimal positive solution and determine the monotonicity and continuity properties of the corresponding solution map.

Keywords: indefinite potential, Robin boundary condition, strong maximum principle, truncation, competing nonlinearities, positive solutions, regularity theory, minimal positive solution.

\subjclass Primary: 35J20, 35J60; Secondary: 35J92.

Corresponding author: Vicenţiu D. Rădulescu

Nikolaos S. Papageorgiou, Department of Mathematics, National Technical University, Zografou Campus, Athens 15780, Greece

Vicenţiu D. Rădulescu, Department of Mathematics, Faculty of Sciences, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia, and Department of Mathematics, University of Craiova, Street A.I. Cuza No. 13, 200585 Craiova, Romania

Dušan D. Repovš, Faculty of Education and Faculty of Mathematics and Physics, University of Ljubljana, Kardeljeva ploščad 16, SI-1000 Ljubljana, Slovenia

(Communicated by Xuefeng Wang)

## 1 Introduction

Let Ω ⊆ ℝ^N be a bounded domain with a C²-boundary ∂Ω. In this paper we study the following parametric Robin problem:

−Δu(z) + ξ(z)u(z) = λg(z,u(z)) + f(z,u(z)) in Ω,  ∂u/∂n + β(z)u = 0 on ∂Ω.  (Pλ)

In this problem, λ > 0 is a parameter, ξ is a potential function which is indefinite (that is, sign changing), and in the reaction, g and f are Carathéodory functions (that is, for all x the maps z ↦ g(z,x), f(z,x) are measurable and for almost all z the maps x ↦ g(z,x), f(z,x) are continuous). We assume that for almost all z, g(z,·) is strictly sublinear near +∞ (concave nonlinearity), while for almost all z, f(z,·) is strictly superlinear near +∞ (convex nonlinearity). Therefore the reaction in problem (Pλ) exhibits the combined effects of competing nonlinearities (a "concave-convex problem"). The study of such problems was initiated with the well-known work of Ambrosetti, Brezis and Cerami [2], who dealt with a Dirichlet problem with zero potential (that is, ξ ≡ 0) and a reaction of the form

λx^(q−1) + x^(r−1) for all x ≥ 0, with 1 < q < 2 < r.

They proved a bifurcation-type result for small values of the parameter λ. The work of Ambrosetti, Brezis and Cerami [2] was extended to more general classes of Dirichlet problems with zero potential by Bartsch and Willem [4], Li, Wu and Zhou [9], and Rădulescu and Repovš [19]. Our aim in this paper is to extend all the aforementioned results to the more general problem (Pλ). Note that when β ≡ 0, we recover the Neumann problem with an indefinite potential. Robin and Neumann problems are in principle more difficult to deal with, due to the failure of the Poincaré inequality. Therefore in our problem, the differential operator (left-hand side of the equation) is not coercive (unless ξ ≥ 0 and ξ ≢ 0). Recently we have examined Robin and Neumann problems with indefinite linear part. We mention the works of Papageorgiou and Rădulescu [13, 14, 16]. In [13] the problem is parametric with competing nonlinearities.
The concave term is , , (so it enters into the equation with a negative sign) while the perturbation is Carathéodory, asymptotically linear near and resonant with respect to the principal eigenvalue. We proved a multiplicity result for all small values of the parameter , producing five nontrivial smooth solutions, four of which have constant sign (two positive and two negative). In this paper, using variational tools together with truncation, perturbation and comparison techniques, we prove a bifurcation-type theorem, describing the existence and multiplicity of positive solutions as the parameter varies. We also establish the existence of a minimal positive solution and determine the monotonicity and continuity properties of the map . ## 2 Preliminaries Let be a Banach space and its topological dual. By we denote the duality brackets for the dual pair . Given , we say that satisfies the “Cerami condition” (the “C-condition” for short), if the following property is satisfied: “Every sequence such that is bounded and (1+||un||)φ′(un)→0 in X∗ as n→∞, admits a strongly convergent subsequence”. This is a compactness-type condition on the functional . It leads to a deformation theorem from which one can derive the minimax theory for the critical values of (see, for example, Gasinski and Papageorgiou [6]). The following notion is central to this theory. ###### Definition 2.1. Let be a Hausdorff topological space and nonempty, closed sets such that . We say that the pair is linking with in if: • ; • For any such that , we have . Using this topological notion, one can prove the following general minimax principle, known in the literature as the “linking theorem” (see, for example, Gasinski and Papageorgiou [6, p. 644]). ###### Theorem 2.2. Assume that is a Banach space, are nonempty, closed subsets such that is linking with in , satisfies the -condition supE0φ and , where . Then and is a critical value of (that is, there exists such that ). With a suitable choice of the linking sets, we can produce as corollaries of Theorem 2.2, the main minimax theorems of the critical point theory. For future use, we recall the so-called “mountain pass theorem”. ###### Theorem 2.3. Assume that is a Banach space, satisfies the -condition, , max{φ(u0),φ(u1)} and with . Then and is a critical value of . ###### Remark 1. Theorem 2.3 can be deduced from Theorem 2.2 if we have , , In the analysis of problem (), we will use the following spaces: the Sobolev space , the Banach space and the boundary Lebesgue spaces , . By we denote the norm of the Sobolev space . So ||u||=[||u||22+||Du||22]12 for all u∈H1(Ω). The space is an ordered Banach space with positive cone C+={u∈C1(¯¯¯¯Ω): u(z)≥0 for all z∈¯¯¯¯Ω}. We will use the open set defined by D+={u∈C+:u(z)>0 for all z∈¯¯¯¯Ω}. On we consider the -dimensional Hausdorff (surface) measure Using this measure, we can define the Lebesgue spaces () in the usual way. Recall that the theory of Sobolev spaces says that there exists a unique continuous linear map , known as the “trace map”, such that γ0(u)=u|∂Ω for all u∈H1(Ω)∩C(¯¯¯¯Ω). This map is not surjective and it is compact into if and into In what follows, for the sake of notational simplicity, we drop the use of the map . All restrictions of Sobolev functions on are understood in the sense of traces. Let be a Carathéodory function such that |f0(z,x)|≤a0(z)(1+|x|r−1) for almost all z∈Ω and all x∈R, with We set . Also, let and with on . 
We consider the -functional defined by φ0(u)=12ϑ(u)−∫ΩF0(z,u)dz, where ϑ(u)=||Du||22+∫Ωξ(z)u2dz+∫∂Ωβ(z)u2dσ for all u∈H1(Ω). The next result follows from Papageorgiou and Rădulescu [12, Proposition 3] using the regularity theory of Wang [20]. ###### Proposition 1. Let be a local -minimizer of , that is, there exists such that φ0(u0)≤φ0(u0+h) for all h∈C1(¯¯¯¯Ω) with ||h||C1(¯¯¯Ω)≤ρ0. Then with and is also a local -minimizer of , that is, there exists such that φ0(u0)≤φ0(u0+h) for all h∈H1(Ω) with ||h||≤ρ1. We will need some facts concerning the spectrum of with Robin boundary condition. Details can be found in Papageorgiou and Rădulescu [12, 16]. So, we consider the following linear eigenvalue problem −Δu(z)+ξ(z)u(z)=^λu(z) in Ω, ∂u∂n+β(z)u=0 on ∂Ω. (1) We know that there exists such that ϑ(u)+μ||u||22≥c0||u||2 for all u∈H1(Ω) and for some c0>0. (2) Using (2) and the spectral theorem for compact self-adjoint operators, we generate the spectrum of (1), which consists of a strictly increasing sequence such that . Also, there is a corresponding sequence of eigenfunctions which form an orthonormal basis of and an orthogonal basis of . In fact, the regularity theory of Wang [20] implies that . By (for every ) we denote the eigenspace corresponding to the eigenvalue . We have the following orthogonal direct sum decomposition H1(Ω)=¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯⊕k≥1E(^λk). Each eigenspace has the so-called “unique continuation property” (UCP for short) which says that if vanishes on a set of positive Lebesgue measure, then . The eigenvalues have the following properties: ∙ ^λ1 is simple (that is, dimE(^λ1)=1); ∙ ^λ1=inf[ϑ(u)||u||22:u∈H1(Ω),u≠0]; ∙ for m≥2 we have ^λm= sup[ϑ(u)||u||22:u∈⊕mk=1E(^λk),u≠0] = inf[ϑ(u)||u||22:u∈¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯⊕k≥mE(^λk),u≠0] (4) In (2) the infimum is realized on . In (2) both the supremum and the infimum are realized on . From these properties, it is clear that the elements of have constant sign while for the elements of are nodal (that is, sign changing). Let denote the -normalized (that is, ) positive eigenfunction corresponding to . As we have already mentioned, . Using Harnack’s inequality (see, for example Motreanu, Motreanu and Papageorgiou [11, p. 212]), we have that for all . Moreover, if , then using the strong maximum principle, we have . The following useful inequalities are also easy consequences of the above properties. ###### Proposition 2. • If then . • If then . Finally, let us fix some basic notations and terminology. So, by we denote the linear operator defined by ⟨A(u),h⟩=∫Ω(Du,Dh)RNdz for all u,h∈H1(Ω). A Banach space is said to have the “Kadec-Klee property” if the following holds ‘‘un\lx@stackrelw→u in X and%  ||un||→||u||⇒un→u in X". Locally uniformly convex Banach spaces, in particular Hilbert spaces, have the Kadec-Klee property. Let . We set and for we define u±(⋅)=u(⋅)±. We know that u±∈H1(Ω), |u|=u++u−, u=u+−u−. By we denote the Lebesgue measure on . Also, if then Kφ={u∈X:φ′(u)=0} (the critical % set of φ). If , then and . Finally, we set n0=max{k∈N:^λk≤0}. If for all (this is the case if and or ), then we set . ## 3 Positive solutions The hypotheses on the data of problem () are the following: . . 
• for every , there exists such that g(z,x)≤aρ(z) for almost all z∈Ω and all x∈[0,ρ]; • uniformly for almost all ; • there exist constants and such that c3xq−1≤g(z,x) for almost all z∈Ω and all x≥0, limsupx→0+g(z,x)xq−1≤c4 % uniformly for almost all z∈Ω; • if , then  for almost all and all ; • for every , there exists such that for almost all the function x↦g(z,x)+^ξρx is nondecreasing on . is a Carathéodory function such that • for almost all and all with ; • uniformly for almost all ; • uniformly for almost all and there exists such that f(z,x)≥0 for almost all z∈Ω and all x∈[0,δ0]; • for every , there exists such that for almost all the function x→f(z,x)+~ξρx is nondecreasing on We set and define γλ(z,x)=λg(z,x)+f(z,x)−2[λG(z,x)+F(z,x)] % for all (z,x)∈Ω×R+. For every , there exists such that γλ(z,x)≤γλ(z,y)+eλ(z) for % almost all z∈Ω and all 0≤x≤y. ###### Remark 2. Since we are looking for positive solutions and all of the above hypotheses concern the positive semi-axis , we may assume without any loss of generality that g(z,x)=f(z,x)=0 for almost all z∈Ω all x≤0 (note that hypotheses and imply that for almost all ). Hypothesis implies that for almost all is strictly sublinear near . This, together with hypothesis , implies that is globally the “concave” contribution to the reaction of problem (). On the other hand, hypothesis implies that for almost all is strictly superlinear near . Hence is globally the “convex” contribution to the reaction of (). Therefore on the right-hand side (reaction) of problem (), we have the competition of concave and convex nonlinearities (“concave-convex problem”). We stress that the superlinearity of is not expressed using the well-known Ambrosetti-Rabinowitz condition (see Ambrosetti and Rabinowitz [3]). Instead, we use hypothesis , which is a slightly more general version of a condition used by Li and Yang [10]. Hypothesis is less restrictive than the Ambrosetti-Rabinowitz superlinearity condition and permits the consideration of superlinear terms with “slower” growth near , which fail to satisfy the AR-condition (see the examples below). Hypothesis is a quasimonotonicity condition on and it is satisfied if there exists such that for almost all , x↦λg(z,x)+f(z,x)x is nondecreasing on (see [10]). Examples. The following pair satisfies hypotheses and : g(z,x)=a(z)xq−1, f(z,x)=b(z)xr−1 for all x≥0 with for almost all and . If , this is the reaction pair used by Ambrosetti, Brezis and Cerami [2] in the context of Dirichlet problems with zero potential (that is, ). The above reaction pair was used by Rădulescu and Repovš [19], again for Dirichlet problems with . Another possibility of a reaction pair which satisfies hypotheses and are the following functions (for the sake of simplicity, we drop the -dependence): g(x)={2xq−1−xτ−1if 0≤x≤1xη−1if 1 In this pair, the superlinear term fails to satisfy the Ambrosetti-Rabinowitz condition. Let be as in (2) and . Let be the Carathéodory function defined by kλ(z,x)=λg(z,x)+f(z,x)+μx+. (5) We set and consider the -functional defined by ^φλ(u)=12ϑ(u)+μ2||u||22−∫ΩKλ(z,u)dz for all u∈H1(Ω). ###### Proposition 3. If hypotheses and hold, then for every the functional satisfies the C-condition. ###### Proof. Let be a sequence such that |^φλ(un)|≤M1 for some M1>0 and all n∈N, (6) (1+||un||)^φ′λ(un)→0 in H1(Ω)∗ as n→∞. (7) By (7) we have ∣∣∣⟨A(un),h⟩+∫Ω(ξ(z)+μ)unhdz+∫∂Ωβ(z)unhdσ−∫Ωkλ(z,un)hdz∣∣∣≤ϵn||h||1+||un||, (8) for all with . In (8) we choose . 
Then ϑ(u−n)+μ||u−n||22≤ϵn for all n∈N (see (???)), (9) ⇒ c0||u−n||2≤ϵn for all n∈N (see (???)), ⇒ u−n→0 in H1(Ω). It follows from (6) and (9) that ϑ(u+n)−∫Ω2[λG(z,u+n)+F(z,u+n)]dz≤M2 for some M2>0 and all n∈N. (10) If in (8) we choose , then −ϑ(u+n)+∫Ω[λg(z,u+n)+f(z,u+n)]u+ndz≤ϵn for all n∈N. (11) Adding (10) and (11), we obtain ∫Ωγλ(z,u+n)dz≤M3 for some M3>0 and all n∈N. (12) Claim. is bounded. We argue by contradiction. So, suppose that the claim is not true. By passing to a subsequence if necessary, we may assume that . Let , . Then ||yn||=1, yn≥0 for all n∈N and so we may assume that yn\lx@stackrelw→y in H1(Ω) and yn→y in L2s′(Ω) % and in L2(∂Ω),y≥0. (13) Suppose that and let . Then and u+n(z)→+∞ for almost all z∈Ω∗. We have G(z,u+n)||u+n||2=G(z,u+n)(u+n)2y2n→0 for a.a. z∈Ω∗ (% see hypothesis H(g)(ii)), (14) F(z,u+n)||u+n||2=F(z,u+n)(u+n)2y2n→+∞ for a.a. z∈Ω∗ (see hypothesis H(f)(ii)). (15) It follows from (14), (15) and Fatou’s lemma that limn→∞[λ∫ΩG(z,u+n)||u+n||2dz+∫ΩF(z,u+n)||u+n||2dz]=+∞. (16) On the other hand, (6) and (9) imply that λ∫ΩG(z,u+n)||u+n||2dz+∫ΩF(z,u+n)||u+n||2dz≤M1||u+n||2+12ϑ(yn)+μ2||yn||22≤M4 (17) for some , all . Comparing (16) and (17) we obtain a contradiction. Next, suppose that . For we set . Then in and so we have ∫ΩG(z,^yn)dz→0 and ∫ΩF(z,^yn)dz→0. (18) Since , we can find such that (2η)12||u+n||∈(0,1] for all n≥n1. (19) We choose such that ^φλ(tnu+n)=max[^φλ(tu+n):0≤t≤1]. (20) From (19), (20) we have ^φλ(tnu+n)≥ ^φλ(^yn) (see (???)) = 12ϑ(^yn)−λ∫ΩG(z,^yn)dz−∫ΩF(z,^yn)dz (see (???)) ≥ η−λ∫ΩG(z,^yn)dz−∫ΩF(z,^yn)dz ≥ 12η for all n≥n2≥n1 (% see (???)). (21) Since is arbitrary, we infer from (3) that ^φλ(tnu+n)→+∞ as n→∞. (22) We know that ^φλ(0)=0 and ^φλ(u+n)≤M5 for some M5>0 and all n∈N (see (???) and (???)), ⇒ tn∈(0,1) for all n≥n3 (see (???)). So, (20) implies that tnddt^φλ(tu+n)|t=tn=0, (23) ⇒ ⟨^φ′λ(tnu+n),tnu+n⟩=0 (by the chain rule), ⇒ ϑ(tnu+n)=∫Ω[λg(z,tnu+n)+f(z,tnu+n)](tnu+n)dz for all n≥n3. We have . Then hypothesis
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9946456551551819, "perplexity": 788.9243868343932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911092.63/warc/CC-MAIN-20200710144305-20200710174305-00591.warc.gz"}
http://tex.stackexchange.com/questions/12930/bibtex-with-plain-tex?answertab=votes
BibTeX with Plain TeX The manpage of bibtex says that it can be used with both LaTeX and TeX. However, I did not find any resource how to do it and also no TeX book of mine explains it. Can someone provide a minimalistic example? - You can look at the btxmac.tex from Eplain (usually found in texmf-dist/tex/eplain). It includes an example of use with plain TeX. - Here is the example given in http://www.tug.org/TUGboat/tb24-1/patashnik.pdf: \input btxmac The \TeX{}book~\cite{knuth:tex} is good. \medskip \leftline{\bf References} \bibliography{mybib} \bibliographystyle{plain} \bye There is nothing really special. - The eplain manual describes how to use bibtex with some of the eplain commands. It looks really similar to the way it's done in LaTeX. - Could you provide a working example? – qubyte Feb 1 '12 at 5:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9766854643821716, "perplexity": 1727.602172487724}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116886.38/warc/CC-MAIN-20160428161516-00125-ip-10-239-7-51.ec2.internal.warc.gz"}
https://itectec.com/matlab/matlab-matrix-operation-with-equation/
# MATLAB: Matrix operation with equation

Tags: matrices, matrix, matrix manipulation

I have a 5×1 matrix I = [-6;-5;9;-7;-3]. How can I evaluate, in the MATLAB command window, the equation A(I) = 1/(1+e^-I) for each value in this matrix (I = -6, -5, 9, -7, and -3) and return the answer as a 5×1 matrix? Thanks.

• A = 1./(1+exp(-I));
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8671274781227112, "perplexity": 1883.8921712984277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989856.11/warc/CC-MAIN-20210511184216-20210511214216-00429.warc.gz"}
https://collegemathteaching.wordpress.com/2014/09/02/using-convolutions-and-fourier-transforms-to-prove-the-central-limit-theorem/
# College Math Teaching

## September 2, 2014

### Using convolutions and Fourier Transforms to prove the Central Limit Theorem

Filed under: probability — collegemathteaching @ 5:40 pm

I’ve used the presentation in our Probability and Statistics text; it is appropriate given that many of our students haven’t seen the Fourier Transform. But this presentation is excellent.

Upshot: use the convolution to derive the density function for $S_n = X_1 + X_2 + ....X_n$ (independent, identically distributed random variables of finite variance), assume the mean is zero and the variance is 1, and divide $S_n$ by $\sqrt{n}$ so that the normalized sum has variance 1. Then apply the Fourier transform to the whole thing (the normalized version) to turn convolution into products, use the definition of the Fourier transform and the Taylor series for the $e^{i 2 \pi x \frac{s}{\sqrt{n}}}$ terms, discard the high order terms, take the limit as $n$ goes to infinity and obtain a Gaussian, which, of course, inverse Fourier transforms to another Gaussian.
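Written out with the transform convention $\hat{f}(s) = \int f(x) e^{-2\pi i s x}\,dx$ (a sign flip relative to the kernel above, which does not affect the Gaussian limit), the key steps are the following sketch, suppressing the error-term bookkeeping:

```latex
% convolution becomes a power under the transform
\widehat{f^{*n}}(s) = \bigl[\hat{f}(s)\bigr]^{n},
\qquad
\hat{f}_{S_n/\sqrt{n}}(s) = \Bigl[\hat{f}\!\bigl(\tfrac{s}{\sqrt{n}}\bigr)\Bigr]^{n}.
% Taylor expansion of the kernel, using mean 0 and variance 1
\hat{f}\!\bigl(\tfrac{s}{\sqrt{n}}\bigr)
   = 1 - \frac{(2\pi s)^{2}}{2n} + o\!\bigl(\tfrac{1}{n}\bigr)
\quad\Longrightarrow\quad
\hat{f}_{S_n/\sqrt{n}}(s) \;\longrightarrow\; e^{-2\pi^{2}s^{2}}
\quad (n \to \infty).
```

Since $e^{-2\pi^2 s^2}$ is exactly the transform of the standard Gaussian density $\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ in this convention, inverting the transform recovers the normal limit.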
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931216239929199, "perplexity": 382.19635535090083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188550.58/warc/CC-MAIN-20170322212948-00660-ip-10-233-31-227.ec2.internal.warc.gz"}