http://genealogy.math.ndsu.nodak.edu/id.php?id=8176 | ## Joseph Eugene Rowe
Ph.D. The Johns Hopkins University 1910
Dissertation: Important Covariant Curves and a Complete System of Invariants of the Rational Quartic Curve
http://openstudy.com/updates/4f45c884e4b065f388ddbb92 | ## Suju43ver: How to find the complex cube roots of -1?
1. vengeance921
-1 = i^2, so the complex cube root would be $\sqrt[3]{i^{2}} = (i^{2})^{1/3} = i^{2/3}$, $i^{2}-i^{3}$, $-1-\sqrt{-1}$; but the final answer would just be $i^{2/3}$, I think, since it is asked in terms of complex numbers.
2. rivermaker
It may be simpler to do as follows: $x^3= -1 \rightarrow x^3+1=0 \rightarrow (x+1)(x^2-x+1)=0$. This gives x = -1 as the real root, and the two complex roots can be obtained by solving the quadratic equation $x^2-x+1=0$.
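(A quick numerical cross-check of rivermaker's factorization; a Python sketch, not part of the original thread. The three cube roots of $-1$ should be $-1$ and $\frac{1\pm i\sqrt{3}}{2}\approx 0.5\pm 0.866i$.)

```python
import numpy as np

# Roots of x^3 + 1 = 0, i.e. the three complex cube roots of -1.
# Expected (in some order): -1, 0.5+0.866j, 0.5-0.866j
print(np.roots([1, 0, 0, 1]))

# Cross-check in polar form: the cube roots of -1 are exp(i*pi*(2k+1)/3), k = 0, 1, 2.
print([np.round(np.exp(1j * np.pi * (2 * k + 1) / 3), 3) for k in range(3)])
```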
https://www.physicsforums.com/threads/technical-question-re-le-tensor-of-cochain-complexes-and-isomorphism-of-complexes.405291/ | # Technical Question, re: Le: Tensor of Cochain Complexes, and Isomorphism of Complexes
1. May 24, 2010
### Bacle
Hi, everyone:
1) I am going over the Leray-Hirsch theorem in Hatcher's AT, which gives the conditions
under which we can obtain the cohomology of the total space of the bundle
from the tensor product of the cohomology of the fiber and that of the base
(a sort of relative of the Künneth theorem), and I see the statement that
(paraphrasing) the isomorphism
$$H^*(E;R) \cong H^*(B;R) \otimes H^*(F;R),$$
where $R$ is a ring and $\otimes$ is the tensor product, "is not always a ring homomorphism".
Question: is this then an isomorphism of cochain complexes? If so, does
anyone know the definition of an isomorphism of cochain complexes?
2) How do we tensor cochains? How do we tensor cochain complexes?
The isomorphism above is described explicitly, and uses the tensor product of cochains.
Anyone know how to define this?
How about the tensor product of cochain complexes $C, C'$? My naive guess would be
$$H^n(C \otimes C') = \bigoplus_i H^i(C;R) \otimes H^{n-i}(C';R)$$
as a set, but I don't see how to define the coboundary. I tried to imitate the construction
of the tensor product of chain complexes, but I am just going in circles.
Any ideas?
Thanks.
2. May 24, 2010
### lavinia
I think the Leray-Hirsch theorem says this.
First, it requires certain cohomology classes to exist in the total space of the bundle. These classes restrict to a basis for the finite-dimensional cohomology of the fiber.
Given these cohomology classes, the isomorphism is an isomorphism of H*(B;R)-modules, where the module structure derives directly from the tensor product in one case and from the cup product in the other.
3. May 24, 2010
### lavinia
the tensor product of two cochain complexes is just the direct sum of all pairwise tensor products of modules, the first module coming from the first cochain complex, the second module coming from the second cochain complex. Usually the module of degree m in the tensor product is the sum of the tensor products of pairs of modules the sum of whose degrees equals m. The coboundary operator is easily defined and is described in your textbook.
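(For reference, the convention being alluded to, stated here as an assumption since the thread never writes it out, is the same Koszul sign rule as for chain complexes:
$$d(a \otimes b) = d_C\,a \otimes b + (-1)^{|a|}\, a \otimes d_{C'}\,b$$
With this sign, the two cross terms in $d^2(a\otimes b)$ carry opposite signs and cancel:
$$d^2(a \otimes b) = d_C^2\,a \otimes b + \left[(-1)^{|a|+1}+(-1)^{|a|}\right] d_C\,a \otimes d_{C'}\,b + a \otimes d_{C'}^2\,b = 0.$$
This is exactly the $(-1)$ bookkeeping that comes up later in the thread.)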
4. May 24, 2010
### Bacle
O.K., thanks. Yes, it is a module; I guess a graded-module isomorphism. But I don't
see anywhere in Hatcher's AT where the tensor product of cochain complexes is defined;
I tried defining the coboundary in a way similar to how the boundary of the
tensor product of chain complexes is defined, but it did not seem to work. Anyway,
I will expand on what failed a bit later.
Also: do you know how we define the tensor product of cochains? This
tensor product is part of the statement of the actual isomorphism.
5. May 24, 2010
### lavinia
I do not believe that the tensor product of cochains is part of the statement of the isomorphism. I believe that it is the tensor product of cohomology classes.
6. May 25, 2010
### Bacle
You're right, Lavinia, but isn't it ultimately a product on cochains, since cohomology classes
are represented by cochains? Sorry if this is dumb; I am out of my element (though
trying to learn). I understand that the basis of the tensor product is the sum of the
tensors of the respective bases, but I am still kind of confused. But the elements on
the right-hand side of the isomorphism:
http://www.math.cornell.edu/~hatcher/AT/AT.pdf (p.432)
i.e., $p^*(b) \smile c$ (with $\smile$ the cup product), are cochains, so the expression on the left should also
be a cochain, albeit a representing cochain.
I think I figured out
the definition of the coboundary, but there is a small problem with a (-1) I need to take
care of, to show that the square of the coboundary as I defined it (the same as for the tensor product of chain complexes) is zero.
Thanks for any suggestions.
7. May 25, 2010
### Bacle
Actually, I may be stuck; it is too late now, so tomorrow I will look at it again.
8. May 25, 2010
### lavinia
You are right that the cup product is defined on cochains, but it induces a product on cohomology classes. Cohomology classes are equivalence classes of cochains - not cochains.
In the Leray-Hirsch theorem the map
$$H^*(B;R) \otimes H^*(F;R) \to H^*(E;R)$$
is defined on the level of cohomology classes.
Perhaps you are thinking that the tensor product of the base cohomology with the fiber cohomology derives from a tensor product of the cochain complexes. But I do not think so. I think it is just a tensor product of R-modules. The reason that you get a mapping into the cohomology of the total space is that H*(B) pulls back into H*(E) via the bundle projection map, and H*(F) maps into H*(E) by the assumption that the total space contains cohomology classes that restrict to a basis for the cohomology of each fiber. One just takes these basis elements of the fiber cohomology and maps them back to the corresponding classes in the total space.
It may be, though, that actually demonstrating the isomorphism requires an argument on the level of cochains. Perhaps you could outline how the proof actually works. I do not have Hatcher's book.
Last edited: May 25, 2010
9. May 25, 2010
### lavinia
in the case of a product bundle you get a map on cochains.
The two projections
$$E \to B$$
and
$$E \to F$$
induce maps on cochains. The cup product of these cochains gives you the map
$$C^*(B;R) \otimes C^*(F;R) \to C^*(E;R)$$
In the case where B is a manifold, I think the Leray-Hirsch theorem should follow from the case of a product bundle and the usual Mayer-Vietoris argument. A Mayer-Vietoris argument can be reduced to the level of cochains, so this may be the way to do the whole thing with cochains - at least in the case of a manifold.
I strongly recommend that you read the section on De Rham theory in Bott and Tu's book, Differential Forms in Algebraic Topology. This is a wonderfully clear book and the use of calculus on manifolds simplifies many of the arguments.
Last edited: May 25, 2010
10. May 25, 2010
### lavinia
I was thinking about how this theorem applies to circle bundles over the 2-sphere.
In the case of $$S^2 \times S^1$$
you just have the Künneth formula, and the generator of the cohomology of the circle gives you the class in the total space that restricts to a basis in the cohomology of the fiber.
But for the Hopf fibration there can not be such a class that restricts to a basis of the cohomology of the fiber circles because $$S^3$$ is simply connected. The same thing applies to the tangent circle bundle using real coefficients.
What about the cohomology of the tangent circle bundle using Z/2 coefficients? I am not sure here.
11. May 25, 2010
### zhentil
You should jump ahead to the Serre spectral sequence for a fibration. Then it all becomes either crystal clear or not, depending on your perspective :)
12. May 25, 2010
### zhentil
What's the Z/2 cohomology of RP^3?
13. May 26, 2010
### Bacle
But the spaces I am working with at the moment are simple enough that spectral
sequences are not necessary; Leray-Hirsch, or even simpler techniques, are enough. I am off spectral sequences after having done some work on the Vasiliev spectral sequence.
14. May 26, 2010
### lavinia
Z/2 in every dimension.
So Leray-Hirsch might be right - but I don't know about the fiber orientation classes.
I think that in the case of the tangent circle bundle the cochain that restricts to the generator of the cohomology of the fiber circles is not closed. In fact, now that I think of it, just consider any connection 1-form with respect to a Riemannian metric. The exterior derivative of the connection 1-form is minus the Gauss curvature times the pull-back of the volume element. But the 2-sphere cannot have identically zero Gauss curvature, so the form is not closed.
Another look at this just observes that the real cohomology of RP^3 is zero, so there can be no orientation class over the reals.
Over Z the same is true - I think - because isn't the integer cohomology of RP^3 equal to Z/2?
Yes, this seems right. Then there can be no orientation class over Z/2, as is seen from the commutative diagram
$$\begin{array}{ccc} H^1(RP^3,Z/2) & \longleftarrow & H^1(RP^3,Z) \\ {\scriptstyle i^*}\downarrow & & {\scriptstyle i^*}\downarrow \\ H^1(S^1,Z/2) & \longleftarrow & 0 \end{array}$$
where $S^1$ is an arbitrary fiber and $i$ is the inclusion map.
Last edited: May 26, 2010
15. May 26, 2010
### lavinia
How about the cohomology of the Klein bottle considered as a circle bundle over the circle?
Here you do have an orientation class - if you like you can prove this using the Hochschild-Serre spectral sequence - but the cup product structure is not the same as in a product of two circles.
Does this mean that the Leray-Hirsch isomorphism does not derive from a mapping of the cochain complexes?
16. May 26, 2010
### lavinia
can you explain this a little?
17. May 26, 2010
### zhentil
The E2 page of the Serre spectral sequence is what one would obtain from Leray-Hirsch. So in a sense, the Serre spectral sequence can be seen as measuring the obstruction to finding cohomology classes that restrict to generators of the cohomology of the fiber.
18. May 26, 2010
### zhentil
I'm not quite sure what this means. In the case of a Künneth theorem, you would have this, but in general there's no map from a fiber bundle to the fiber. This may be misunderstanding your question, but if you mean a Künneth-style argument where you pull back cohomology classes based on two projections, that can't work in a nontrivial fiber bundle. Think of the proof of the Thom isomorphism in de Rham cohomology: you don't get the fiber cohomology class for free, i.e. it's not pulled back from anything.
19. May 26, 2010
### lavinia
No, but since there is a mapping that respects the product structure one might think that there is a mapping of chain complexes. That was the original question, I think. No one was expecting a Künneth argument. Still, there definitely are non-trivial bundles where the fiber cohomology is pulled back from classes in the total space of the bundle.
Last edited: May 26, 2010
http://link.springer.com/article/10.1007/BF00167898 | Volume 34, Issue 5, pp 559-564
# Chinese hamster ovary cell growth and interferon production kinetics in stirred batch culture
## Summary
Recombinant human interferon-γ production by Chinese hamster ovary cells was restricted to the growth phase of batch cultures in serum-free medium. The specific interferon production rate was highest during the initial period of exponential growth but declined subsequently in parallel with specific growth rate. This decline in specific growth rate and interferon productivity was associated with a decline in specific metabolic activity as determined by the rate of glucose uptake and the rates of lactate and ammonia production. The ammonia and lactate concentrations that had accumulated by the end of the batch culture were not inhibitory to growth. Glucose was exhausted by the end of the growth phase but increased glucose concentrations did not improve the cell yield or interferon production kinetics. Analysis of amino acid metabolism showed that glutamine and asparagine were exhausted by the end of the growth phase, but supplementation of these amino acids did not improve either cell or product yields. When glutamine was omitted from the growth medium there was no cell proliferation but interferon production occurred, suggesting that recombinant protein production can be uncoupled from cell proliferation.
Offprint requests to: P. M. Hayter
http://mathoverflow.net/questions/154050/on-the-generating-functions-for-euler-characteristic-of-hilbert-schemes-of-point | On the generating functions for Euler characteristic of Hilbert schemes of points
Let $d>1$ be an integer. If $n\geq 0$ is an integer we have a notion of $d$-dimensional partitions of $n$; the number of these, denoted $p_d(n)$, is the number of ways we can stack $n$ ($d$-dimensional) boxes in a corner of a $d$-dimensional "room". No closed formula is known for $p_d$, for any $d>1$. As far as I know, the generating function $\mathcal P_d$ for $p_d$ is known for $d=2,3$, but for no higher $d$'s: \begin{align} \mathcal P_2=\sum_{n\geq 0}p_2(n)t^n&=\prod_{k\geq 1}(1-t^k)^{-1},\notag\\ \mathcal P_3=\sum_{n\geq 0}p_3(n)t^n&=\prod_{k\geq 1}(1-t^k)^{-k}.\notag \end{align}
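(A sanity check one can run oneself; this short Python/sympy sketch is not from the original post and assumes only the two product formulas above.)

```python
from sympy import symbols, series, prod

t = symbols('t')
N = 8  # truncation order

# P_2 = prod_k (1 - t^k)^(-1) and P_3 = prod_k (1 - t^k)^(-k), truncated past t^N
P2 = prod([(1 - t**k)**(-1) for k in range(1, N + 1)])
P3 = prod([(1 - t**k)**(-k) for k in range(1, N + 1)])

p2 = series(P2, t, 0, N).removeO()
p3 = series(P3, t, 0, N).removeO()
print([p2.coeff(t, n) for n in range(N)])  # [1, 1, 2, 3, 5, 7, 11, 15]: ordinary partitions
print([p3.coeff(t, n) for n in range(N)])  # [1, 1, 3, 6, 13, 24, 48, 86]: plane partitions
```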
However, it seems to me that to find $p_d(n)$ is to find the number of "higher-dimensional Young tableaux", and these correspond to monomial ideals in $\mathbb C[x_1,\ldots,x_d]$. So it should be true that $$p_d(n)=\chi(\textrm{Hilb}^n(\mathbb C^d)_0),$$ the topological Euler characteristic of the punctual Hilbert scheme. It is also true that, if $S$ is a smooth projective surface and $Y$ is a smooth projective threefold, then \begin{align} \sum_{n\geq 0}\chi(\textrm{Hilb}^nS)t^n&=\mathcal P_2^{\chi(S)}\,\,\,\,\,\,\,\textrm{(Göttsche's formula)}\notag\\ \sum_{n\geq 0}\chi(\textrm{Hilb}^nY)t^n&=\mathcal P_3^{\chi(Y)} \,\,\,\,\,\,\,\textrm{(Cheah's formula)}\notag \end{align}
Question: do we have such formulas for any $d$? in other words, do we have $$\sum_{n\geq 0}\chi(\textrm{Hilb}^nX)t^n=\mathcal P_d^{\chi(X)}$$ for any smooth projective $X$ of dimension $d$?
Can anyone please help me fixing the dieresis on "o"? (There is nothing worse than quoting names badly, sorry for that.) – Brenin Jan 9 '14 at 10:57
In fact, a version of that equality holds even for 'universal' Euler characteristics, i.e. in the Grothendieck ring of varieties. See arxiv.org/pdf/math/0407204v1.pdf . – Vivek Shende Jan 10 '14 at 4:01
@VivekShende: thanks, this is a very nice reference to know about! – Brenin Jan 10 '14 at 15:34
1 Answer
Yes.
Write $\mathcal P_d= 1 + p_d$, so $\mathcal P_d^{\chi(X)}= \sum_{k=0}^{\infty} \binom{\chi(X)}{k} p_d^k$.
I will show that $\binom{\chi(X)}{k} p_d^k$ is the generating function for the stratum of $Hilb^n X$ consisting of subschemes that are supported on $k$ distinct points.
This stratum is a fibration over the variety $\binom{X}{k}$ of all sets of $k$ distinct points in $X$. We can easily check that the Euler characteristic of $\binom{X}{k}$ is $\binom{\chi(X)}{k}$. The Euler characteristic of a fibration is the Euler characteristic of the base times the Euler characteristic of the fiber. So we must show that the Euler characteristic of the fiber is $p_d^k$. But this is clear - it's just the Hilbert scheme of subschemes supported exactly at $k$ distinct fixed points, which is just a $k$-fold product of the Hilbert scheme of nonempty subschemes supported at a single point, which is $p_d$.
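(A formal-series check of the bookkeeping in this answer; a sketch assuming $d=2$ and a sample value $\chi=4$, not part of the original answer.)

```python
from sympy import symbols, series, prod, binomial

t = symbols('t')
N, chi = 7, 4

P2 = prod([(1 - t**k)**(-1) for k in range(1, N + 1)])  # truncated P_2
p = P2 - 1   # generating function for nonempty punctual subschemes

lhs = series(P2**chi, t, 0, N).removeO()
# Stratified count: choose k support points, then a punctual piece at each
rhs = series(sum(binomial(chi, k) * p**k for k in range(N + 1)), t, 0, N).removeO()
print((lhs - rhs).expand())  # 0: both sides agree through t^6
```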
Thanks for your answer! Following your proof, I find instead $\chi(\textrm{Hilb}^n_kX)=\binom{\chi(X)}{k}p_d(n)^k$ (small $p$), so that the generating series $\sum_n\chi(\textrm{Hilb}^n_kX)t^n=\binom{\chi(X)}{k}\sum_np_d(n)^kt^n\neq \binom{\chi(X)}{k}P_d^k$. (the latter is big $P$.) Where is my mistake? (Here $\textrm{Hilb}^n_kX$ is the Hilb of subschemes supported on $k$ distinct points.) – Brenin Jan 10 '14 at 15:32
It's not $p_d(n)^k$, because the whole thing having degree $n$ does not mean the points have degree $n$. Instead, the degrees sum to $n$. – Will Sawin Jan 10 '14 at 15:45
Sorry, I do not get it. According to what $\mathcal P_d$ is, we have $p_d=\sum_{n\geq 1}p_d(n)t^n$. How can $p_d^k$ equal $\chi(\textrm{fiber})$? If I got your last comment, the fiber is $\prod_{1\leq i\leq k}\textrm{Hilb}^i(X)_{x}$, so its $\chi$ is $\prod_{1\leq i\leq k}p_d(i)$. Sorry to bother you! – Brenin Jan 10 '14 at 17:11
No, the fiber is the disjoint union over all partitions of $n$ into $k$ numbers $a_1,\dots,a_k$ of the product of $Hilb^{a_i}(X)_x = Hilb^{a_i}(\mathbb C^d)_0$. – Will Sawin Jan 10 '14 at 17:13
https://www.physicsforums.com/threads/derive-the-general-formula-of-the-equation-of-a-circle-for-the-points.702805/ | # Homework Help: Derive the general formula of the equation of a circle for the points
1. Jul 24, 2013
### lionely
1. The problem statement, all variables and given/known data
1. (a) Find the equation of the circle with the straight line joining A(1, -1) and C(3, 4)
as diameter.
(b) Hence or otherwise, derive the general formula
$(x - x_1)(x - x_2) + (y - y_1)(y - y_2) = 0$
of the equation of a circle for the points A(x_1, y_1) and C(x_2, y_2).
2. Relevant equations
3. The attempt at a solution
I did part a) but part b) is baffling me could someone give me a hint on what to do?
Last edited: Jul 24, 2013
2. Jul 24, 2013
### Staff: Mentor
Something missing here? I would think that a "general formula" would be an equation, which the above is not. Is this the complete problem statement?
3. Jul 24, 2013
### lionely
sorry it was supposed to be equal to 0
4. Jul 24, 2013
### LCKurtz
If you did part a, do part b the same way using the variables instead of the numbers. Then see if you can get it in the required form.
Alternatively use this hint: Those points and the point (x,y) can be used to make an angle inscribed in a semicircle, so...
5. Jul 24, 2013
### lionely
Yeah I tried and took the gradient of the lines and got it. Thanks.
6. Jul 25, 2013
### HallsofIvy
Impossible - this is NOT the equation of a circle!
7. Jul 25, 2013
### Infrared
It looks like the solution to me
$$(x-x_1)(x-x_2)+(y-y_1)(y-y_2)=x^2-(x_1+x_2)x+x_1x_2+y^2-(y_1+y_2)y+y_1y_2 \\ =(x-\frac{x_1+x_2}{2})^2-(\frac{x_1+x_2}{2})^2+x_1x_2+(y-\frac{y_1+y_2}{2})^2-(\frac{y_1+y_2}{2})^2+y_1y_2=0 \\ (x-\frac{x_1+x_2}{2})^2+(y-\frac{y_1+y_2}{2})^2= \frac{x_1^2+2x_1x_2+x_2^2}{4}-x_1x_2+\frac{y_1^2+2y_1y_2+y_2^2}{4}-y_1y_2 \\ (x-\frac{x_1+x_2}{2})^2+(y-\frac{y_1+y_2}{2})^2= \frac{x_1^2-2x_1x_2+x_2^2}{4}+\frac{y_1^2-2y_1y_2+y_2^2}{4} \\ (x-\frac{x_1+x_2}{2})^2+(y-\frac{y_1+y_2}{2})^2= \frac{(x_1-x_2)^2+(y_1-y_2)^2}{4}$$
This looks like the equation of a circle with the right center and radius.
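(The same check can be done symbolically; a sympy sketch, not part of the original thread.)

```python
from sympy import symbols, expand

x, y, x1, x2, y1, y2 = symbols('x y x1 x2 y1 y2')

diameter_form = (x - x1)*(x - x2) + (y - y1)*(y - y2)

# Circle with center at the midpoint of AC and radius half the length of AC
cx, cy = (x1 + x2)/2, (y1 + y2)/2
r2 = ((x1 - x2)**2 + (y1 - y2)**2) / 4
standard_form = (x - cx)**2 + (y - cy)**2 - r2

print(expand(diameter_form - standard_form))  # 0: the two forms are identical
```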
8. Jul 25, 2013
### ehild
The centre of the circle is the midpoint of the diameter, at $O\left(\frac{x_1+x_2}{2},\ \frac{y_1+y_2}{2}\right)$. The radius is half of the diameter: $R=\frac12\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$. Writing up the equation of a circle with these parameters, arranging, factorising and simplifying, you get the desired formula.
ehild
9. Jul 25, 2013
### LCKurtz
Lionely seemed to get my hint. The equation is trivially correct if you note that if $P=(x,y)$ is a point on the circle with $A=(x_1,y_1),B=(x_2,y_2)$ then angle $APB$, being inscribed in a semicircle, is a right angle. The equation is simply $\vec{AP}\cdot \vec{BP}=0$.
10. Jul 25, 2013
### ehild
That is the nicest solution!
ehild
11. Jul 25, 2013
### lionely
I didn't use vectors; I took the gradient. But your solution was like 2 lines and mine was like 5 or more. Using vectors like that is ... awesome.
12. Jul 25, 2013
### symbolipoint
I'm confused about what you want in part (b). The general equation for a circle in two dimensions is
$(x-a)^2+(y-b)^2=r^2$
and comes from using the distance formula for an arbitrary center of the circle.
If you take your two given points and use the midpoint formula, this gives your circle's center. Half the length of the segment joining your two given points is the length of your radius.
Use the distance formula with the general point (x, y) to express the distance from the center to this general (x, y), set it equal to r, and then derive your equation.
http://math.stackexchange.com/questions/41888/lim-a-b-when-limb-does-not-exist | # lim (a + b) when lim(b) does not exist?
Suppose $a$ and $b$ are functions of $x$. Is it guaranteed that $$\lim_{x \to +\infty} a + b\text{ does not exist}$$ when $$\lim_{x \to +\infty} a = c\quad\text{and}\quad \lim_{x \to +\infty} b\text{ does not exist ?}$$
I suggest that the account of @crocs be suspended. I don't think there's any point in further efforts at cooperation; there is clearly no intent whatsoever of cooperating on his or her part. I don't usually object to strong language, but if the only response to diverse respectful and patient attempts at explaining the problems is just calling two of the people undertaking those efforts a bitch, that's a bit too much. – joriki May 29 '11 at 2:52
@Bill: It is not the OP's prerogative to "emphasize what they desire in whatever manner they so desire" - profanity is unacceptable, for example. Would you really allow, say, "this f---ing problem is really a piece of s---, can someone help me"? At any rate, even if the OP's post is within the bounds of decency, as crocs' was, as the FAQ indicates, "If you are not comfortable with the idea of your questions and answers being edited by other trusted users, this may not be the site for you." – Zev Chonoles May 29 '11 at 19:39
Please take this conversation to meta. (Bill, I don't know why you're automatically siding with the OP, whose offensive comments were all deleted by the time I got here, when there is still clear evidence that many other reasonable users were responding to very offensive behavior on the OP's part. I think you enjoy the narrative of the rest of math.SE oppressing new users too much to ask yourself whether it's actually true in this case.) – Qiaochu Yuan May 29 '11 at 20:21
@Bill: Know better than to... what, exactly? Make minor, yet justified, edits? I don't understand why you think experienced users should have to walk on eggshells on the off-chance the OP might start hurling profanity as soon as something happens that they don't like. Nothing anyone else did could possibly have been expected to provoke crocs' reaction - it was completely out of proportion. There isn't anything anyone could have done to avoid it; this was simply a consequence of crocs having a bad temper. – Zev Chonoles May 29 '11 at 20:45
@Bill: You do not know whether this could have easily been avoided because you do not know the facts of the case. The users who were present before all of the offensive comments were deleted actually know the facts, and you should be deferring to their judgment. Otherwise you have nothing to fall back on but your own prejudices against the system. Your comment that "users memories of heated conversations are not always reliable" is ridiculous: you don't even have any memories of the relevant incident, so why is your judgment more trustworthy than theirs? – Qiaochu Yuan May 29 '11 at 20:46
Yes. If the limit of $a+b$ existed, it would follow that
$$\lim_{x \to +\infty}b=\lim_{x \to +\infty} [(a + b) - a]=\lim_{x \to +\infty}(a+b)-\lim_{x \to +\infty}a\;.$$
Thank you! That helped very much! – crocs May 28 '11 at 23:21
Suppose, to get a contradiction, that our limit exists. That is, suppose $$\lim_{x\rightarrow \infty} a(x)+b(x)=d$$ exists. Then since $$\lim_{x\rightarrow \infty} -a(x)=-c,$$ and as limits are additive, we conclude that $$\lim_{x\rightarrow \infty} a(x)+b(x)-a(x)=d-c$$ which means $$\lim_{x\rightarrow \infty} b(x)=d-c.$$ But this is impossible since we had that $b(x)$ did not tend to a limit.
Hope that helps,
Thank you! That helped very much! – crocs May 28 '11 at 23:21
HINT $\$ This follows immediately from the fact that functions whose limit exists at $\rm\:\infty\:$ are closed under subtraction, i.e. they comprise an additive subgroup of all functions. Therefore, abstractly, this is essentially the same as the proof that the sum of an integer and non-integer is a non-integer. For further discussion of this complementary view of a subgroup see this post.
His answer is right only if $c$ is a number, which seems to be the case from the way you wrote the question.
Anyhow if you deal also with infinite limits, it is possible that $\lim_{x \to \infty}b$ does not exist and $\lim_{x \to \infty} a+b$ exists.
Just take $a=x-\sin(x)$ and $b=\sin(x)$.
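(One can see this counterexample concretely with sympy; a sketch, not from the original page. Recent sympy versions report the non-limit of $\sin x$ as an `AccumBounds` object.)

```python
from sympy import symbols, sin, limit, oo

x = symbols('x')
a = x - sin(x)
b = sin(x)

print(limit(b, x, oo))      # AccumBounds(-1, 1): lim b does not exist
print(limit(a + b, x, oo))  # oo: a + b = x, which "has a limit" only if infinite limits count
```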
when the limit is $\infty$, it does not exist. Writing $\lim\limits_{x\to a}f = \infty$ is a way of saying "the limit does not exist" and specifying why it does not exist. Similarly with limits "equal" to $-\infty$", and with limits as $x\to\infty$ or as $x\to-\infty$. Otherwise, limit theorems need to have a whole bunch of exceptions (e.g., it would no longer be true that if the limits of $f$ and $g$ 'exist', then so does the limit of $f+g$ and the latter is equal to their sum). – Arturo Magidin May 29 '11 at 1:17
I think that this depends from book to book and instructor to instructor. And the sum theorem, holds as long as it is not an indeterminate form, I am also pretty sure I saw somewhere that theorem stated for finite limits.. – N. S. May 29 '11 at 1:37
@Arturo: One could also argue as follows: In general, it's possible to consider convergence in the extended reals with respect to the order topology; this yields the usual definition of the limits $\pm\infty$. However, in that case not only the limits but also the function values may be $\pm\infty$. Then addition wouldn't be defined, so by using addition the OP was implying that the underlying set was just the reals. Allowing infinite limits but only finite values would be like allowing real limits but only rational values, i.e. considering elements outside the set as potential limits. – joriki May 29 '11 at 1:40
BTW: In most books L'H appears in the form: If bla bla and $\lim\frac{f'(x)}{g'(x)}$ exists then bla bla... Does that cover the case when this limit is $\pm \infty$? Of course I realize that this is not really a good point since most Calculus books are very poorly written, but I am used to making a distinction between "limit exists" and "limit is finite". This might come from the fact that I learned Analysis the old way, sequences first, and there convergence is already finite, so we always use the distinction "convergent" vs. "has a limit". For functions we replaced convergence with finite limits – N. S. May 29 '11 at 1:45
"The limit exists" is relative to the set you consider. You can consider the reals, the extended reals, or even the one-point compactification of the reals with just one point at infinity. The limit of the sequence $(-1)^nn$ exists in the one-point compactification but not in the extended reals. This shows that it's not just a matter of adding "enough" limit points, but also of choosing a topology. Again, there's an analogy to the rationals, which can be extended either to the reals or to the $p$-adic numbers. – joriki May 29 '11 at 1:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8687444925308228, "perplexity": 542.5955046052069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345774525/warc/CC-MAIN-20131218054934-00093-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://mathhelpforum.com/differential-geometry/158016-linear-subspace-banach-space-closed-if-only-if-complete.html | # Thread: Linear Subspace of a Banach Space closed if and only if it is complete.
1. ## Linear Subspace of a Banach Space closed if and only if it is complete.
I'm not really sure if I'm approaching this correctly, since I'm not using linearity. What exactly do I need to prove here?
2. I don't believe that for any linear subspace of a Banach space, every convergent sequence is Cauchy. Maybe I'm wrong?
3. Convergent sequences are always Cauchy. Cauchy sequences always converge in complete spaces (by definition, almost, depending on the author), but they don't have to converge if the space is not complete.
There isn't much to this proof, and I think you've certainly got the general idea. Closedness and completeness, as this proof shows, are pretty much the same thing.
https://math.stackexchange.com/questions/794709/prove-that-if-u-cdot-v-u-cdot-w-then-v-w/794717 | # Prove that if $u \cdot v = u \cdot w$ then $v = w$
I've tried putting it up as:
$$[u_1 v_1 + \ldots + u_n v_n] = [u_1 w_1 + \ldots + u_n w_n]$$
But this doesn't make it immediately clear...I can't simply divide by $u_1 + \ldots + u_n$ as these ($u$, $v$ and $w$) are vectors...
Any hints?
• There must be more to this. Perhaps it's supposed to say that if, for every vector $u$ one has $u\cdot v = u\cdot w$, then $v=w$? As it is, the statement is simply false.
– MJD
May 14 '14 at 15:31
• (As an example of what MJD's talking about) $(1,0,0)\cdot(0,1,0)=0$ and $(1,0,0)\cdot(0,0,1)=0$, but $(0,1,0)\neq(0,0,1)$. Is the question you really want to ask the one MJD mentions? May 14 '14 at 15:31
• What if $u=0$?! May 14 '14 at 15:36
• It just says: Suppose we know that $u v = u w$, does it follow that $v = w$? So I guess I could say: ''No, if u=0, then this does not need to hold''? May 14 '14 at 16:04
• . . . or you could give examples where $u\ne0$. May 14 '14 at 16:54
If $u\cdot v=u\cdot w$ for all $u$ (equivalently $u\cdot(v-w)=0$), then with $u=v-w$, we get $\|v-w\|^2=(v-w)\cdot(v-w)=0$. Hence $v=w$.
P.S.: Of course, if $v$ are $w$ assumed to be vectors from some inner-product space $S$ with a basis $s_1,\ldots,s_k$, then "for all $u$" can be replaced by "for $u=s_i$, $i=1,\ldots,k$".
• Hah, that's even better :) May 14 '14 at 15:37
• I'd never seen this one before; gorgeous! May 14 '14 at 15:41
• But what does $u=s_i=v-w$ mean then? Feb 26 '18 at 7:43
Presumably you want to add "for all $u$" to that question.
Rearranging, you get $u\cdot (v-w)=0$.
If $v-w\neq 0$, can you see how to pick a $u$ so that $u\cdot(v-w)\neq 0$? A very simple choice of $u$ would work.
By contrapositive, you will have proved that if $u\cdot (v-w)=0$ for all $u$, then $v=w$.
$$u\cdot v=u\cdot w$$
Others have shown how to show that $v=w$ if one assumes the above for all values of $u$.
To show that it's not true if one just assumes $u$, $v$, $w$ are some vectors, let's look at the circumstances in which it would fail. Recall that $u\cdot v = \|u\| \|v\|\cos\theta$ where $\theta$ is the angle between the vectors $u$ and $v$.
Thus one circumstance in which the conclusion does not hold is when $v$ and $w$ are of equal lengths, i.e. $\|v\|=\|w\|$, and both are at the same angle with $u$. Just draw a picture. One can rotate $v$ about an axis in which the vector $u$ lies and get many vectors $w$ having the same length as $v$ and making the same angles with $u$.
Another circumstance in which it fails is this: picture $u$ and $v$ as arrow pointing out from the origin, and draw a plane or hyperplane at right angles to $u$ passing through the endpoint of the arrowhead of $v$. Choose an arbitrary point in that hyperplane, and draw an arrow from the origin to that point. Call that vector $w$. Then show that $u\cdot v=u\cdot w$.
Can I not choose $u=(1,0,0)$, $u=(0,1,0)$ and $u=(0,0,1)$? When I plug them into $u\cdot v=u\cdot w$, I get three equations: $v_1=w_1$, $v_2=w_2$ and $v_3=w_3$, so $v$ must be equal to $w$.
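(A numeric version of that basis-vector argument; a numpy sketch, not part of the original page.)

```python
import numpy as np

v = np.array([2.0, -1.0, 3.0])
w = v.copy()  # pretend w is only known to satisfy u.v == u.w for all u

# Testing against the standard basis picks out one coordinate at a time.
for u in np.eye(3):
    assert np.dot(u, v) == np.dot(u, w)

# The slicker choice from the accepted answer: u = v - w gives ||v - w||^2 = 0.
u = v - w
print(np.dot(u, v - w))  # 0.0, hence v == w
```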
https://textilesgreen.in/miscible-liquids/ | Miscible liquids can be easily dissolved in any other liquid. you can able to see one layer or single layer on liquid.
Hello readers, welcome to another fresh article on "textilesgreen.in". Today we will discuss miscible liquids and immiscible liquids.
So hold on to your seat; by the end, I will have provided valuable information regarding this topic.
Let's get started.
## Miscible liquids
Miscible liquids dissolve easily and completely in another liquid and mix properly with each other in all proportions; such liquids are called miscible liquids.
If two substances are mixed together and the result is one layer in the liquid, they are called miscible liquids.
For example:
ethanol and water.
In other words, it is defined as follows:
If a polar molecule (water) is mixed with another polar substance (ethanol), they dissolve in one another and show one layer in the mixed solution. These are called miscible liquids.
Where,
Ethanol = polar substance
Water = polar substance
polar substance (ethanol) + polar substance (water) = dissolve in one another.
If the mixture looks like separate layers in the mixed solution, then the liquids are called immiscible liquids,
whereas
if it looks like one layer in the mixed solution, then the liquids are called miscible liquids.
### Miscible and immiscible liquids
Miscible liquids:
If two liquids completely and properly dissolve in each other, they are miscible liquids.
Miscible liquids dissolve in each other in all proportions. A miscible liquid can easily dissolve in another liquid, and if liquids are miscible you will be able to see one layer in the liquid; it does not look like separate layers but like a single layer. These are called miscible liquids.
For example:
ethanol and water (both are polar substances).
If both polar substances (ethanol and water) are mixed together, the mixture gives a single layer in solution, so they are miscible liquids. Ethanol and water is thus the best example of miscible liquids.
Immiscible liquids:
If two liquids do not dissolve in each other, they are immiscible liquids.
"Polar water and non-polar oil are immiscible liquids: they do not mix to make one solution."
Immiscible liquids do not dissolve in each other in any proportion; they do not dissolve in one another at all. If liquids are immiscible, you will be able to see separate layers in the liquid. These are called immiscible liquids.
For example,
Oil and water (oil is a non-polar substance whereas water is polar)
it is the best example of immiscible liquids.
But,
oil = non-polar substance, whereas
water = polar substance.
So, if a polar substance (water) is mixed with a non-polar substance (oil), they do not dissolve in each other; they make a separate layer in the solution. In this condition they are called immiscible liquids.
### Consider the following solvent pairs when mixed: which are immiscible/miscible?
Consider the following solvent pairs when mixed: which are immiscible, and which miscible? If the solvents are immiscible, which solvent would be in the top layer?
1. ethanol and water
2. methylene chloride and water
3. ether and water
4. hexane and water
Answer: Ethanol and water are miscible. Methylene chloride, ether, and hexane are each immiscible with water. Methylene chloride is denser than water, so water forms the top layer; ether and hexane are less dense than water, so they form the top layer.
### Which compound do you expect to be miscible with octane (C8H18)?
Which compound do you expect to be miscible with octane (C8H18)?
1. CH3OH
2. CBr4
3. H2O
4. NH3
Ans: We know that octane is an example of a hydrocarbon; it is made from carbon and hydrogen.
You know that hydrocarbons are non-polar compounds,
so octane is a non-polar molecule, and
non-polar + non-polar = miscible:
CBr4 + C8H18 = miscible.
So, among these options, option 2 (CBr4) is the compound miscible with octane.
https://ri.itba.edu.ar/items/af708a69-7463-4372-9e3d-f0c5444ecdc5 | ## Effect of physical distancing on the speed-density relation in pedestrian dynamics
2021-04
##### Authors
Echeverría Huarte, Iñaki
Garcimartín, Ángel
Parisi, Daniel
Martín-Gómez, César
##### Abstract
"We report experimental results of the speed-density relation emerging in pedestrian dynamics when individuals keep a prescribed safety distance among them. To this end, we characterize the movement of a group of people roaming inside an enclosure varying different experimental parameters: (i) global density, (ii) prescribed walking speed, and (iii) suggested safety distance. Then, by means of the Voronoi diagram we are able to compute the local density associated to each pedestrian, which is afterward correlated with its corresponding velocity at each time. In this way, we discover a strong dependence of the speed-density relation on the experimental conditions, especially with the (prescribed) free speed. We also observe that when pedestrians walk slowly, the speed-density relation depends on the global macroscopic density of the system, and not only on the local one. Finally, we demonstrate that for the same experiment, each pedestrian follows a distinct behavior, thus giving rise to multiple speed-density curves." | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8518161177635193, "perplexity": 1065.2403701699675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711417.46/warc/CC-MAIN-20221209144722-20221209174722-00341.warc.gz"} |
https://911weknow.com/interpreting-the-impact-size-of-logistic-regression-coefficients | # Interpreting the Impact Size of Logistic Regression Coefficients
## For Discrete, Continuous, and Standardized Variables
(Figure: the logit function)
If you're trying to learn machine learning nowadays, chances are that you have encountered logistic regression at some point. As one of the most popular and approachable machine learning algorithms, the theory behind logistic regression has been explained inside and out by many people. One area that is less explained, however, is how to translate coefficients into exact impact-size measures. Rather than ranking coefficients and concluding feature A is more important than feature B, we want to interpret the result of logistic regression as something like "flipping feature A doubles the odds of the positive outcome, and increasing feature B by 1 unit decreases the odds of the positive outcome by 60%".
Logistic Regression Review
To start with, let's review some concepts in logistic regression. The dependent variable of logistic regression is binary, and the "log-odds" of the dependent variable's probability is modeled by a linear combination of independent variables:
$$\log\frac{p}{1-p} = \beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n$$
The logit function is defined as the logged odds of the probability p:
$$\mathrm{logit}(p) = \log\frac{p}{1-p}$$
The odds of an event is the probability that it happens over the probability that it doesn't happen. For example, if the probability of an event is 0.8, the odds of the event occurring are 0.8/0.2 = 4, which is to say that the event will occur 4 times for every time it does not occur, and the event is 300% more likely to happen than not.
Let's take an example of predicting diabetes (diabetes = 1, not diabetes = 0) from a patient's age, gender, body mass index, and blood pressure. Let's assume the data has been fitted with logistic regression and that the performance of the model has been validated using cross-validation (very important, to prevent overfitting). The coefficients for the variables have been estimated, and we want to interpret them in terms of impact size.
Binary variables:
In the example, gender is a binary variable (male = 0 and female = 1); let's pretend that the trained logistic regression gives this feature a coefficient of 0.6. It's straightforward to interpret the impact size if the model is a linear regression: an increase of the independent variable by 1 unit will result in an increase of the dependent variable by 0.6. With the logit transformation, the change in the target of logistic regression is not as obvious.
To illustrate the derivation, let's plug the coefficients and variables representing the gender of patients into the equation above; we have:
To cancel the factors contributed by the other variables, take the difference of the two previous equations:
This means that, provided all the other metrics are the same, flipping the gender from male to female increases the log-odds of getting diabetes by 0.6.
To convert log-odds to odds, we take the exponential on both sides of the equation, which results in the ratio of the odds being 1.82.
From the derivation, we can see that the impact size of a logistic regression coefficient can be directly translated to an odds ratio, which is a relative measure of impact size that is not necessarily related to the innate probability of the event. If the odds ratio is equal to 1, it means the odds of the events in the numerator are the same as the odds of the events in the denominator, and if the odds ratio is above 1, the events in the numerator have favorable odds compared to the events in the denominator.
Coming back to the example, the coefficient of the gender feature being 0.6 can be interpreted as: the odds of females getting diabetes over the odds of males getting diabetes is exp(0.6) = 1.82, with all the other variables fixed. In terms of percentage change, the odds for females getting diabetes are 82% higher than the odds for males getting diabetes.
Continuous variables:
Another variable in the example of predicting diabetes is age, which is a continuous variable; let's say the trained logistic regression coefficient for this variable is -1.5. Let's repeat the same exercise as we did for the binary variable by increasing the patient's age by one year:
Cancel the common factors by taking a difference,
and then express the impact size as an odds ratio.
This result says that, holding all the other variables fixed, by increasing age by one year we expect to see the odds of getting diabetes reduced by about 78% (exp(-1.5) ≈ 0.22).
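(A quick numeric check of both conversions, using the made-up coefficients from the running example; a sketch, not from the original article.)

```python
import numpy as np

coefs = {"gender (male -> female)": 0.6, "age (+1 year)": -1.5}

for name, b in coefs.items():
    odds_ratio = np.exp(b)
    pct_change = 100 * (odds_ratio - 1)
    print(f"{name}: OR = {odds_ratio:.2f}, odds change = {pct_change:+.0f}%")
# gender (male -> female): OR = 1.82, odds change = +82%
# age (+1 year):           OR = 0.22, odds change = -78%
```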
Standardized variables:
One common pre-processing step when performing logistic regression is to scale the independent variables to the same level (zero mean and unit variance). The motivation for this type of scaling, named standardization, is to make the feature coefficient scales comparable with each other and to facilitate the convergence of the regression algorithm. The regression coefficients obtained from standardized variables are called standardized coefficients. In our example, age and blood pressure have completely different scales and units - with standardized coefficients we are able to say which feature has a greater impact on diabetes. But how do we get from these standardized coefficients back to odds ratios with interpretable units?
The trick here is to convert the standardized unit back to the original unit of the feature. For example, say that in the diabetes study the patients' ages have a standard deviation of 10 years, and the fitted logistic regression gives this feature a standardized coefficient of 2. This means that by increasing age by one standardized unit, the odds of getting diabetes are multiplied by exp(2) = 7.39 (i.e. the odds increase by 639%). From the scaling transformation we know that one standardized unit of age equals 10 years, which is the standard deviation of age before the transformation. Plugging this information back in, we can conclude that increasing a patient's age by 10 years will lead to an increase in the odds of getting diabetes of 639%.
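(And the unscaling arithmetic as a sketch, assuming the standardized coefficient of 2 and the 10-year standard deviation from the example.)

```python
import numpy as np

beta_std = 2.0   # coefficient on standardized age
sd_age = 10.0    # standard deviation of age, in years

print(np.exp(beta_std))           # 7.39: odds ratio per one SD, i.e. per 10 years (+639%)

# Per single year, first rescale the coefficient by the SD:
print(np.exp(beta_std / sd_age))  # 1.22: about +22% odds per additional year
```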
https://homework.zookal.com/questions-and-answers/my-batmobile-car-has-a-mass-of-1000-kg-it-899508739 | # Question: my batmobile car has a mass of 1000 kg it...
My Batmobile car has a mass of 1000 kg. It starts from rest and drives slowly up the hill. At the top, it has increased its elevation by 5 m and has a speed of 8 m/s. While doing this, it has given off 7 kJ of heat to the surroundings. How much work did the engine need to provide to the car in this process?
Neglect any change in internal energy of the car. Neglect any friction that the car had to overcome. Hint: Consider analyzing the car as the system (with the weightless engine outside of the system).
Answer: absolute value is about 90 kJ. Be sure to correctly provide the sign. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8140119910240173, "perplexity": 757.7522182038804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988802.93/warc/CC-MAIN-20210507181103-20210507211103-00387.warc.gz"} |
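A sketch of the first-law bookkeeping, taking the car as the system with work in and heat out (g ≈ 9.81 m/s² is an assumed value):

```python
# First law for the car as a closed system: W_in = dKE + dPE + Q_out
# (internal energy change and friction neglected, per the problem statement).
m = 1000.0   # kg
g = 9.81     # m/s^2, assumed standard gravity
h = 5.0      # m, elevation gain
v = 8.0      # m/s, final speed (starts from rest)
q_out = 7e3  # J, heat given off to the surroundings

d_ke = 0.5 * m * v**2        # 32 kJ
d_pe = m * g * h             # ~49 kJ
w_in = d_ke + d_pe + q_out   # work the engine must provide

print(f"Work required: {w_in / 1e3:.1f} kJ")  # ~88.1 kJ, i.e. about 90 kJ
```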
http://astronomy.stackexchange.com/questions/4720/if-an-object-with-mass-were-to-somehow-go-the-speed-of-light-would-it-destroy-t | # If an object with mass were to somehow go the speed of light, would it destroy the whole universe?
Would an object with mass traveling the speed of light destroy the whole universe because it would have infinite energy / mass? According to http://en.wikipedia.org/wiki/Speed_of_light#Upper_limit_on_speeds and other sources, it takes infinite energy to go the speed of light, which should be impossible. However, for the sake of the question, let's assume it were somehow achieved: (e.g. potentially as described here: http://www.popsci.com/technology/article/2012-03/pro-tip-flying-faster-speed-light-could-have-devastating-consequences). My premise behind it possibly destroying the whole universe is that an object with infinite energy should also have infinite mass (E=mc^2, and infinity / c^2 is still infinity). Infinite mass should have infinite gravity, which may well destroy the entire universe (although, would this destruction travel at the speed of light, so it would "slowly" destroy the universe vs. instantly?). One thought is that it could just form a black hole instead, but as I understand it, a black hole is an infinitely small (singularity) of infinite density, with a "net mass" equal to the net mass that it has ingested.
The proposed damaging effects of the Alcubierre drive should only cause damage to a small area of space at its destination, if any damage is incurred. – HDE 226868 Aug 5 at 20:07
Would an object with mass traveling the speed of light destroy the whole universe because it would have infinite energy / mass?
If we understand the question as a limiting process, which is the only way it makes any sense, the answer is no. For illustrative simplicity, take a spherically symmetric isolated body, so that its exterior gravitational field is the Schwarzschild spacetime. Now boost this to ultrarelativistic speeds. In the speed of light limit, the result is a linearly polarized axisymmetric gravitational pp-wave, the Aichelburg-Sexl ultraboost.
Its effect on test particles it passes is an impulse: it instantaneously bends the worldlines, but does not destroy anything. You can think of this as infinite force acting for an infinitesimally small time, making the overall effect finite and nondestructive, somewhat analogously a particle encountering a Dirac-delta potential in Newtonian physics.
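Schematically, this impulsive geometry is a pp-wave with a Dirac-delta profile; in one common set of conventions (a sketch, with sign and normalization varying between references) it reads

$$ds^2 = -\,du\,dv + dx^2 + dy^2 + h(x,y)\,\delta(u)\,du^2, \qquad h(x,y) \propto \mu\,\log\!\left(x^2+y^2\right),$$

where $u,v$ are null coordinates along the direction of motion and $\mu$ is the energy of the boosted source. The $\delta(u)$ factor is exactly why test particles feel an instantaneous impulse rather than an extended force.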
This is similar to the ultrarelativistic limit of a moving electric charge. The electric field lines are Lorentz-contracted along the direction of travel, squeezing them together (image credit: Wikipedia):
In the ultrarelativistic limit, the electric field in the transverse direction becomes infinitely strong, but it is also Lorentz-contracted to be infinitely thin. Taking the magnetic field into account as well, the result is an electromagnetic plane wave with a Dirac delta profile. The gravitational case is analogous, though quantitatively different. Some details on this can be found in arXiv:gr-qc/0110032.
My premise behind it possibly destroying the whole universe is that an object with infinite energy should also have infinite mass (E=mc^2, and infinity / c^2 is still infinity).
The concept of 'relativistic mass' is largely deprecated in modern physics. Mass is more properly related to energy and momentum via $(mc^2)^2 = E^2 - (pc)^2$.
One thought is that it could just form a black hole instead, ...
No, that's completely wrong. It doesn't become a black hole for the same reason it doesn't destroy the universe: the gravitating body moving relativistically past a stationary observer is physically equivalent to a relativistic observer moving past a stationary gravitating body. But the observer moving relativistically obviously does nothing to the gravitating body!
https://chemistry.stackexchange.com/questions/55711/is-perovskite-symmetric-under-exchange-of-a-and-b-atoms | # Is perovskite symmetric under exchange of A and B atoms?
A perovskite is any crystal with the general formula $\ce{ABX3}$ where if the $\ce{B}$ atoms form a cube, an $\ce{A}$ atom sits in its center and the $\ce{X}$ atoms sit between neighboring $\ce{B}$ atoms.
Is the perovskite structure symmetric under exchange of $\ce{A}$ and $\ce{B}$ atoms? That is, if I replace every $\ce{A}$ atom with a $\ce{B}$ atom and vice versa, will I end up with the same exact structure as before? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9282763004302979, "perplexity": 511.70337735420145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00077.warc.gz"} |
http://link.springer.com/article/10.2478%2Fs13540-012-0022-3 | Volume 15, Issue 2, pp 304-313
Date: 18 Mar 2012
The mean value theorems and a Nagumo-type uniqueness theorem for Caputo’s fractional calculus
Abstract
We generalize the classical mean value theorem of differential calculus by allowing the use of a Caputo-type fractional derivative instead of the commonly used first-order derivative. Similarly, we generalize the classical mean value theorem for integrals by allowing the corresponding fractional integral, viz. the Riemann-Liouville operator, instead of a classical (first-order) integral. As an application of the former result we then prove a uniqueness theorem for initial value problems involving Caputo-type fractional differential operators. This theorem generalizes the classical Nagumo theorem for first-order differential equations.
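For reference, the Caputo derivative mentioned in the abstract is commonly defined, for $n-1 < \alpha < n$ and sufficiently smooth $f$, by

$$({}^{C}\!D^{\alpha}_{a} f)(t) = \frac{1}{\Gamma(n-\alpha)} \int_a^t (t-s)^{\,n-\alpha-1} f^{(n)}(s)\,ds,$$

and the Riemann-Liouville fractional integral by

$$(I^{\alpha}_{a} f)(t) = \frac{1}{\Gamma(\alpha)} \int_a^t (t-s)^{\,\alpha-1} f(s)\,ds.$$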
Dedicated to the memory of my teacher, Professor Dr. Helmut Braß | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9441453218460083, "perplexity": 667.4032910257619}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535917663.12/warc/CC-MAIN-20140901014517-00366-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://onepetro.org/ISOPEIOPEC/proceedings-abstract/ISOPE96/All-ISOPE96/ISOPE-I-96-241/23794 | ABSTRACT
The present paper deals with the efficiency of floating breakwaters consisting of an array of truncated vertical cylinders, which can be used to protect nearshore or offshore sites. Extensive experimental data for several configurations of multiple truncated cylinders placed in a wave flume and exposed to the action of both regular and random waves are presented. The transmitted wave field and the exciting forces on particular members of the cylinder array are measured and comparisons are made with corresponding numerical predictions. In addition, first- and mean second-order forces for a number of prototype configurations in the open sea are also given.
INTRODUCTION
In deep waters, where the construction of a conventional breakwater is either impossible or very expensive, floating breakwaters provide an efficient alternative solution to the problem of protecting nearshore or offshore sites. The present paper deals with the efficiency of floating breakwaters consisting of an array of truncated vertical cylinders. A set of different configurations of cylinders has been experimentally investigated in a systematic way: 1:10 scale models of vertical truncated cylinders were placed in the wave flume of the Laboratory for Ship and Marine Hydrodynamics of the National Technical University of Athens and exposed to the action of both regular waves and irregular seas. Numerical and experimental aspects of the problem are considered. In the numerical study, first- and mean second-order loads on several prototype configurations of multiple truncated vertical cylinders in the open sea are given. The prediction of the mean wave loads on the cylinders is a prerequisite for the design of a proper mooring arrangement for the floating breakwater. The first-order diffraction and radiation problems are solved using an exact formulation which makes use of single-body hydrodynamic characteristics and takes into account the interaction phenomena through the physical idea of multiple scattering (Mavrakos and Koumoutsakos, 1987; Mavrakos, 1991).
https://www.computer.org/csdl/trans/tp/2002/09/i1286-abs.html | Issue No. 09 - September (2002 vol. 24)
ISSN: 0162-8828
pp: 1286-1290
ABSTRACT
In the energy spectrum of an occlusion sequence, the distortion term has the same orientation as the velocity of the occluding signal. Recent works claimed that this oriented structure can be used to distinguish the occluding velocity from the occluded one. Here, we argue that the orientation structure of the distortion cannot always work as a reliable feature due to the rapidly decreasing energy contribution. This already weak orientation structure is further blurred by a superposition of distinct distortion components. We also indicate that the superposition principle of Shizawa and Mase for multiple motion estimation needs to be adjusted.
INDEX TERMS
Optical flow, occlusion, motion discontinuities, spectral analysis.
CITATION
Weichuan Yu, Gerald Sommer, Steven Beauchemin, Kostas Daniilidis, "Oriented Structure of the Occlusion Distortion: Is It Reliable?", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 24, no. 9, pp. 1286-1290, September 2002, doi:10.1109/TPAMI.2002.1033220
https://infoscience.epfl.ch/record/214056 | ## First measurement of the differential branching fraction and CP asymmetry of the $B^{\pm} \to \pi^{\pm} \mu^{+} \mu^{-}$ decay
The differential branching fraction with respect to the dimuon invariant mass squared, and the CP asymmetry of the $B^{\pm} \to \pi^{\pm} \mu^{+} \mu^{-}$ decay, are measured for the first time. The CKM matrix elements $|V_{td}|$ and $|V_{ts}|$, and the ratio $|V_{td}/V_{ts}|$, are determined. The analysis is performed using proton-proton collision data corresponding to an integrated luminosity of 3.0 fb$^{-1}$, collected by the LHCb experiment at centre-of-mass energies of 7 and 8 TeV. The total branching fraction and CP asymmetry of $B^{\pm} \to \pi^{\pm} \mu^{+} \mu^{-}$ decays are measured to be $\mathcal{B}(B^{\pm} \to \pi^{\pm} \mu^{+} \mu^{-}) = (1.83 \pm 0.24 \pm 0.05) \times 10^{-8}$ and $\mathcal{A}_{CP}(B^{\pm} \to \pi^{\pm} \mu^{+} \mu^{-}) = -0.11 \pm 0.12 \pm 0.01$, where the first uncertainties are statistical and the second are systematic. These are the most precise measurements of these observables to date, and they are compatible with the predictions of the Standard Model.
Published in: Journal of High Energy Physics, 10, 053 (2015). Publisher: Springer, New York.
https://www.physicsforums.com/threads/proof-by-contradiction.277882/ | # Proof By Contradiction
1. Dec 7, 2008
### kathrynag
1. The problem statement, all variables and given/known data
I just need to decide how to show this by contradiction.
If either A or B is the empty set then $A \times B = \emptyset$.
2. Relevant equations
3. The attempt at a solution
Here is how I started:
Assume either A or B is the empty set and $A \times B \neq \emptyset$.
2. Dec 7, 2008
### statdad
Reasonable start. What would $$A \times B \ne \emptyset$$ mean?
3. Dec 7, 2008
### kathrynag
Is that the correct way to do a proof by contradiction?
AxB is defined as the set consisting of all ordered pairs (x,y) in which x is an element of A and y is an element of B. So, x and y exist?
4. Dec 7, 2008
### statdad
Close: if you assume $$A \times B \ne \emptyset$$, there must be at least
one element $$(a,b) \in A \times B$$. If you think about the definition of cartesian products, this will lead to a contradiction - about what? (Hint: what did you assume about $$A \text{ and } B$$?)
5. Dec 7, 2008
### kathrynag
Ok here's my idea for the proof.
Let A = null set and B be arbitrary. Then AxB= null set because of the definition of AxB. But there is no x which is an element of A. Therefore AxB=null set. Thus, contradiciton.
6. Dec 7, 2008
### statdad
No - you can't assume $$A \times B = \emptyset$$ and try to proceed with a proof by contradiction.
Assume $$A= \emptyset$$ ($$B$$ may or may not be empty: that is unimportant).
If $$A \times B \ne \emptyset$$, then (by definition of the Cartesian Product and non-empty set)
you can find an element of the product, say $$(a,b) \in A \times B$$.
This means $$b \in B$$. From where do you get the object $$a$$?
Answering the second question gives the contradiction.
7. Dec 7, 2008
### kathrynag
a is an element of A.
Oh, but then that means A is nonempty and this is a contradiction?
8. Dec 7, 2008
### statdad
"a is an element of A.
Oh, but then that mean A is nonempt and this a contradicition?"
- yup - it contradicts $$A = \emptyset$$
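Assembled into a single argument, the proof developed in this thread reads: suppose $A = \emptyset$ (the case $B = \emptyset$ is symmetric) and, for contradiction, that $A \times B \neq \emptyset$. Then there is at least one element $(a,b) \in A \times B$, and by the definition of the Cartesian product $a \in A$. This contradicts $A = \emptyset$, so $A \times B = \emptyset$.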
http://mathoverflow.net/questions/103133/is-the-number-of-twists-of-a-curve-with-a-section-in-a-given-field-finite?sort=votes | # Is the number of twists of a curve with a section in a given field finite
Let $X$ be a smooth projective geometrically connected curve over a number field $K$ of genus $g\geq 2$.
Is the number of twists of $X$ always infinite? (The answer is no, because there aren't any twists of $X$ if the automorphism group of $X_{\bar K}$ is trivial. To obtain an example of such a curve consider a general curve.)
Let $L/K$ be a finite field extension. Are there only finitely many twists $Y$ of $X$ defined over $K$ such that $Y_L=X_L$?
Fact 1 (The Hurwitz Bound): If $X$ is a smooth projective connected curve of genus $g\ge 2$ over $\mathbf{C}$ then $$| Aut_{\mathbf C }(X)| \le 84(g-1)$$
Fact 2: $Aut_\mathbf{C}(X) = Aut_{\overline K}(X)$ (The sentence that says "If $\phi$ is an automorphism of $X$ then $\phi$ must be one of these possibilities" is first order, and thus by the first-order completeness of algebraically closed fields of characteristic zero...)
Fact 3 (See Silverman's Arithmetic of Elliptic Curves chapter 10 or Serre's Galois Cohomology or Berhuy's notes or...): The twists $W_{/K}$ of a variety $V_{/K}$ are given up to isomorphism by the pointed set $H^1(Gal(\overline K /K),Aut_{\overline K}(V))$. Assume now that $L/K$ is Galois. If not, you can just replace $L$ by its Galois closure. The twists which become trivial over $L$ are given up to isomorphism by $H^1(Gal(L/K),Aut_{\overline K}(V))$
Fact 4 (Exercise): $$|H^1(Gal(L/K),Aut_{\overline K}(V))| \le 84(g-1) | Gal(L/K)|$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9815965294837952, "perplexity": 64.04688647207341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986615.83/warc/CC-MAIN-20150728002306-00228-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/133144/the-equation-x22-y3-admits-a-unique-solution-in-positive-integer/133149 | # The equation $x^2+2=y^3$ admits a unique solution in positive integers [closed]
How does one prove that the equation $x^2+2=y^3$ admits a unique solution in positive integers?
The first step is to switch the roles of $x$ and $y$, since $y^2 = x^3 - 2$ is the usual way it is written nowadays. Then read about Mordell's equation, which can be found in many references on Diophantine equations or elliptic curves. Your question is not a research-level question, so it's more suitable for math.stackexchange if you need further assistance. – KConrad Jun 8 at 13:23
...though $x^2 + 2 = y^3$ is the equivalent form that starts the usual proof via unique factorization in ${\bf Z}[\sqrt{-2}]$ (as in Geoff Robinson's answer), which is more down-to-earth than elliptic curves. But yes, this is too well-known and elementary for MO. – Noam D. Elkies Jun 8 at 18:41
You don't need the general theory of Mordell's equation to handle this particular case. Since $\mathbb{Z}[\sqrt{-2}]$ is a Euclidean domain whose only units are $\pm 1,$ and $x+\sqrt{-2}$ and $x-\sqrt{-2}$ are coprime there (note that $x$ must be odd), it follows that there are integers $a$ and $b$ such that $x + \sqrt{-2} = (a+b\sqrt{-2})^{3}.$ Comparing coefficients of $\sqrt{-2}$ gives $1 = 3a^{2}b - 2b^{3} = b(3a^{2}-2b^{2}).$ Hence $b = \pm 1.$ The case $b = 1$ leads to $a = \pm 1,$ hence $x = 5$ and $y = 3$. The case $b = -1$ leads to a contradiction.
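A quick numerical cross-check of uniqueness in a finite window (a sketch; the search bound is arbitrary and can be raised):

```python
from math import isqrt

# Search positive integer solutions of x^2 + 2 = y^3 for y up to a bound.
solutions = []
for y in range(2, 10**5):
    x_squared = y**3 - 2
    x = isqrt(x_squared)
    if x * x == x_squared:
        solutions.append((x, y))

print(solutions)  # [(5, 3)] -- the only solution found in this range
```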
https://synasc.ro/2021/cristian-s-calude/ | ## Gödel Incompleteness and Proof Assistants
### by Cristian S. Calude, University of Auckland, New Zealand
Abstract:
Gödel’s first incompleteness theorem states that in every consistent system of axioms which contains a modicum of arithmetic and its theorems can be listed by an algorithm, there exist true statements about natural numbers that are unprovable in the system. The second incompleteness theorem states that such a system cannot demonstrate its own consistency. J. von Neumann called them “a landmark which will remain visible far in space and time.”
The talk will briefly show that both incompleteness results follow from the undecidability of the Halting problem. Then we will discuss theoretically whether the incompleteness results give a “coup de grâce” to Hilbert’s Programme. Finally, we will argue that the current practice of mathematics, which uses extensively proof-assistants, is changing the theoretical views about the impact of incompleteness. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8083044290542603, "perplexity": 906.1242092254026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00528.warc.gz"} |
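The reduction mentioned here rests on the classical diagonal argument that no algorithm decides halting. A minimal sketch in Python (`halts` is a hypothetical decider, assumed only to derive the contradiction):

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: True iff the program halts on the input.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError

def paradox(source: str) -> None:
    # Do the opposite of whatever the decider predicts for self-application.
    if halts(source, source):
        while True:  # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

# Running paradox on its own source code makes halts wrong either way,
# so no such total, correct decider can exist.
```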
https://math.stackexchange.com/questions/695320/partial-sums-and-the-leibniz-formula-for-pi/2878748 | # Partial Sums and the Leibniz Formula for Pi
How do I calculate the first few partial sums for the Leibniz formula? I have to do a total of ten; can someone post how to calculate the first few? I'm a bit lost.
$$\sum_{n=0}^{\infty}(-1)^{n} \frac{1}{2n+1}$$
• Like, $1$, then $1-\frac13 = \frac23$, then $1-\frac13+\frac15 = \frac{13}{15}$? – Daniel Fischer Mar 1 '14 at 18:49
• Of course, to calculate $\pi$, you have to multiply the partial sums by $4$... – Thomas Andrews Mar 1 '14 at 19:15
## 3 Answers
The $N$th partial sum of a series $\sum_{n=0}^\infty a_n$ is defined to be $$S_N=\sum_{n=0}^Na_n=a_0+a_1+\cdots+a_N.$$ (see Wikipedia). Thus the first few partial sums of the series $$\sum_{n=0}^\infty(-1)^n\frac{1}{2n+1}$$ are \begin{align*} S_0 & =\left[(-1)^0\frac{1}{2\cdot 0+1}\right]=1\\\\\\ S_1 & =\left[(-1)^0\frac{1}{2\cdot 0+1}\right]+\left[(-1)^1\frac{1}{2\cdot 1+1}\right]=1-\frac{1}{3}=\frac{2}{3}\\\\\\ S_2 & =\left[(-1)^0\frac{1}{2\cdot 0+1}\right]+\left[(-1)^1\frac{1}{2\cdot 1+1}\right]+\left[(-1)^2\frac{1}{2\cdot 2+1}\right]=1-\frac{1}{3}+\frac{1}{5}=\frac{13}{15}\\ \end{align*} I leave it to you calculate them up to $S_{10}$ (you will probably want to use Wolfram Alpha or a calculator.)
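A short script does the same job (a sketch; `Fraction` keeps the partial sums exact):

```python
from fractions import Fraction

def partial_sums(N):
    """Exact partial sums S_0, ..., S_N of sum_{n>=0} (-1)^n / (2n+1)."""
    s, out = Fraction(0), []
    for n in range(N + 1):
        s += Fraction((-1)**n, 2*n + 1)
        out.append(s)
    return out

for k, s in enumerate(partial_sums(10)):
    print(f"S_{k} = {s} ~ {float(s):.6f}")  # 4*S_k converges to pi
```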
To calculate the sum of the series:
We know that the power series of $\frac1{1+x}$ is $$\frac1{1+x}=\sum_{n=0}^\infty (-1)^nx^n$$ so we integrate term by term for $|x|<1$ $$\arctan x=\int_0^x\frac{dt}{1+t^2}=\sum_{n=0}^\infty \frac{(-1)^n}{2n+1}x^{2n+1}$$
The series also converges at $x=1$ (and by Abel's theorem the sum equals the limit of $\arctan x$ there), so we find
$$\sum_{n=0}^\infty \frac{(-1)^n}{2n+1}=\arctan 1=\frac\pi4$$
• That was not the question asked. – Thomas Andrews Mar 1 '14 at 19:16
Answering 4 years later! I wanted to see if there was a nice closed form for the partial sums, and this post is about as good a place as any to record my findings.
$$S_k=\sum_{n=0}^k{\frac{(-1)^n}{2n+1}}$$
$S_0=1,\\ S_1=\frac{2}{3},\\ S_2=\frac{13}{15},\\ S_3= \frac{76}{105},\\S_4=\frac{789}{945},\\ S_5=\frac{7734}{10395},\\ S_6=\frac{110937}{135135} ,\\ S_7=\frac{1528920}{2027025},\\ S_8=\frac{28018665}{34459425},\\ S_{9}=\frac{497895210}{654729075}$
And in general,
$S_n =\frac{a_n}{b_n}$
Where $a_n$ is A024199$(n)$ and $b_n=(2n+1)!!$ is A001147$(n)$ and double factorial is defined in the classic way. Based on how A024199 is defined, it doesn't look like there is a nice way to express the fraction $S_n$ without invoking $\Sigma$ notation.
https://www.lessonplanet.com/teachers/biochemistry-assignment | # Biochemistry Assignment
In this biochemistry worksheet, students complete a table by filling in the missing information about different elements. Students draw the Bohr diagram and the Lewis dot diagram for several atoms. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9332849383354187, "perplexity": 4034.0454376432062}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424610.13/warc/CC-MAIN-20170723202459-20170723222459-00485.warc.gz"} |
https://math.stackexchange.com/questions/939261/how-to-prove-c-from-a-leftrightarrow-b-leftrightarrow-c-and-a-leftrigh | # How to prove $C$ from $A \leftrightarrow (B \leftrightarrow C)$ and $A \leftrightarrow B$?
How does one prove $C$ from the premises: $A \leftrightarrow (B \leftrightarrow C)$ and $A \leftrightarrow B$ ?
I've tried to prove $C$ by contradiction, using a sub-proof which presumes $\neg C$, but although I can conclude all of the following in the subproof: $\neg A$, $\neg B$, $\neg (B \leftrightarrow C)$, I'm unable to find a contradiction this way.
I've been stuck on this for the whole day, and I think I might be over-thinking the problem.
Note: I want to prove this using the basic first-order logic rules (I'm using the First-Order Logic from the Language, Proof and Logic book).
• There are several 'basic FOL rules'. I take it you're using the ones from the LPL book because you mentioned it in this question. If this is the case, then I suggest you instead say it's the rules from this book. – Git Gud Sep 20 '14 at 18:34
• You are indeed correct. I did not know that there were so many different kinds of FOL basics. I have appended it in the question. – Qqwy Sep 20 '14 at 18:49
• I would try proving that ((A↔(B↔C))$\rightarrow$((A↔B)↔C)) first. I could put up a proof in a different natural deduction system than yours if you'd like. The system I refer to has no negation introduction rule. – Doug Spoonwood Sep 20 '14 at 21:43
Due to the transitivity of $\leftrightarrow$ and due to the fact that $A$ comes up on both premises 'at the same level', I find it natural to focus on $A$ and let it act as a pivot of sorts.
Start by proving $A\lor \neg A$ and perform $\lor$-$\text{Elim}$ on this disjunction.
In the first case just use $\leftrightarrow$-$\text{Elim}$ successively on the premises to get $C$.
In the second case (where one starts a subproof with the premise $\neg A$), use the premise $A\leftrightarrow B$ to get $\neg B$ and the premise $A\leftrightarrow (B \leftrightarrow C)$ to get $\neg(B\leftrightarrow C)$ (in both cases by negation introduction).
Now assume $\neg C$, prove $\neg B\leftrightarrow \neg C$ and from this last statement get $B\leftrightarrow C$.
At this point you can find a contradiction allowing you to conclude $C$ in the subproof whose premise is $\neg A$.
I leave the proof below.
• The software that comes with the book doesn't seem to deal well with big proofs. This was the best possible presentation I could get. One can note that in the justification of the steps there are some missing steps; the software itself did that. – Git Gud Sep 20 '14 at 22:18
This basically follows from the associativity of $\leftrightarrow$. But let's pretend that we didn't know that.
We consider two exhaustive, mutually exclusive cases.
Case 1: Suppose that $B$ is true. Then since $A \leftrightarrow B$ is true, we know that $A$ is true. Thus, since $A \leftrightarrow (B \leftrightarrow C)$ is true, we know that $B \leftrightarrow C$ is true. But then since $B$ is true, we know that $C$ is true, as desired.
Case 2: Suppose that $B$ is false. Then since $A \leftrightarrow B$ is true, we know that $A$ is false. Thus, since $A \leftrightarrow (B \leftrightarrow C)$ is true, we know that $B \leftrightarrow C$ is false. But then since $B$ is false, we know that $C$ is true (otherwise, if $C$ was actually false, then $B \leftrightarrow C$ would be true, a contradiction). So we're done!
• Thanks a lot! The only thing I still am having trouble with is how conclude $C$ from $\neg (B \leftrightarrow C)$ and $\neg B$ – Qqwy Sep 20 '14 at 20:33
• Use a proof by contradiction. Suppose instead that $C$ is actually false. Now since $\neg B$ is true, we know that $B$ is false. But then since $B$ and $C$ are both false, we know that $B \leftrightarrow C$ is true. But this contradicts the fact that $\neg(B \leftrightarrow C)$ is true. – Adriano Sep 20 '14 at 20:41
• It is unfortunate that I cannot mark two answers as accepted. This answer was enough for me to arrive at a conclusion. However, I did mark Git Gud's answer because other people who stumble across the same problem might have more use for the complete proof themselves. – Qqwy Sep 21 '14 at 0:31
Your proof by contradiction approach is fine; here is how you can complete your proof.

Assume $C$ is false. Then
$$A \leftrightarrow (B \leftrightarrow C) \;\Leftrightarrow\; A \leftrightarrow (B \leftrightarrow \text{false}) \;\Leftrightarrow\; B \leftrightarrow \lnot B \;\Leftrightarrow\; \text{false}$$
(the first step uses what we know about $C$; the second uses $A \leftrightarrow B$ on the left-hand side and simplifies the right-hand side; the last is plain logic),
which is a contradiction. Therefore $\;C\;$ is true.
I've tried to prove $$C$$ by contradiction, using a sub-proof which presumes $$\neg C$$, but although I can conclude all of the following in the subproof: $$\neg A$$, $$\neg B$$, $$\neg (B \leftrightarrow C)$$, I'm unable to find a contradiction this way.
This is a wee bit late, but if a contradiction may be derived from assuming $\neg C$, then you should also be able to derive $(B\leftrightarrow C)$ too, and from that you can obviously derive $A$, $B$, and $C$ in turn.
 1.  A ↔ (B ↔ C)         premise
 2.  A ↔ B               premise
 3.  A → (B ↔ C)         ↔E 1
 4.  (B ↔ C) → A         ↔E 1
 5.  A → B               ↔E 2
 6.  B → A               ↔E 2
 7.  | ¬C                assumption
 8.  | | B               assumption
 9.  | | A               →E 8, 6
10.  | | B ↔ C           →E 9, 3
11.  | | B → C           ↔E 10
12.  | | C               →E 8, 11
13.  | B → C             →I 8-12
14.  | | C               assumption
15.  | | ⊥               ¬E 14, 7
16.  | | B               ⊥E 15
17.  | C → B             →I 14-16
18.  | B ↔ C             ↔I 13, 17
19.  | A                 →E 18, 4
20.  | B                 →E 19, 5
21.  | C                 →E 20, 13
22.  | ⊥                 ¬E 21, 7
23.  ¬¬C                 ¬I 7-22
24.  C                   ¬¬E 23
I would use the algebraic machinery:
$A\leftrightarrow B\quad$ iff $\quad 1\oplus A\oplus B$
and get $(A\leftrightarrow (B\leftrightarrow C))\wedge (A\leftrightarrow B)$ iff
$$(1\oplus A\oplus 1\oplus B\oplus C)(1\oplus A\oplus B) = (A\oplus B\oplus C)(1\oplus A\oplus B) = A\oplus B\oplus C\oplus A\oplus AB\oplus AC\oplus AB\oplus B\oplus BC = C\oplus AC\oplus BC = C(1\oplus A\oplus B),$$
i.e. $C\wedge(A\leftrightarrow B)$, which implies $C$.
Given the note in the question asking for the basic first-order rules, this may not be as relevant, but it settles the question all the same.
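For completeness, a brute-force semantic check that the two premises entail $C$ (a sketch enumerating all eight valuations):

```python
from itertools import product

def iff(p, q):
    return p == q

# In every valuation where both premises hold, C must hold as well.
entailed = all(
    c
    for a, b, c in product([False, True], repeat=3)
    if iff(a, iff(b, c)) and iff(a, b)
)
print(entailed)  # True: the premises entail C
```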
https://marcofrasca.wordpress.com/tag/yang-mills-propagators/ | ## Do quarks grant confinement?
21/07/2014
In 2010 I went to Ghent in Belgium for a very nice conference on QCD. My contribution was accepted and I had the chance to describe my view about this matter. The result was this contribution to the proceedings. The content of this paper was really revolutionary at that time, as my view about Yang-Mills theory, the mass gap and the role of quarks was almost completely at odds with the rest of the community. So, I am deeply grateful to the Organizers for this opportunity. The main ideas I put forward were
• Yang-Mills theory has an infrared trivial fixed point. The theory is trivial exactly as the scalar field theory is.
• Due to this, the gluon propagator is well-represented by a sum of weighted Yukawa propagators (see the sketch after this list).
• The theory acquires a mass gap that is just the ground state of a tower of states with the spectrum of a harmonic oscillator.
• The reason why Yang-Mills theory is trivial and QCD is not in the infrared limit is the presence of quarks. Their existence moves the theory from being trivial to asymptotic safety.
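Schematically, the second and third points combine into a propagator of the form (a sketch; the precise weights $B_n$ and the mass scale are worked out in the paper cited below)

$$\Delta(p^2) \simeq \sum_{n=0}^{\infty} \frac{B_n}{p^2 + m_n^2}, \qquad m_n \propto (2n+1),$$

a weighted sum of Yukawa propagators whose pole masses follow the odd-integer spectrum of a harmonic oscillator, with the lowest mass $m_0$ providing the mass gap.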
These results, which I have published in respectable journals, became the reason for the rejection of most of my subsequent papers by several referees, notwithstanding that there were no serious reasons motivating it. But this is routine in our activity. Indeed, what annoyed me a lot was a referee's report claiming that my work was incorrect because the last of my statements was incorrect: quark existence is not a correct motivation to claim asymptotic safety, and so confinement, for QCD. Another offending point was the strong support my approach was giving to the idea of a decoupling solution, as was emerging from lattice computations on extended volumes. There was a widespread idea that the gluon propagator should go to zero in a pure Yang-Mills theory to grant confinement and that, if not so, an infrared non-trivial fixed point must exist.
Recently, my last point has been vindicated by a group that was instrumental in shaping the history of this corner of research in physics. I have seen a couple of papers on arxiv, this and this, strongly supporting my view. They are by Markus Höpfer, Christian Fischer and Reinhard Alkofer. These authors work in the conformal window; this means that, for them, the lightest quarks are massless and chiral symmetry is exact. Indeed, in their study quarks do not even acquire mass dynamically. But the question they answer is somewhat different: granted that the theory is infrared trivial (they do not state this explicitly, as this is not yet widely recognized, even if it is a "duck" indeed), how does the trivial infrared fixed point move as the number of quarks increases? The answer is in the following wonderful graph, with $N_f$ the number of quarks (flavours):
From this picture it is evident that there exists a critical number of quarks for which the theory becomes asymptotically safe and confining. So, quarks are critical to grant confinement, and Yang-Mills theory can happily be trivial. The authors took great care about all the involved approximations as they solved Dyson-Schwinger equations as usual (this has always been their main tool), with a proper truncation. From the picture it is seen that if the number of flavours is below a threshold, the theory is generally trivial, including the case of zero quarks. Otherwise, a non-trivial infrared fixed point is reached, granting confinement. Then, the gluon propagator is seen to move from a Yukawa form to a scaling form.
This result is really exciting and moves us a significant step forward toward the understanding of confinement. By my side, I am happy that another one of my ideas gets such a substantial confirmation.
Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory PoS FacesQCD:039,2010 arXiv: 1011.3643v3
Markus Hopfer, Christian S. Fischer, & Reinhard Alkofer (2014). Running coupling in the conformal window of large-Nf QCD arXiv arXiv: 1405.7031v1
Markus Hopfer, Christian S. Fischer, & Reinhard Alkofer (2014). Infrared behaviour of propagators and running coupling in the conformal window of QCD arXiv arXiv: 1405.7340v1
## Nailing down the Yang-Mills problem
22/02/2014
Millennium problems represent a major challenge for physicists and mathematicians. So far, the only one that has been solved was the Poincaré conjecture (now a theorem) by Grisha Perelman. For people working in strong interactions and quantum chromodynamics, the most interesting of such problems is the Yang-Mills mass gap and existence problem. The solution of this problem would imply a lot of consequences in physics, and one of the most important of these is a deep understanding of confinement of quarks inside hadrons. So far, there seems to be no solution to it, but things do not stand exactly in this way. A significant number of researchers has performed lattice computations to obtain the propagators of the theory in the full range of energy from infrared to ultraviolet, providing us a deep understanding of what is going on here (see the Yang-Mills article on Wikipedia). The propagators to be considered are those for the gluon and the ghost. There has been a significant effort from theoretical physicists in the last twenty years to answer this question. It is not so widely known in the community, but it should be, because the work of these people could be the starting point for a great innovation in physics. In these days, a paper on arxiv by Axel Maas gives a great account of the situation of these lattice computations (see here). Axel has been an important contributor to this research area, and the current understanding of the behavior of the Yang-Mills theory in two dimensions owes a lot to him. In this paper, Axel presents his computations on large volumes for Yang-Mills theory on the lattice in 2, 3 and 4 dimensions in the SU(2) case. These computations are generally performed in the Landau gauge (propagators are gauge dependent quantities), being the most favorable for them. In four dimensions the lattice is $(6\ fm)^4$, not the largest but surely enough for the aims of the paper. Of course, no surprise comes out with respect to what people have found starting from 2007. The scenario is well settled and is this:
1. The gluon propagator in 3 and 4 dimensions does not go to zero with momenta but is just finite. In 3 dimensions it has a maximum in the infrared, reaching its finite value at 0 from below. No such maximum is seen in 4 dimensions. In 2 dimensions the gluon propagator goes to zero with momenta.
2. The ghost propagator behaves like the one of a free massless particle as the momenta are lowered. This is the dominant behavior in 3 and 4 dimensions. In 2 dimensions the ghost propagator is enhanced and goes to infinity faster than in 3 and 4 dimensions.
3. The running coupling in 3 and 4 dimensions is seen to reach zero as the momenta go to zero, to reach a maximum at intermediate energies and to go asymptotically to 0 as momenta go to infinity (asymptotic freedom).
Here follows the figure for the gluon propagator
and for the running coupling
There is some concern among people about the running coupling. There is a recurring prejudice in Yang-Mills theory, without any theoretical or experimental support, that the theory should be non-trivial in the infrared. So, the running coupling should not go to zero as momenta are lowered but should reach a finite non-zero value. Of course, a pure Yang-Mills theory does not exist in nature and it is very difficult to get an understanding here. But, in 2 and 3 dimensions, the point is that the gluon propagator is very similar to a free one, the ghost propagator is certainly a free one, and then, using the duck test: If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck, the theory is really trivial also in the infrared limit. Currently, there are two people in the world that have recognized a duck here: Axel Weber (see here and here) using the renormalization group, and me (see here, here and here). Now, claiming to see a duck where all the others claim to see a dinosaur does not make you the most popular guy in the district. But so it goes.
These lattice computations are an important cornerstone in the search for the behavior of a Yang-Mills theory. Whoever aims to present to the World his petty theory for the solution of the Millennium prize must comply with these results showing that his theory is able to reproduce them. Otherwise what he has is just rubbish.
What appears in the sight is also the proof of existence of the theory. Having two trivial fixed points, the theory is Gaussian in these limits exactly as the scalar field theory. A Gaussian theory is the simplest example we know of a quantum field theory that is proven to exist. Could one recover the missing part between the two trivial fixed points as also happens for the scalar theory? In the end, it is possible that a Yang-Mills theory is just the vectorial counterpart of the well-known scalar field, the workhorse of all the scholars in quantum field theory.
Axel Maas (2014). Some more details of minimal-Landau-gauge Yang-Mills propagators arXiv arXiv: 1402.5050v1
Axel Weber (2012). Epsilon expansion for infrared Yang-Mills theory in Landau gauge Phys. Rev. D 85, 125005 arXiv: 1112.1157v2
Axel Weber (2012). The infrared fixed point of Landau gauge Yang-Mills theory arXiv arXiv: 1211.1473v1
Marco Frasca (2007). Infrared Gluon and Ghost Propagators Phys.Lett.B670:73-77,2008 arXiv: 0709.2042v6
Marco Frasca (2009). Mapping a Massless Scalar Field Theory on a Yang-Mills Theory: Classical
Case Mod. Phys. Lett. A 24, 2425-2432 (2009) arXiv: 0903.2357v4
Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory PoS FacesQCD:039,2010 arXiv: 1011.3643v3
## Ending and consequences of Terry Tao’s criticism
21/09/2013
Summer days are gone and I am back to work. I thought that Terry Tao's criticism of my work was finally settled and his intervention was a good one indeed. Of course, people just remember the criticism but not how the question evolved since then (it was 2009!). Terry's point was that the mapping given here between the scalar field solutions and the Yang-Mills field in the classical limit cannot be exact, as it is not guaranteed that they represent an extremum of the Yang-Mills functional. In this way the conclusions given in the paper are not granted, being based on this proof. The problem can be traced back to the gauge invariance of the Yang-Mills theory, which is explicitly broken in this case.
Terry Tao, in a private communication, asked me to provide a paper, to be published in a refereed journal, that fixed the problem. In such a case the question would have been settled one way or the other. E.g., even a result completely disproving the mapping would have been fine, disproving my published paper as well.
This matter is rather curious as, if you fix the gauge to be Lorenz (Landau), the mapping is exact. But the possible gauge choices are infinite and so there seem to be infinitely many cases where the mapping theorem appears to fail. The lucky case is that lattice computations are generally performed in the Landau gauge, and when you do quantum field theory a gauge must be chosen. So, is the mapping theorem really false, or can one change it to fix it all?
In order to clarify this situation, I decided to solve the classical equations of the Yang-Mills theory perturbatively in the strong coupling limit. Please note that today I am the only one in the world able to perform such a computation, having completely invented the techniques to do perturbation theory when a perturbation is taken to go to infinity (sorry, no AdS/CFT here, but I can surely support it). You will note that this is the opposite limit to standard perturbation theory, where one is looking for a parameter that goes to zero. I succeeded in doing so and put a paper on arxiv (see here) that was finally published the same year, 2009.
The theorem changed in this way:
The mapping exists in the asymptotic limit of the coupling running to infinity (leading order), with the notable exception of the Lorenz (Landau) gauge where it is exact.
So, I sighed with relief. The reason was that the conclusions of my paper on propagators were correct. But these hold asymptotically in the limit of a strong coupling. This is just what one needs in the infrared limit where Yang-Mills theory becomes strongly coupled and this is the main reason to solve it on the lattice. I cited my work on Tao’s site, Dispersive Wiki. I am a contributor to this site. Terry Tao declared the question definitively settled with the mapping theorem holding asymptotically (see here).
In the end, we were both right. Tao’s criticism was deeply helpful while my conclusions on the propagators were correct. Indeed, my gluon propagator agrees perfectly well, in the infrared limit, with the data from the largest lattice used in computations so far (see here)
As generally happens in these cases, the only thing that remains is the original criticism by a great mathematician (and Terry is one) that invalidated my work (see here for a question on Physics Stackexchange). As you can see from the tens of papers I have published since then, my work stands and stands very well. Maybe it would be time to ask the author.
Marco Frasca (2007). Infrared Gluon and Ghost Propagators Phys.Lett.B670:73-77,2008 arXiv: 0709.2042v6
Marco Frasca (2009). Mapping a Massless Scalar Field Theory on a Yang-Mills Theory: Classical
Case Mod. Phys. Lett. A 24, 2425-2432 (2009) arXiv: 0903.2357v4
Attilio Cucchieri, & Tereza Mendes (2007). What’s up with IR gluon and ghost propagators in Landau gauge? A puzzling answer from huge lattices PoS LAT2007:297,2007 arXiv: 0710.0412v1
## Kyoto, arXiv and all that
12/11/2012
Today, the Kyoto conference HCP2012 has started. There is already important news from LHCb that proves for the first time the existence of the decay $B_s\rightarrow\mu^+\mu^-$. They find close agreement with the Standard Model (see here). Another point scored by this model while we are still waiting for new physics. You can find the program with all the talks to download here. There is a lot of expectation for the update on the Higgs search: the great day is Thursday. Meantime, Jester is providing some rumors (see here on the twitter side) and they seem really interesting.
I have a couple of papers to bring to the attention of my readers from arXiv. Firstly, Yuan-Sen Ting and Bryan Gin-ge Chen provided a further improved redaction of Coleman's lectures (see here). These people are doing really deserving work and these lectures are fundamental reading for any serious scholar of quantum field theory.
Axel Weber posted a contribution to a conference (see here) summing up his main conclusions on the infrared behavior of the running coupling and the two-point functions for a Yang-Mills theory. He makes use of the renormalization group, and the inescapable conclusion is that if one must have a decoupling solution, as lattice computations demand, then the running coupling reaches an infrared trivial fixed point. This is in close agreement with my conclusions on this matter, and it is very pleasant to see them emerge from another approach.
Sidney Coleman (2011). Notes from Sidney Coleman’s Physics 253a arXiv arXiv: 1110.5013v4
Axel Weber (2012). The infrared fixed point of Landau gauge Yang-Mills theory arXiv arXiv: 1211.1473v1
## Confinement revisited
27/09/2012
Today a definitive, updated version of my paper on confinement appeared (see here). I wrote this paper last year after a question put to me by Owe Philipsen at Bari. The point is: given a decoupling solution for the gluon propagator in the Landau gauge, how does confinement come out? I would like to recall that a decoupling solution at small momenta for the gluon propagator is given by a function reaching a finite non-zero value at zero. All the fits carried out so far using lattice data show that a sum of a few Yukawa-like propagators gives an accurate representation of these data. For an example, see this paper. Sometimes this kind of propagator formula is dubbed the Stingl-Gribov formula, and it has the property of having a fourth-order polynomial in momenta in the denominator and a second-order one in the numerator. This was first postulated by Manfred Stingl in 1995 (see here). It is important to note that, given the presence of a fourth power of momenta, confinement is granted, as a linearly rising potential can be obtained in agreement with lattice evidence. This is also in agreement with the area law first put forward by Kenneth Wilson.
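To make the "sum of a few Yukawa-like propagators" statement concrete, here is a small sympy sketch (with illustrative parameters, not a fit to lattice data): a Stingl-like form, second order in momenta in the numerator and fourth order in the denominator, splits by partial fractions in $s=p^2$ into two Yukawa terms.

```python
# Partial-fraction split of a Stingl-like propagator; s stands for p^2.
import sympy as sp

s = sp.symbols('s')
M, m1, m2 = sp.symbols('M m1 m2', positive=True)

D = (s + M**2) / ((s + m1**2) * (s + m2**2))   # 2nd order over 4th order in p
print(sp.apart(D, s))
# -> (M**2 - m1**2)/((m2**2 - m1**2)*(s + m1**2))
#  + (m2**2 - M**2)/((m2**2 - m1**2)*(s + m2**2)),
# i.e. a sum of two Yukawa-like propagators.
```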
At that time I was convinced that a decoupling solution was enough, and so I pursued my analysis, arriving at the (wrong) conclusion, in a first version of the paper, that screening could be enough. So, the strong force would have to saturate and, maybe, moving to larger distances, such a saturation would also have been seen on the lattice. This is not true, as I know today, and I learned it from a beautiful paper by Vicente Vento, Pedro González and Vincent Mathieu. They set out to solve the Dyson-Schwinger equations in the deep infrared to obtain the interquark potential. The decoupling solution appears at the one-gluon exchange level and, with this approximation, they prove that the potential they get is just a screening one, in close agreement with mine and with any other decoupling solution given in a closed analytical form. So, the decoupling solution does not seem to agree with lattice evidence, which shows a linearly rising, perfectly confining potential, in agreement with what Wilson pointed out in his classical work of 1974. My initial analysis of this problem was incorrect, and Owe Philipsen was right to point out this difficulty in my approach.
This question never abandoned my mind and, with the opportunity to go to Montpellier this year to give a talk (see here), I presented for the first time a solution to this problem. The point is that one needs a fourth-order term in the denominator of the propagator. This can happen if we are able to get higher-order corrections to the simplest one-gluon exchange approximation (see here). In my approach I can get loop corrections to the gluon propagator. The next-to-leading one is a two-loop term that gives rise to the right term in the denominator of the propagator. Besides, I am able to get the renormalization constant of the field, and so I also get a running mass and coupling. I gave an idea of the way this computation should be performed at Montpellier, but in these days I have completed it.
The result has been a shocking one. Not only does one get the linearly rising potential, but the string tension is proportional to the one obtained in d = 2+1 by V. Parameswaran Nair, Dimitra Karabali and Alexandr Yelnikov (see here)! This means that, apart from numerical factors and accounting for physical dimensions, the equation for the string tension in 3 and 4 dimensions is the same. It is worth noting that the result given by Nair, Karabali and Yelnikov is in close agreement with lattice data. In 3 dimensions the string tension, in the appropriate units, is a pure number and can be computed explicitly on the lattice. So, our conclusions support each other.
These results are really important, as they give strong support to the ideas emerging in these years about the behavior of the propagators of a Yang-Mills theory at low energies. We are even closer to a clear understanding of confinement and of the way mass emerges at a macroscopic level. It is important to point out that the string tension in a Yang-Mills theory is one of the parameters that any serious theoretical approach, claiming to go beyond a merely phenomenological one, should be able to catch. We can say that the challenge is open.
Marco Frasca (2011). Beyond one-gluon exchange in the infrared limit of Yang-Mills theory arXiv arXiv: 1110.2297v4
Kenneth G. Wilson (1974). Confinement of quarks Phys. Rev. D 10, 2445–2459 (1974) DOI: 10.1103/PhysRevD.10.2445
Attilio Cucchieri, David Dudal, Tereza Mendes, & Nele Vandersickel (2011). Modeling the Gluon Propagator in Landau Gauge: Lattice Estimates of Pole Masses and Dimension-Two Condensates arXiv arXiv: 1111.2327v1
M. Stingl (1995). A Systematic Extended Iterative Solution for QCD Z.Phys. A353 (1996) 423-445 arXiv: hep-th/9502157v3
P. Gonzalez, V. Mathieu, & V. Vento (2011). Heavy meson interquark potential Physical Review D, 84, 114008 arXiv: 1108.2347v2
Marco Frasca (2012). Low energy limit of QCD and the emerging of confinement arXiv arXiv: 1208.3756v2
Dimitra Karabali, V. P. Nair, & Alexandr Yelnikov (2009). The Hamiltonian Approach to Yang-Mills (2+1): An Expansion Scheme and Corrections to String Tension Nucl.Phys.B824:387-414,2010 arXiv: 0906.0783v1
## Running coupling and Yang-Mills theory
30/07/2012
Forefront research, during its natural evolution, produces potential cornerstones that, at the end of the game, can prove to be plainly wrong. When one of these cornerstones happens to form, even without any sound confirmation at hand, it can make the life of researchers really hard. It can be hard to get papers published when an opposite thesis is supported, and all this without any certainty of the cornerstone being true. Ask all the people who, at the beginning, proposed the now dubbed "decoupling solution" for the propagators of Yang-Mills theory in the Landau gauge, and all of them will tell you how difficult it was to get their papers through the peer-review system. The solution that was then generally believed to be the right one, the now dubbed "scaling solution", had convinced a large part of the community, again without any strong support from experiment, from the lattice, or from a rigorous mathematical derivation. This kind of behavior is quite old in the scientific community and has never changed since the very beginning of science. Generally, if one is lucky, things go straight and scientific truth is rapidly acquired; otherwise this behavior produces delays and impediments for respectable researchers, and a serious difficulty in getting at the solution of a fundamental question.
Maybe the most famous case of this kind of behavior was the discovery by Tsung-Dao Lee and Chen-Ning Yang of parity violation in weak interactions in 1956. At that time, it was generally believed that parity should be an untouchable principle of physics. Those who believed so were proven wrong shortly after Lee and Yang's paper. For the propagators of a Yang-Mills theory in the Landau gauge, recent lattice computations on huge volumes showed that the scaling solution never appears at dimensions greater than two. Rather, the right scenario seems to be provided by the decoupling solution. In this scenario, the gluon propagator is a Yukawa-like propagator in the deep infrared, or a sum of them. There is a very compelling reason to have such propagators in a strongly coupled regime: the low-energy limit recovers a Nambu-Jona-Lasinio model, which provides a very fine description of strong interactions at lower energies.
From a physical standpoint, what does a Yukawa propagator, or a sum of Yukawa propagators, mean? It has a dramatic consequence for the running coupling: the theory is just trivial in the infrared limit. The decoupling solution says exactly this, as emerged from lattice computations (see here).
What really matters here is the way one defines the running coupling in the deep infrared. This definition must be consistent. Indeed, one can think of a different definition (see here), working things out using instantons, and one sees the same behavior.
One can see that, independently of the definition, the coupling runs to zero in the deep infrared, marking the property of a trivial theory. This idea currently appears difficult to digest for the community, as a conventional wisdom has formed that Yang-Mills theory should have a non-trivial fixed point in the infrared limit. There is no evidence whatsoever for this, and Nature does not provide any example of a pure Yang-Mills theory, which always appears interacting with fermions instead. Lattice data say the contrary, as we have seen, but a general belief is enough to make life hard for researchers trying to pursue such a view. It is interesting to note that some theoretical frameworks need a non-trivial infrared fixed point for Yang-Mills theory, otherwise they crumble down.
But from a theoretical standpoint, what is the right approach to derive the behavior of the running coupling for a Yang-Mills theory? The answer is quite straightforward: any consistent theoretical framework for Yang-Mills theory should be able to produce the beta function in the deep infrared. From the beta function one immediately has the right behavior of the running coupling. But in order to get it, one should be able to work out the Callan-Symanzik equation for the gluon propagator. So far, this is explicitly given in my papers (see here and refs. therein), as I am able to obtain the behavior of the mass gap as a function of the coupling. The relation between the mass gap and the coupling produces the scaling of the beta function in the Callan-Symanzik equation. Any serious attempt to understand Yang-Mills theory in the low-energy limit should provide this connection. Otherwise it is not mathematics but just heuristics, with a lot of parameters to be fixed.
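To spell out the last point with one worked line (my notation; a schematic statement, not a full derivation): if the running coupling behaves as $\lambda(p)\propto p^4$ in the deep infrared, as the instanton-detector definition displays, then the beta function read off the Callan-Symanzik equation near $\lambda=0$ is
$$\beta(\lambda)=p\,\frac{\partial\lambda}{\partial p}=4\lambda,$$
which vanishes at $\lambda=0$ with positive slope: $\lambda=0$ is an infrared-trivial fixed point reached as $p\to 0$.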
The final consideration after this discussion is that conventional wisdom in science should always be challenged when no sound foundations are given for it to hold. In the review process, as an editorial practice, referees should be asked to check this before killing good works on shaky grounds.
I. L. Bogolubsky, E. -M. Ilgenfritz, M. Müller-Preussker, & A. Sternbeck (2009). Lattice gluodynamics computation of Landau-gauge Green’s functions in the deep infrared Phys.Lett.B676:69-73,2009 arXiv: 0901.0736v3
Ph. Boucaud, F. De Soto, A. Le Yaouanc, J. P. Leroy, J. Micheli, H. Moutarde, O. Pène, & J. Rodríguez-Quintero (2002). The strong coupling constant at small momentum as an instanton detector JHEP 0304:005,2003 arXiv: hep-ph/0212192v1
Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory PoS FacesQCD:039,2010 arXiv: 1011.3643v3
## Millenium prize on Yang-Mills theory: The situation in physics
05/06/2012
Yang-Mills theory, with the related question of the mass gap, appears today an unsolved problem and, from a mathematical standpoint, the community has not recognized anybody as entitled to claim the prize so far. But in physics the answer to this question has made enormous progress, mostly by the use of lattice computations and, quite recently, with the support of theoretical analysis. Contrary to common wisdom, the most fruitful attack on this problem uses Green functions. The reason why this was not a greatly appreciated approach relies on the fact that Green functions are gauge dependent. Anyhow, they contain physical information that is gauge independent, and this is exactly what we are looking for: the mass gap.
In order to arrive at such a conclusion a lot of work has been needed since the '80s, and the main reason was that, at the very start of these studies, computational resources were not enough to reach the deep infrared region. So, initially, the scenario people supported was not the right one, and some conviction arose that the gluon propagator could not say much about the question of the mass gap. There was no Källén-Lehmann representation to help; rather, the propagator seemed not to behave as a massive one, and theoretical analysis pointed to a gluon propagator going to zero at lower momenta. This is the now dubbed scaling solution.
[Figure: Running coupling from the lattice]
In the first years of this decade things changed dramatically, due both to the increase of computational power and to a better theoretical understanding. As pointed out by Axel Weber (see here and here), three papers unveiled what is now called the decoupling solution (see here, here and here). The first two papers solved Dyson-Schwinger equations by numerical methods, while the latter is a theoretical paper solving Yang-Mills equations. The decoupling solution is in agreement with lattice results that started to come out in those years with more powerful computational resources. At larger lattices the gluon propagator reaches a finite non-zero value, the ghost propagator is that of a free massless particle, and the running coupling bends toward zero, aiming at a trivial infrared fixed point (see here, here and here). Axel Weber, in his work, shows that the decoupling solution is the only one stable with respect to a renormalization-group flow.
[Figure: Gluon propagators for SU(2) from the lattice]
These are accepted facts in the physics community, so that several papers are now coming out using them. The one I have seen today is by Kenji Fukushima and Kouji Kashiwa (see here). In this case, taking for granted that the decoupling solution is the right one, these authors study the data at non-zero temperature and discuss the Polyakov loop for this case. Fukushima is very well known for his works on QCD at finite temperature.
We can claim, without any possible refutation, that in physics the behavior of a pure Yang-Mills theory is very clear now. Of course, we may miss much of the rigor that is needed in mathematics, and this is the reason why no proclamation is heard yet.
Axel Weber (2011). Epsilon expansion for infrared Yang-Mills theory in Landau gauge arXiv arXiv: 1112.1157v2
A. C. Aguilar, & A. A. Natale (2004). A dynamical gluon mass solution in a coupled system of the Schwinger-Dyson equations JHEP0408:057,2004 arXiv: hep-ph/0408254v2
Ph. Boucaud, Th. Brüntjen, J. P. Leroy, A. Le Yaouanc, A. Y. Lokhov, J. Micheli, O. Pène, & J. Rodríguez-Quintero (2006). Is the QCD ghost dressing function finite at zero momentum ? JHEP 0606:001,2006 arXiv: hep-ph/0604056v1
Marco Frasca (2007). Infrared Gluon and Ghost Propagators Phys.Lett.B670:73-77,2008 arXiv: 0709.2042v6
Attilio Cucchieri, & Tereza Mendes (2007). What’s up with IR gluon and ghost propagators in Landau gauge? A puzzling answer from huge lattices PoS LAT2007:297,2007 arXiv: 0710.0412v1
I. L. Bogolubsky, E. -M. Ilgenfritz, M. Müller-Preussker, & A. Sternbeck (2007). The Landau gauge gluon and ghost propagators in 4D SU(3) gluodynamics in large lattice volumes PoSLAT2007:290,2007 arXiv: 0710.1968v2
O. Oliveira, P. J. Silva, E. -M. Ilgenfritz, & A. Sternbeck (2007). The gluon propagator from large asymmetric lattices PoSLAT2007:323,2007 arXiv: 0710.1424v1
Kenji Fukushima, & Kouji Kashiwa (2012). Polyakov loop and QCD thermodynamics from the gluon and ghost propagators arXiv arXiv: 1206.0685v1 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8640945553779602, "perplexity": 785.2214101003682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094491.62/warc/CC-MAIN-20150627031814-00199-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/writing-net-ionic-equations.593595/ | # Writing Net Ionic Equations
1. Apr 4, 2012
### chick06
1. The problem statement, all variables and given/known data
When writing chemical formulas in Mastering, indicate the physical states using the abbreviation (s), (l), or (g) for solid, liquid, or gas, respectively. Use (aq) for aqueous solution.
Write the total ionic equation (also known as the complete ionic equation) for the reaction of lithium carbonate with hydrochloric acid. Be sure to include the charges on the ionic species.
2. The attempt at a solution
This is what I came up with: Li2CO3(aq) + 2HCl(aq) → H2CO3(l) + 2LiCl(aq)
2. Apr 5, 2012
### Staff: Mentor
What you wrote is not an ionic equation.
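For reference, the standard bookkeeping (a textbook completion, not part of the original exchange): lithium carbonate, HCl, and LiCl are strong electrolytes, and the carbonic acid is conventionally replaced by its decomposition products, so the total ionic equation is

2Li+(aq) + CO3^2-(aq) + 2H+(aq) + 2Cl-(aq) → H2O(l) + CO2(g) + 2Li+(aq) + 2Cl-(aq)

and cancelling the spectator ions (Li+ and Cl-) leaves the net ionic equation CO3^2-(aq) + 2H+(aq) → H2O(l) + CO2(g).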
https://hal-univ-paris-dauphine.archives-ouvertes.fr/CEREMADE-DAUPHINE/hal-01438314v1 | # Computation of Gaussian orthant probabilities in high dimension
Abstract : We study the computation of Gaussian orthant probabilities, i.e. the probability that a Gaussian variable falls inside a quadrant. The Geweke–Hajivassiliou–Keane (GHK) algorithm (Geweke, Comput Sci Stat 23:571–578 1991, Keane, Simulation estimation for panel data models with limited dependent variables, 1993, Hajivassiliou, J Econom 72:85–134, 1996, Genz, J Comput Graph Stat 1:141–149, 1992) is currently used for integrals of dimension greater than 10. In this paper, we show that for Markovian covariances GHK can be interpreted as the estimator of the normalizing constant of a state-space model using sequential importance sampling. We show that, for an AR(1) covariance, the variance of the GHK estimator, properly normalized, diverges exponentially fast with the dimension. As an improvement we propose using a particle filter. We then generalize this idea to arbitrary covariance matrices using Sequential Monte Carlo with properly tailored MCMC moves. We show empirically that this can lead to drastic improvements on currently used algorithms. We also extend the framework to orthants of mixtures of Gaussians (Student, Cauchy, etc.), and to the simulation of truncated Gaussians.
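To make the GHK construction concrete, the following is a minimal illustrative sketch (not the authors' code) of the estimator for $P(X_1>0,\dots,X_d>0)$ with $X\sim N(0,\Sigma)$; viewed as sequential importance sampling, the product of the per-coordinate truncation masses is exactly the importance weight whose variance the paper analyzes.

```python
# A minimal GHK estimator for P(X_1 > 0, ..., X_d > 0) with X ~ N(0, Sigma).
import numpy as np
from scipy.stats import norm

def ghk_orthant(Sigma, n_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)         # X = L @ Z, Z standard normal
    d = Sigma.shape[0]
    Z = np.zeros((n_samples, d))
    logw = np.zeros(n_samples)            # log importance weights
    for i in range(d):
        # X_i = sum_j L[i, j] Z_j > 0   <=>   Z_i > lower
        lower = -(Z[:, :i] @ L[i, :i]) / L[i, i]
        p = norm.sf(lower)                # mass of the allowed region
        logw += np.log(p)
        # inverse-CDF draw from the standard normal truncated to (lower, inf)
        u = rng.uniform(size=n_samples)
        Z[:, i] = norm.ppf(norm.cdf(lower) + u * p)
    return np.exp(logw).mean()

# With Sigma = I the weights are deterministic: the estimate is exactly 2^-d.
print(ghk_orthant(np.eye(5)))             # 0.03125
# Equicorrelation 1/2 is a classic test case: the true value is 1/(d+1).
Sigma = 0.5 * np.eye(5) + 0.5 * np.ones((5, 5))
print(ghk_orthant(Sigma))                 # ~ 1/6
```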
https://hal.archives-ouvertes.fr/hal-01438314
### Citation
James Ridgway. Computation of Gaussian orthant probabilities in high dimension. Statistics and Computing, Springer Verlag (Germany), 2016, 26 (4), 〈10.1007/s11222-015-9578-1〉. 〈hal-01438314〉
https://www.beatthegmat.com/john-took-a-test-that-had-60-questions-numbered-from-1-to-60-how-many-of-the-questions-did-he-answer-correctly-t326997.html?sid=1ec8c871c06c42a9a62bf6a725163a90 | ## John took a test that had 60 questions numbered from 1 to 60. How many of the questions did he answer correctly?
### John took a test that had 60 questions numbered from 1 to 60. How many of the questions did he answer correctly?
by VJesus12 » Sun Oct 03, 2021 10:11 am
John took a test that had 60 questions numbered from 1 to 60. How many of the questions did he answer correctly?
(1) The number of questions he answered correctly in the first half of the test was 7 more than the number he answered correctly in the second half of the test.
(2) He answered 5/6 of the odd-numbered questions correctly and 4/5 of the even-numbered questions correctly. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8178879022598267, "perplexity": 841.6800532477048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00243.warc.gz"} |
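A quick worked solution, for completeness: statement (2) alone fixes the total, since there are 30 odd-numbered and 30 even-numbered questions, giving (5/6)(30) + (4/5)(30) = 25 + 24 = 49 correct answers. Statement (1) alone is consistent with many totals (for example 17 + 10 = 27 or 20 + 13 = 33), so it is insufficient. The answer is B.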
http://mathoverflow.net/questions/32714/what-is-the-right-way-to-define-the-nerve-of-an-unbiased-monoidal-category | # What is the right way to define the nerve of an unbiased monoidal category?
I've been toying around with unbiased composition in higher categorical structures on and off for a while now. In particular, I've been playing around with unbiased monoidal 2-categories. One motivation for this, as I discussed in my last question on the matter, is that unbiased tensor products and compositions often seem to be better descriptions of what goes on "in nature" than biased ones.
Another motivation was the hope that such gadgets would provide a cleaner notion of nerve than what one gets in the biased setting, where higher associators are floating around everywhere. However, directly transcribing the ordinary notion of nerve seems to work poorly, even for unbiased monoidal categories, for two reasons. First, in each dimension, one is forced to consider products of fixed numbers of objects, which is antithetical to the unbiased philosophy. Secondly, degeneracies are difficult to write down because one has, in place of a unit object, a zero-fold tensor product, which requires a bit of care to handle. A more natural "nerve" for an unbiased monoidal category might involve having simplices of dimension $n$ correspond to nested tensor products of depth $n$. I can't quite get such a definition to work, although I'm pretty sure that something like it should be possible.
Is there a construction of the nerve of an unbiased monoidal category that is natural to write down? (The definition of unbiased monoidal category can be found in section 3.1 of Leinster's Higher Operads, Higher Categories.) It strikes me that the problem might be simplicial sets themselves; are there some more exotic combinatorial objects that are better suited to capturing unbiased compositions? I'm aware of the existence of things like opetopes, but I have no idea if they're relevant to this particular issue.
EDIT:
I'd like to clarify why I'm interested in nerves (and consequently, why I'd really prefer that my nerve be a simplicial set instead of something more exotic, unless I can be convinced that more exotic objects can be easily adapted to my needs).
My poking around in all of this was inspired by the preprint by Etingof, Nikshych, and Ostrik, "Fusion categories and homotopy theory." The main results of this paper are proved by formulating their questions in terms of classical obstruction theory on the nerves of certain 3-groupoids. The obstruction theory itself can be justified using elementary fiddling with simplicial sets, as the reference Gregory Arone provided to my earlier question on obstruction theory reveals. However, I wanted to understand the category theory side of the equation better, which led me to try to formulate things in terms of unbiased monoidal 2-categories.
So ultimately, the goal is to have a definition of the nerve to which I can apply my favorite classical obstruction theory techniques. While some people appear to have studied obstruction theory in more general settings, it's not clear to me how to squeeze out the appropriate concrete computational gadgets (e.g., the cohomology groups $H^n(X; \pi_{n - 1}(Y))$) from the relevant abstract nonsense. Of course, if somebody could elucidate how that works, that would be wonderful, although perhaps that should be the subject of another question...
Monoidal categories are a special kind of (coloured) operad. It is this viewpoint in which the unbiased version is most natural. In light of this, it would be most natural to consider the nerve of the associated operad. So, you're on to something in suspecting that simplicial sets are not the most natural choice for capturing the nerve of a monoidal category. If your monoidal category is symmetric, then its nerve can be taken as a dendroidal set (otherwise, it should be taken as a planar dendroidal set). Dendroidal sets were invented (discovered?) by Ieke Moerdijk and Ittay Weiss. Roughly speaking, dendroidal sets are to operads as simplicial sets are to categories. Like simplicial sets, they are presheaves on a certain test-category. Simplicial sets are $Set^{\Delta^{op}}$ where $\Delta$ consists of finite linear orders. Dendroidal sets are $Set^{\Omega^{op}}$, where $\Omega$ consists of finite rooted trees. One can even extend Joyal's model structure for quasicategories to dendroidal sets, and start talking about infinity-operads.
In fact, there is some very general machinery developed by Mark Weber which automatically produces the "correct" combinatorial objects to use to take a nerve of a certain type of algebraic object (given as an algebra for a monad). See: http://golem.ph.utexas.edu/category/2008/01/mark_weber_on_nerves_of_catego.html . Basically, given a monad $T$, this machinery produces a category $\Theta(T)$, consisting of certain "linear-like" free $T$-algebras, which makes $Set^{\Theta(T)^{op}}$ somehow the canonical choice for taking nerves of $T$-algebras. So, you COULD plug the free-unbiased-monoidal-category monad into this machinery, see what you get out, and use THIS to take the nerve. However, I think this is a bit of overkill. Using dendroidal sets should do just fine. However, it is worth mentioning that Cisinski has extended Joyal's model structure in this setting as well.
http://www.techspot.com/community/topics/how-can-i-know-are-there-any-files-missed-for-winxp-system.34209/ | # How can I know are there any files missed for winXP system
By yonnie · 4 replies
Sep 30, 2005
1. I always get an unexpected error from WinXP telling me that some of the WinXP system files are missing (sorry that I haven't written down which files). Every time I get the same notice, Windows asks me to insert the original WinXP CD and press "retry". After retrying, everything seems OK, but when I use /sfc again later, Windows still asks me for the WinXP disc. Can anyone please tell me how I can scan or search for any missing WinXP system files?
2. ### BillGatesTS RookiePosts: 88
What is this /sfc you talk about? If you bought the Windows XP CD, all the system files should be contained within the system32 folder on the XP CD. If you got the disc from a good manufacturer, it should be the same. If you only have a recovery CD and not a full install, that is very bad, because it may not contain everything you need.
So, if it asks you for a file, write down the name and location of the file, then get your XP CD and drag and drop that file to the location on your computer that Windows was looking in for it in the first place.
ex.
Error:
C:\WINDOWS\system32\cmd.exe could not be found.
Then you goto d:\winxpsetup\windows\system32\cmd.exe----->then drag it to C:\WINDOWS\system32
3. ### BillGatesTS RookiePosts: 88
And if you want to search for it, open up two windows and compare the system32 folder on your hard drive with the one on the CD (D: or whatever your CD/DVD drive is).
If something that is on the CD is missing from your hard drive, then copy it to the correct folder on your hard drive.
I did this for a repair install of Win98/2000: the setup had supposedly finished, but it kept saying "insert this" and "insert that", so I decided to just copy all the files to the hard drive, and I got it to work.
4. ### yonnieTS RookieTopic Starter
/sfc is a command used from "Run",
which makes Windows automatically search for any missing protected system files on your system
(I got this command from the official Windows support site)
thx a lot, i got it, thank you very much for your advice ^^~
5. ### RealBlackStuffTS RookiePosts: 6,503
just in case: the proper command (in the Run-box) is: sfc /scannow
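For anyone finding this later, the common System File Checker switches on Windows XP look like this (run from Start → Run or a command prompt; the available switches vary by Windows version):

```
sfc /scannow    scan all protected system files immediately
sfc /scanonce   scan all protected system files once, at the next boot
```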
https://www.monsterthinks.in/2022/11/ncert-solutions-for-class-10-maths_20.html
# NCERT Solutions For Class 10 Maths Chapter 6 Triangles Ex 6.4
Get free NCERT Solutions for Class 10 Maths Chapter 6 Ex 6.4. Triangles Class 10 Maths NCERT Solutions are extremely helpful while doing your homework. Exercise 6.4 Class 10 Maths NCERT Solutions were prepared by experienced Garry Academy teachers. Detailed answers to all the questions in Chapter 6 Maths Class 10 Triangles Exercise 6.4 are provided from the NCERT textbook.
### NCERT Solutions For Class 10 Maths Chapter 6 Triangles Ex 6.4
NCERT Solutions for Class 10 Maths Chapter 6 Triangles Ex 6.4 are part of Class 10 Maths NCERT Solutions. Here we have given NCERT Solutions for Class 10 Maths Chapter 6 Triangles Exercise 6.4
| Field | Detail |
| --- | --- |
| Board | CBSE |
| Textbook | NCERT |
| Class | Class 10 |
| Subject | Maths |
| Chapter | Chapter 6 |
| Chapter Name | Triangles |
| Exercise | Ex 6.4 |
| Number of Questions Solved | 8 |
| Category | NCERT Solutions |
### NCERT Solutions For Class 10 Maths Chapter 6 Ex 6.4
Ex 6.4 Class 10 Maths Question 1.
Let ∆ABC ~ ∆DEF and their areas be, respectively, 64 cm2 and 121 cm2. If EF = 15.4 cm, find BC.
Solution:
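Briefly: since ∆ABC ~ ∆DEF, the ratio of areas equals the square of the ratio of corresponding sides, so 64/121 = BC²/EF², giving BC/EF = 8/11 and BC = (8/11) × 15.4 = 11.2 cm.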
Ex 6.4 Class 10 Maths Question 2.
Diagonals of a trapezium ABCD with AB || DC intersect each other at the point O. If AB = 2 CD, find the ratio of the areas of triangles AOB and COD.
Solution:
Ex 6.4 Class 10 Maths Question 3.
In the given figure, ABC and DBC are two triangles on the same base BC. If AD intersects BC at O, show that ar(ABC)/ar(DBC) = AO/DO.
Solution:
Ex 6.4 Class 10 Maths Question 4.
If the areas of two similar triangles are equal, prove that they are congruent.
Solution:
Ex 6.4 Class 10 Maths Question 5.
D, E and F are respectively the mid-points of sides AB, BC and CA of ∆ABC. Find the ratio of the areas of ∆DEF and ∆ABC.
Solution:
Ex 6.4 Class 10 Maths Question 6.
Prove that the ratio of the areas of two similar triangles is equal to the square of the ratio of their corresponding medians.
Solution:
Ex 6.4 Class 10 Maths Question 7.
Prove that the area of an equilateral triangle described on one side of a square is equal to half the area of the equilateral triangle described on one of its diagonals.
Solution:
Ex 6.4 Class 10 Maths Question 8.
Tick the correct answer and justify
(i) ABC and BDE are two equilateral triangles such that D is the mid-point of BC. Ratio of the areas of triangles ABC and BDE is
(a) 2 :1
(b) 1:2
(c) 4 :1
(d) 1:4
(ii) Sides of two similar triangles are in the ratio 4 : 9. Areas of these triangles are in the ratio
(a) 2 : 3
(b) 4 : 9
(c) 81 : 16
(d) 16 : 81
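Briefly: in (i), ∆BDE is equilateral with side BD = BC/2, so ar(ABC) : ar(BDE) = 2² : 1² = 4 : 1, option (c); in (ii), the areas of similar triangles are in the ratio of the squares of corresponding sides, 4² : 9² = 16 : 81, option (d).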
We hope the NCERT Solutions for Class 10 Maths Chapter 6 Triangles Ex 6.4, help you. If you have any queries regarding NCERT Solutions for Class 10 Maths Chapter 6 Triangles Exercise 6.4, drop a comment below and we will get back to you at the earliest. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9086809158325195, "perplexity": 1782.4597459770366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00606.warc.gz"} |
http://farside.ph.utexas.edu/teaching/315/Waveshtml/node43.html | Next: Multi-Dimensional Waves Up: Traveling Waves Previous: Wave Propagation in Inhomogeneous
# Exercises
1. Write the traveling wave as a superposition of two standing waves. Write the standing wave as a superposition of two traveling waves propagating in opposite directions. Show that the following superposition of traveling waves,
can be written as the following superposition of standing waves,
2. Show that the solution of the wave equation,
$$\frac{\partial^2\psi}{\partial t^2} = c^2\,\frac{\partial^2\psi}{\partial x^2},$$
subject to the initial conditions
$$\psi(x,0) = F(x),\qquad \frac{\partial\psi}{\partial t}(x,0) = G(x),$$
for $-\infty < x < \infty$, can be written
$$\psi(x,t) = \frac{1}{2}\,[F(x-c\,t) + F(x+c\,t)] + \frac{1}{2\,c}\int_{x-c\,t}^{x+c\,t} G(x')\,dx'.$$
This is known as the d'Alembert solution.
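(The d'Alembert form above is easy to machine-check; the following sympy snippet is an illustration added here, not part of the original problem set.)

```python
# Verify that the d'Alembert expression satisfies the wave equation.
import sympy as sp

x, t, c, s = sp.symbols('x t c s', real=True)
F, G = sp.Function('F'), sp.Function('G')

psi = (F(x - c*t) + F(x + c*t)) / 2 \
      + sp.Integral(G(s), (s, x - c*t, x + c*t)) / (2*c)

residual = sp.diff(psi, t, 2) - c**2 * sp.diff(psi, x, 2)
print(sp.simplify(residual.doit()))   # -> 0
```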
3. Demonstrate that, for a transverse traveling wave propagating on a stretched string,
$$\langle\mathcal{I}\rangle = \langle\mathcal{E}\rangle\,v,$$
where $\langle\mathcal{I}\rangle$ is the mean energy flux along the string due to the wave, $\langle\mathcal{E}\rangle$ is the mean wave energy per unit length, and $v$ is the phase velocity of the wave.
4. A transmission line of characteristic impedance $Z_0$ occupies the region $x<0$, and is terminated at $x=0$. Suppose that the current carried by the line takes the form
$$I(x,t) = I_i\,\cos(\omega\,t - k\,x) + I_r\,\cos(\omega\,t + k\,x)$$
for $x<0$, where $I_i$ is the amplitude of the incident signal, and $I_r$ the amplitude of the signal reflected at the end of the line. Let the end of the line be open circuited, such that the line is effectively terminated by an infinite resistance. Find the relationship between $I_r$ and $I_i$. Show that the current and voltage oscillate $\pi/2$ radians out of phase everywhere along the line. Demonstrate that there is zero net flux of electromagnetic energy along the line.
5. Suppose that the transmission line in the previous exercise is short circuited, such that the line is effectively terminated by a negligible resistance. Find the relationship between $I_r$ and $I_i$. Show that the current and voltage oscillate $\pi/2$ radians out of phase everywhere along the line. Demonstrate that there is zero net flux of electromagnetic energy along the line.
6. A lossy transmission line has a resistance per unit length $R$, in addition to an inductance per unit length $\mathcal{L}$, and a capacitance per unit length $\mathcal{C}$. The resistance can be considered to be in series with the inductance. Demonstrate that the Telegrapher's equations generalize to
$$\frac{\partial V}{\partial x} = -\mathcal{L}\,\frac{\partial I}{\partial t} - R\,I,\qquad \frac{\partial I}{\partial x} = -\mathcal{C}\,\frac{\partial V}{\partial t},$$
where $V(x,t)$ and $I(x,t)$ are the voltage and current along the line. Derive an energy conservation equation of the form
$$\frac{\partial \mathcal{E}}{\partial t} + \frac{\partial \mathcal{P}}{\partial x} = -R\,I^2,$$
where $\mathcal{E}$ is the energy per unit length along the line, and $\mathcal{P}$ the energy flux. Give expressions for $\mathcal{E}$ and $\mathcal{P}$. What does the right-hand side of the previous equation represent? Show that the current obeys the wave-diffusion equation
$$\frac{\partial^2 I}{\partial x^2} = \mathcal{L}\,\mathcal{C}\,\frac{\partial^2 I}{\partial t^2} + R\,\mathcal{C}\,\frac{\partial I}{\partial t}.$$
Consider the low resistance, high frequency, limit $R \ll \mathcal{L}\,\omega$. Demonstrate that a signal propagating down the line varies as
$$I(x,t) \simeq I_0\,e^{-x/\delta}\,\cos[\omega\,(t - x/v)],$$
where $v = 1/\sqrt{\mathcal{L}\,\mathcal{C}}$ is the phase velocity, $Z_0 = \sqrt{\mathcal{L}/\mathcal{C}}$ the characteristic impedance, $\delta = 2\,Z_0/R$ the decay length, and $I_0$ the amplitude at $x=0$. Show that $\delta \gg v/\omega$: that is, the decay length of the signal is much longer than its wavelength. Estimate the maximum useful length of a low resistance, high frequency, lossy transmission line.
7. Suppose that a transmission line consisting of two uniform parallel conducting strips of width $w$ and perpendicular distance apart $\delta$, where $\delta \ll w$, is terminated by a strip of material of uniform resistance $\sqrt{\mu_0/\epsilon_0} \simeq 377\,\Omega$ per square. Such material is known as spacecloth. Demonstrate that a signal sent down the line is completely absorbed, with no reflection, by the spacecloth. Incidentally, the resistance of a uniform strip of material is proportional to its length, and inversely proportional to its cross-sectional area.
8. At normal incidence, the mean radiant power from the Sun illuminating one square meter of the Earth's surface is $1.35\,$kW. Show that the amplitude of the electric component of solar electromagnetic radiation at the Earth's surface is $\simeq 1000\,{\rm V\,m^{-1}}$. Demonstrate that the corresponding amplitude of the magnetic component is $\simeq 3.4\times 10^{-6}\,{\rm T}$. [From Pain 1999.]
9. According to Einstein's famous formula, $E = m\,c^2$, where $E$ is energy, $m$ is mass, and $c$ is the velocity of light in vacuum. This formula implies that anything that possesses energy also has an effective mass. Use this idea to show that an electromagnetic wave of mean intensity $\langle I\rangle$ (energy per unit time per unit area) has an associated mean pressure (momentum per unit time per unit area) $\langle P\rangle = \langle I\rangle/c$. Hence, estimate the pressure due to sunlight at the Earth's surface (assuming that the sunlight is completely absorbed).
10. A glass lens is coated with a non-reflecting coating of thickness one quarter of a wavelength (in the coating) of light whose wavelength in air is . The index of refraction of the glass is , and that of the coating is . The refractive index of air can be taken to be unity. Show that the coefficient of reflection for light normally incident on the lens from air is
where is the wavelength of the incident light in air. Assume that , and that this value remains approximately constant for light whose wavelengths lie in the visible band. Suppose that , which corresponds to green light. It follows that for green light. What is for blue light of wavelength , and for red light of wavelength ? Comment on how effective the coating is at suppressing unwanted reflection of visible light incident on the lens. [From Crawford 1968.]
11. A glass lens is coated with a non-reflective coating whose thickness is one quarter of a wavelength (in the coating) of light whose frequency is $f_0$. Demonstrate that the coating also suppresses reflection from light whose frequency is $3f_0$, $5f_0$, et cetera, assuming that the refractive indices of the coating and the glass are frequency independent.
12. A plane electromagnetic wave, linearly polarized in the $x$-direction, and propagating in the $z$-direction through an electrically conducting medium of conductivity $\sigma$, is governed by
$$\frac{\partial E_x}{\partial t} = -\frac{1}{\epsilon_0}\left(\sigma\,E_x + \frac{\partial H_y}{\partial z}\right),\qquad \frac{\partial H_y}{\partial t} = -\frac{1}{\mu_0}\,\frac{\partial E_x}{\partial z},$$
where $E_x$ and $H_y$ are the electric and magnetic components of the wave. (See Appendix C.) Derive an energy conservation equation of the form
$$\frac{\partial \mathcal{E}}{\partial t} + \frac{\partial \mathcal{I}}{\partial z} = -\sigma\,E_x^{\,2},$$
where $\mathcal{E}$ is the electromagnetic energy per unit volume, and $\mathcal{I}$ the electromagnetic energy flux. Give expressions for $\mathcal{E}$ and $\mathcal{I}$. What does the right-hand side of the previous equation represent? Demonstrate that $E_x$ obeys the wave-diffusion equation
$$\frac{\partial^2 E_x}{\partial z^2} = \epsilon_0\,\mu_0\,\frac{\partial^2 E_x}{\partial t^2} + \mu_0\,\sigma\,\frac{\partial E_x}{\partial t}.$$
Consider the high frequency, low conductivity, limit $\omega \gg \sigma/\epsilon_0$. Show that a wave propagating into the medium varies as
$$E_x(z,t) \simeq E_0\,e^{-z/\delta}\,\cos[\omega\,(t - z/c)],$$
where $c = 1/\sqrt{\epsilon_0\,\mu_0}$ and $\delta = (2/\sigma)\sqrt{\epsilon_0/\mu_0}$. Demonstrate that $\delta \gg c/\omega$: that is, the wave penetrates many wavelengths into the medium. Estimate how far a high frequency electromagnetic wave penetrates into a low conductivity conducting medium.
13. Sound waves travel horizontally from a source to a receiver. Let the source have the speed $u_s$, and the receiver the speed $u_r$ (in the same direction). In addition, suppose that a wind of speed $w$ (in the same direction) is blowing from the source to the receiver. Show that if the source emits sound whose frequency is $f_0$ in still air then the frequency recorded by the receiver is
$$f = f_0\,\frac{v + w - u_r}{v + w - u_s},$$
where $v$ is the speed of sound in still air. Note that if the velocities of the source and receiver are the same then the wind makes no difference to the frequency of the recorded signal. [Modified from French 1971.]
Richard Fitzpatrick 2013-04-08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9773074388504028, "perplexity": 421.53089884597995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370519111.47/warc/CC-MAIN-20200404011558-20200404041558-00530.warc.gz"} |
https://www.physicsforums.com/threads/finding-c-in-a-joint-pdf.842106/ | # Finding c in a joint PDF
Tags:
1. Nov 8, 2015
### whitejac
1. The problem statement, all variables and given/known data
E = { (x,y) | |x| + |y| ≤ 1}
$$f_{X,Y}(x,y) = \begin{cases} c, & (x,y)\in E\\ 0, & \text{otherwise.}\end{cases}$$
Find C.
Find the Marginal PDFs
Find the conditional X given Y=y, where -1 ≤ y ≤ 1.
Are X and Y independent?
2. Relevant equations
I'm taking a guess here in the solution...
but F(x,y) = F(x)F(y)
and f(x,y) = f(x)f(y)
These will be used later, when I wish to find the marginals and test independence.
3. The attempt at a solution
So, this is a uniform distribution (if it's not stated in my pdf, it's stated in the problem's text).
Considering that this is an "area", I should just be able to integrate the density over the region, correct?
Would that be $\int_0^1\!\int_0^1 c\,dx\,dy$, giving c = 1? Or do I base it off of E, so that it should be bounded on [-1,1]?
This is what I believe it to be, but I'm not entirely sure. My professor gave a solution that was probably more general, where he found something else first, but I didn't quite get it because he was rushing at the end of class.
After finding C, the marginals are the integrals with respect to y and x to give us the "trace" of the density function.
2. Nov 8, 2015
### andrewkirk
Because it's a pdf, we must have $\int_{-\infty}^\infty\int_{-\infty}^\infty f_{X,Y} (x,y)dx dy =\int_E c dA=c\int_E dA=1$ where the $dA$ indicates integrating by area.
So just work out that last integral, which is the area of $E$, and then figure out $c$ by the requirement for the final equality to hold.
You'll find it easier to work out E if you first draw a picture.
3. Nov 8, 2015
### whitejac
Okay, that's what I thought after reviewing a bit of what this meant.
Drawing E, we have what looks essentially like a diamond where each point is at y = 1, y = -1, x = 1, x = -1. This would bound the area from [-1,1] for dx and dy.
My question now is: when evaluating the area, do we use E as the function of integration, or do we use the PDF? I'm trying to grapple with the idea of two things that are related but not the same. We have a distribution of probabilities... across a geometric area E?
4. Nov 8, 2015
### andrewkirk
You are trying to get the cumulative probability of all (x,y) pairs, which requires integrating the pdf. So it's the latter. E is a set - a region in the number plane - not a function that can be integrated in this context. The relevance of E is that you know that the pdf is only nonzero inside E, so you can restrict your integration to inside E without changing the result.
5. Nov 8, 2015
### whitejac
Oh okay, and within that set it has a uniform probability (0,1) where each point is smaller in probability by a factor of 1/4 because E takes the shape it does?
6. Nov 8, 2015
### LCKurtz
Remember that $\iint_E c~dydx = c\iint_E 1~dydx = c\cdot \text{Area of }E$. You shouldn't need calculus and integrals to figure out that last expression.
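(To make the numbers concrete, here is a small Monte Carlo cross-check, an illustration added here rather than part of the thread. It confirms area(E) = 2, hence c = 1/2, and previews the triangular marginal $f_X(x) = 1 - |x|$ needed for the rest of the problem.)

```python
# Monte Carlo cross-check: area(E) = 2, so c = 1/2, and the X-marginal
# is the triangle f_X(x) = 1 - |x| on [-1, 1].
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(1_000_000, 2))
inside = np.abs(xy[:, 0]) + np.abs(xy[:, 1]) <= 1.0

area = 4.0 * inside.mean()        # sampling box [-1,1]^2 has area 4
print(area)                       # ~ 2.0  =>  c = 1/2

# f_X(x0) = c * (length of E's y-section at x0) = 0.5 * 2*(1 - |x0|)
x0, h = 0.5, 0.01
band = inside & (np.abs(xy[:, 0] - x0) < h)
section = 4.0 * band.mean() / (2 * h)   # ~ 2*(1 - |x0|) = 1
print(0.5 * section)                    # ~ 0.5 = 1 - |x0|
```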
http://www.complexnetworks.fr/titre-a-venir-9/ | # Motifs Distribution in Exchangeable Random Networks
Pierre-André Maugis
Vendredi 28 février 2014 à 11h, salle 25-26/101
In this talk I will show how the relationship between the local and global characteristics of random graphs can be used for statistical inference. There exists a long history of research on graphs/networks as mathematical objects. However, the need for methods allowing for statistical inference based on network data is recent, and was prompted by the current boom in available network datasets along with their relevance to research in the social and biological sciences. The problem we face, set in the classical statistical paradigm, consists in seeing the network as issuing from a random process, and in trying to infer from the observed network some characteristics of that random process. The difficulty is both theoretical and practical: we only observe one realisation of the network (where statisticians usually assume they have a large number of repeated measurements), and networks are large objects, easily involving millions of connections, which raises computational issues. Studying networks through the local characteristics that are motifs (e.g. triangles, squares, cliques, …) offers a solution to both problems at once. Motifs are small (and hence computationally amenable), and occur multiple times throughout the network. Moreover, as we will show, under the assumption of exchangeability one can relate the random process from which the network ensued to the distribution of realised motifs. Using these results we will describe how one can use motifs to produce sound statistical inference on network data. This is joint work with Sofia Olhede and Patrick Wolfe.
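As a toy illustration of the local-to-global link described above (added here for concreteness, not taken from the talk): under the simplest exchangeable model, the Erdős–Rényi graph, the expected triangle count is $\binom{n}{3}p^3$, so an observed motif count already yields a method-of-moments estimator of the model parameter.

```python
# Triangle counts in an Erdos-Renyi graph vs. the exchangeable-model prediction.
import networkx as nx
from math import comb

n, p = 300, 0.05
G = nx.gnp_random_graph(n, p, seed=1)

triangles = sum(nx.triangles(G).values()) // 3   # each triangle counted at 3 nodes
expected = comb(n, 3) * p**3
print(triangles, expected)

# method-of-moments estimate of p from the observed motif count
p_hat = (triangles / comb(n, 3)) ** (1 / 3)
print(p_hat)
```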
https://www.physicsforums.com/threads/nested-trig-functions.10283/ | # Nested trig functions?
1. Dec 4, 2003
### Allday
How do I go about integrating
sin[y*sin(x)]*sin(x) with respect to x from -pi to pi?
I've got that it's an even function, so I can change the limits to 0 to pi and double it, but I can't find the analytic answer. By parts? Substitution? Although for substitution I assume there needs to be a cos function somewhere. Any ideas?
Thanks
2. Dec 4, 2003
### HallsofIvy
Staff Emeritus
So y is a constant here?
Do you have any reason to think that there IS an elementary anti-derivative? (Most functions do not.)
3. Dec 10, 2003
### Allday
Actually, I have no reason to believe it is doable. It came up in a proof I was working on involving some Bessel function. I was just wondering if there was a standard method for nested trig functions.
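For the record, this one does have a closed form tied to the Bessel connection mentioned above: from the integral representation $J_1(y) = \frac{1}{\pi}\int_0^\pi \cos(\theta - y\sin\theta)\,d\theta$, the $\cos\theta\cos(y\sin\theta)$ piece integrates to zero by symmetry about $\theta=\pi/2$, leaving $\int_{-\pi}^{\pi}\sin(y\sin x)\,\sin x\,dx = 2\pi J_1(y)$. A quick numerical check (an added sketch, not from the thread):

```python
# Check: integral of sin(y*sin(x))*sin(x) over [-pi, pi] equals 2*pi*J1(y).
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

for y in (0.5, 1.0, 3.7):
    val, _ = quad(lambda x: np.sin(y * np.sin(x)) * np.sin(x), -np.pi, np.pi)
    print(val, 2 * np.pi * j1(y))
```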
http://mathhelpforum.com/discrete-math/206135-periphery-graph-print.html | # periphery of a graph
• Oct 26th 2012, 11:24 AM
xixi
periphery of a graph
Let (eccentricity of v) $e(v)=Max_{u \in V(G)} d(u,v)$ and $diam(G)= Max_{v \in V(G)} e(v)$, where V(G) is the set of vertices of G and d(u,v) is the distance between u and v. Let (periphery of G) Pe(G) be the graph induced by the vertices of G that have eccentricity equivalent to diam(G). Now suppose that the graph H has n vertices. Prove that H is Pe(G) if and only if $\Delta(H) \le n-2$. ( $\Delta(H)$ is the maximum degree in H)
• Oct 26th 2012, 11:51 AM
johnsomeone
Re: periphery of a graph
You need an additional constraint/premise of some kind. It's not true as stated.
$\text{If } K_n \text{ is the complete graph on } n \text{ verticies } (n \ge 2), \text{ then}$
$diam(K_n) = 1, \text{ and } e(v) = 1 = diam(K_n) \ \forall \ v \in V(K_n),$
$\text{and so } Pe(K_n) = K_n.$
$\text{But } \Delta(K_n) = n-1 = |V(K_n)| - 1,$
$\text{which contradicts this problem's claim that}$
$H = Pe(K_n) (= K_n) \Rightarrow \Delta(H) \le |V(H)| - 2.$
• Oct 26th 2012, 07:17 PM
xixi
Re: periphery of a graph
You're right. Assume that no vertex in H has eccentricity 1.
• Oct 26th 2012, 11:39 PM
johnsomeone
Re: periphery of a graph
$\text{Let } G \text{ be a connected graph such that } diam(G) > 1 \text{ (i.e connected, not complete, having 3+ verticies.)}$
$\text{ASSUME } \exists v \in H = Pe(G) \ni deg_H(v) = |V(H)|-1.$
$\text{Since } v \in Pe(G), \exists w \in G \ni e(v) = dist(v, w) = diam(G).$
$\text{Since } diam(G) = dist(v, w) \le e(w) \le diam(G),$
$\text{have that } e(w) = diam(G), \text{ and so } w \in H.$
$\text{But then } v, w \in H, \text{ and so since } deg_H(v) = |V(H)|-1,$
$\text{have that } (v,w) \in E(H).$
$\text{Since } H \text{ is an induced subgraph of } G, \text{ have that } (v,w) \in E(G).$
$\text{But } (v,w) \in E(G) \Rightarrow dist(v, w) = 1 \Rightarrow diam(G) = dist(v, w) = 1, \text{ contrary to the premises.}$
$\text{Thus the initial assumption led to a contradiction.}$
$\text{Thus } x \in H \Rightarrow deg_H(x) \le |V(H)|-2.$
$\text{Therefore } \Delta(Pe(G)) \le |V(Pe(G))|-2.$
• Oct 27th 2012, 04:22 AM
xixi
Re: periphery of a graph
• Oct 27th 2012, 11:13 AM
johnsomeone
Re: periphery of a graph
$\text{Let } H \text{ be a graph such that } \Delta(H) < |V(H)|-1 \text{ and } |H| > 1.$
$\text{Then for all } v \in H \text{ have that } deg_H(v) < |V(H)|-1,$
$\text{and so if } v \in H \text{ there exists } w \in H \text{ such that } (v, w) \notin E(H).$
$\text{Define } G \text{ by } V(G) = V(H) \cup \{ x \}, E(G) = E(H) \cup \{ (x,v) | v \in V(H) \}.$
$\text{Then for all } v \in H \subset G, dist_G(v, x) = 1 \text{ since } (v, x) \in E(G).$
$\text{Thus } e_G(x) = 1.$
$\text{From before, if } v \in H \subset G, \text{ there exists } w \in H \text{ such that } (v,w) \notin E(H),$
$\text{and so by the definition of } E(G), \text{ also have that } (v,w) \notin E(G).$
$\text{So for all } v \in H \subset G, \text{ this proves that } e_G(v) > 1, \text{ since } dist_G(v, w) > 1.$
$\text{Now, for all } v, w \in H \subset G, \{(v, x), (x, w) \} \subset E(G),$
$\text{and so, for all } v, w \in H \subset G, \ dist_G(v, w) \le 2.$
$\text{But from that it follows that, for all } v \in H \subset G, e_G(v) \le 2.$
$\text{(That conclusion also requires remembering that } dist(v, x) = 1.)$
$\text{So have shown that, for all } v \in H \subset G, 1 < e_G(v) \le 2.$
$\text{Therefore, if } v \in G, v \ne x, \text{ then } e_G(v) = 2. \text{ Also, } e_G(x) = 1.$
$\text{That proves that } diam(G) = 2, \text{ and so also that } Pe(G) = H,$
$\text{since } Pe(G) = \ < \{ v \in V(G) | e_G(v) = diam(G) = 2 \} > \ = \ < H > \ = H.$
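The sufficiency construction above is easy to check computationally: add a universal vertex $x$ to $H$ and confirm that the periphery of the resulting graph is exactly $V(H)$. A minimal sketch (Python with the networkx library assumed; the path graph $P_4$ is just one convenient choice of $H$ with $\Delta(H) \le n-2$):

```python
import networkx as nx

H = nx.path_graph(4)                 # Delta(H) = 2 <= |V(H)| - 2 = 2
G = H.copy()
G.add_node("x")
G.add_edges_from(("x", v) for v in H.nodes)  # x is a universal vertex

assert nx.diameter(G) == 2                   # as in the proof
assert set(nx.periphery(G)) == set(H.nodes)  # Pe(G) = H
print("Pe(G) =", nx.periphery(G))
```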
http://www.sciencehq.com/chemistry/physical-significance-of-entropy.html | # Physical significance of Entropy
The entropy of a substance is real physical quantity and is a definite function of the state of the body like pressure, temperature, volume of internal energy.
It is difficult to form a tangible conception of this quantity because it can not be felt like temperature or pressure. We can, however, readily infer it from the following aspects:
1. Entropy and unavailable energy
The second law of thermodynamics tells us that whole amount of internal energy of any substance is not convertible into useful work. A portion of this energy which is used for doing useful work is called available energy. The remaining part of the energy which cannot be converted into useful work is called unavailable energy. Entropy is a measure of this unavailable energy. In fact, the entropy may be regarded as the unavailable energy per unit temperature.
I.e.
$\text{Entropy} = \dfrac{\text{Unavailable energy}}{\text{Temperature}}$
or, $\text{Unavailable energy} = \text{Entropy} \times \text{Temperature}$
The concept of entropy is of great value, and it provides information about the structural changes accompanying a given process.
2. Entropy and disorder
Entropy is a measure of the disorder or randomness in the system. When a gas expands into vacuum, water flows out of a reservoir, spontaneous chain reaction takes place, an increase in the disorder occurs and therefore entropy increases.
Similarly, when a substance is heated or cooled there is also a change in entropy. Thus an increase in entropy implies a transition from an ordered to a less ordered state of affairs.
3. Entropy and probability
Why is disorder favoured? This can be answered by considering an example: when a single coin is flipped, there is an equal chance that a head or a tail will show up. When two coins are flipped, two heads or two tails can show up, but one head and one tail is twice as likely as either. This shows that disorder is more frequent than order.
Changes in order are expressed quantitatively in terms of the entropy change, $\Delta S$. How are entropy and order in the system related? Since a disordered state is more probable for a system than an ordered one (see figure), entropy and thermodynamic probability are closely related.
Order and probability
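The coin-flip argument can be made quantitative by counting microstates: the number of ways to get k heads out of n flips is a binomial coefficient, and the Boltzmann entropy is proportional to the logarithm of that count. A small illustrative script (Python; the numbers are chosen purely for illustration):

```python
from math import comb, log

n = 100
for k in (0, 25, 50):             # all tails, quarter heads, half heads
    W = comb(n, k)                # number of microstates with k heads
    S = log(W) if W > 1 else 0.0  # entropy in units of Boltzmann's constant
    print(f"k={k:3d}  W={W:.3e}  S/k_B={S:.2f}")
# The half-and-half (most disordered) macrostate has by far the most
# microstates, hence the highest entropy.
```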
Features of entropy:
(1) It is an extensive property and a state function.
(2) Its value depends upon the mass of substance present in the system.
(3) $\Delta S = S_{final}- S_{initial}$
(4) At equilibrium, $\Delta S = 0$
(5) For a cyclic process, $\Delta S = 0$
(6) For a natural (spontaneous) process, $\Delta S > 0$, i.e. increasing.
(7) For a reversible adiabatic process, $\Delta S = 0$
http://mathhelpforum.com/calculus/140422-help-epsilon-delta-continuous-proof.html | # Math Help - Help with Epsilon Delta continuous proof
1. ## Help with Epsilon Delta continuous proof
Question: Prove that $f(x)=2x^2-\frac{1}{x}$ is continuous on $(2,3)$.
ok so $|f(x)-f(c)|=\left|\left(2x^2-\frac{1}{x}\right)-\left(2c^2-\frac{1}{c}\right)\right|\leq|2x^2-2c^2|+\left|\frac{1}{x}-\frac{1}{c}\right|$
$\leq2|x-c||x+c|+\left|\frac{1}{x}-\frac{1}{c}\right|$
when $|x-c|<\delta<1$ then
$\left|\frac{1}{x}-\frac{1}{c}\right|\leq\left|\frac{1}{x}\right|+\left|\frac{1}{c}\right|\leq|x|+|c|<1+2|c|$ (the middle step uses $1/|t|\le|t|$ for $|t|\ge1$, which holds here since $x,c\in(2,3)$)
and also $|x+c|\leq|x|+|c|<1+2|c|$
so $2|x-c||x+c|+\left|\frac{1}{x}-\frac{1}{c}\right|<2|x-c|(1+2|c|)+(1+2|c|)=(1+2|c|)(2|x-c|+1)$
$(1+2|c|)(2|x-c|+1)<\epsilon\rightarrow|x-c|<\frac{\epsilon}{2(1+2|c|)}-\frac{1}{2}$
so here's where I get stuck: the algebra says that if I set $\delta=\min\left\{1,\frac{\epsilon}{2(1+2|c|)}-\frac{1}{2}\right\}$
then I run into the problem that delta is not 100% going to be greater than 0, namely if $\frac{\epsilon}{2(1+2|c|)}$ is less than one half. So if anyone has any tips or suggestions, that would be amazing.
2. You need to use the absolute value.
so you're saying that if I set delta equal to
$\left|\frac{\epsilon}{2(1+2|c|)}-\frac{1}{2}\right|$ then it works?
maybe I'm dumb but I don't see the connection.
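One standard way around the stuck point (not from the thread itself) is to avoid the crude bound $|1/x-1/c|\le|1/x|+|1/c|$ altogether: on $(2,3)$ one has $|1/x-1/c|=|x-c|/(xc)\le|x-c|/4$ and $|x+c|<6$, so $|f(x)-f(c)|\le(12+\tfrac14)|x-c|$, and $\delta=\epsilon/12.25$ always works with no risk of a negative $\delta$. A quick numerical sanity check of that Lipschitz-style bound (Python sketch; the sample point $c=2.5$ is arbitrary):

```python
import numpy as np

def f(x):
    return 2 * x**2 - 1 / x

c, eps = 2.5, 1e-3
delta = eps / 12.25                  # from |f(x)-f(c)| <= 12.25|x-c| on (2,3)
xs = np.linspace(c - delta, c + delta, 10001)
print(np.abs(f(xs) - f(c)).max() < eps)   # True
```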
http://slideplayer.com/slide/4213395/ | # Right angled triangle C is the hypotenuse (Always the longest side) For angle θ (a) is the opposite and (b) is the adjacent For angle α (b) is the opposite.
## Presentation on theme: "Right angled triangle C is the hypotenuse (Always the longest side) For angle θ (a) is the opposite and (b )is the adjacent For angle α (b) is the opposite."— Presentation transcript:
Right angled triangle: c is the hypotenuse (always the longest side). For angle θ, (a) is the opposite and (b) is the adjacent. For angle α, (b) is the opposite and (a) is the adjacent.
Trigonometry Functions: Sine = opposite / hypotenuse; Cosine = adjacent / hypotenuse; Tangent = opposite / adjacent.
This is a Trigonometric Identity. If we divide Sine by Cosine we get: sin θ = opposite/hypotenuse and cos θ = adjacent/hypotenuse; the hypotenuses cancel each other out, so we get opposite/adjacent, which is Tan (tangent). So tan θ = sin θ / cos θ.
More functions: we can also divide "the other way around" (such as hypotenuse/opposite instead of opposite/hypotenuse), which gives us three more functions. Cosecant Function: csc(θ) = Hypotenuse / Opposite. Secant Function: sec(θ) = Hypotenuse / Adjacent. Cotangent Function: cot(θ) = Adjacent / Opposite.
So taking reciprocals of these we get: sin(θ) = 1/cosec(θ), cos(θ) = 1/sec(θ), tan(θ) = 1/cot(θ).
More functions, also the other way around: cosec(θ) = 1/sin(θ), sec(θ) = 1/cos(θ), cot(θ) = 1/tan(θ). And we also have: cot(θ) = cos(θ)/sin(θ) (from tan = sin/cos).
Pythagoras: a² + b² = c², where c is the hypotenuse (the longest side).
TRIGONOMETRY
Pythagoras: a² + b² = c² can be written as a²/c² + b²/c² = c²/c² = 1.
Proof: sin θ can be written as a/c and cos θ can be written as b/c. (a/c)² is sin²θ and (b/c)² is cos²θ, so (a/c)² + (b/c)² = 1 gives sin²θ + cos²θ = 1. (sin²θ means: find the sine of θ, then square it; sin θ² means: square θ, then find the sine.)
Rearranged versions: sin²θ = 1 − cos²θ and cos²θ = 1 − sin²θ.
Rearranged versions: dividing sin²θ + cos²θ = 1 through by cos²θ gives sin²θ/cos²θ + cos²θ/cos²θ = 1/cos²θ, i.e. tan²θ + 1 = sec²θ; so tan²θ = sec²θ − 1, or sec²θ = 1 + tan²θ.
Rearranged versions: dividing through by sin²θ instead gives sin²θ/sin²θ + cos²θ/sin²θ = 1/sin²θ, i.e. 1 + cot²θ = cosec²θ; so cot²θ = cosec²θ − 1, or cosec²θ = 1 + cot²θ.
Circular motion to sine curve: plotting the vertical displacement against the angle (0°, 90°, 180°, 270°, 360°) over time gives y = R sin θ. In the triangle formed by the first part of the motion, the vertical line (y) is the opposite of the angle formed (θ) and the hypotenuse is the radius of the circle (R). Sin θ = y/R, so y = R sin θ. At 90°, sin θ = 1, so y = R.
More sine curves: y = R sin θ, y = 2R sin θ, y = 0.5R sin θ.
For y = sin 2θ, two waves fit in 360°. For y = sin 3θ, three waves fit in 360°, and so on. For y = sin 0.5θ, one wave would stretch over 720°.
Cosine curves: cosine 90° = 0, cosine 0° = 1.
Graph of sin²θ: it has to be positive because a squared quantity cannot be negative.
Graph of cos²θ
C.A.S.T: going around the quadrants from 0° to 360° — A = all positive (0°–90°), S = only sine positive (90°–180°), T = only tangent positive (180°–270°), C = only cosine positive (270°–360°).
Finding sin, cos and tan of angles. Sin 245° = sin(245° − 180°) = sin 65° = 0.906, so sin 245° = −0.906 (third quadrant: negative). Sin 118° = sin(180° − 118°) = sin 62° = 0.883, so sin 118° = +0.883 (second quadrant: positive). Cos 162° = cos(180° − 162°) = cos 18° = 0.951, so cos 162° = −0.951 (second quadrant: negative). Cos 285° = cos(360° − 285°) = cos 75° = 0.259, so cos 285° = +0.259 (fourth quadrant: positive). Tan 196° = tan(196° − 180°) = tan 16° = 0.287, so tan 196° = +0.287 (third quadrant: positive). Tan 282° = tan(360° − 282°) = tan 78° = 4.705, so tan 282° = −4.705 (fourth quadrant: negative).
Finding angles: first quadrant θ; 2nd quadrant 180° − θ; 3rd quadrant 180° + θ; 4th quadrant 360° − θ.
Finding angles: for a reference angle of 30°, the four quadrant angles are 30°, 180° − 30° = 150°, 180° + 30° = 210°, and 360° − 30° = 330°.
Finding angles: find all the angles between 0° and 360° that satisfy the equation 8 sin θ − 4 = 0. Rearrange: 8 sin θ = 4, so sin θ = 4/8 = 0.5. Sin⁻¹ 0.5 = 30°, and 180° − 30° = 150°.
Find all the angles between 0° and 360° that satisfy the equation 6 cos²θ = 1.8. Rearrange: cos²θ = 1.8 ÷ 6 = 0.3, so cos θ = √0.3 = ±0.548. Cos⁻¹(+0.548) (1st and 4th quadrants positive) gives 56.8° and 360° − 56.8° = 303.2°. Cos⁻¹(−0.548) (2nd and 3rd quadrants negative) gives 180° − 56.8° = 123.2° and 180° + 56.8° = 236.8°.
Finding angles: 56.8° and 303.2° lie in the 1st and 4th quadrants (positive cos); 123.2° and 236.8° lie in the 2nd and 3rd quadrants (negative cos).
Finding angles: solve 2 tan²B + tan B = 6 for all angles between 0° and 360°. Let tan B = x, so 2x² + x = 6, or 2x² + x − 6 = 0. Solve as a quadratic equation using the formula x = (−b ± √(b² − 4ac)) / 2a, where a = 2, b = 1 and c = −6.
x = (−1 ± √(1² − 4×2×(−6))) / 4 = (−1 ± √(1 + 48)) / 4 = (−1 ± √49) / 4 = (−1 ± 7) / 4 = 6/4 or −8/4, so tan B = 1.5 or −2. For tan B = 1.5 (1st and 3rd quadrants): 56.3° or 180° + 56.3° = 236.3°. For tan B = −2: 2nd quadrant 180° − 63.43° = 116.57°; 4th quadrant 360° − 63.43° = 296.57°.
Formulae for sin (A + B), cos (A + B), tan (A + B) (compound angles) sin (A + B) = sin A cos B + cos A sin B sin (A - B) = sin A cos B - cos A sin B cos (A + B) = cos A cos B - sin A sin B cos (A - B) = cos A cos B + sin A sin B These will come in handy later
a sin θ ± b cos θ can be expressed in the form R sin(θ ± α), where R is the maximum value of the sine wave (since sin(θ ± α) must equal 1 or −1) and α is the reference angle for finding θ.
Finding α: using sin(A + B) = sin A cos B + cos A sin B (from before), we can expand R sin(θ + α) as follows: R sin(θ + α) ≡ R(sin θ cos α + cos θ sin α) ≡ R sin θ cos α + R cos θ sin α.
So a sin θ + b cos θ ≡ R cos α sin θ + R sin α cos θ, giving a = R cos α and b = R sin α.
Finding α: b ÷ a = R sin α ÷ R cos α = tan α, so tan α = b/a.
Using the equations: now we square a = R cos α and b = R sin α and add them: a² + b² = R² cos²α + R² sin²α = R²(cos²α + sin²α) = R² (because cos²A + sin²A = 1).
Hence R² = a² + b², so R = √(a² + b²) (Pythagoras).
The important bits: tan α = b/a and R² = a² + b² (Pythagoras).
For the minus case: a sin θ − b cos θ = R sin(θ − α), with tan α = b/a.
Cosine version: a sin θ + b cos θ ≡ R cos(θ − α). Therefore tan α = a/b. (Note the fraction is a/b for the cosine case, whereas it is b/a for the sine case.) We find R the same as before: R = √(a² + b²). So the sum of a sine term and a cosine term has been combined into a single cosine term: a sin θ + b cos θ ≡ R cos(θ − α).
Minus cosine version: if we have a sin θ − b cos θ and we need to express it in terms of a single cosine function, the formula we need to use is: a sin θ − b cos θ ≡ −R cos(θ + α).
Graph of 4sinθ
Graph of 4sinθ and 3 cosθ
Resultant graph of 4sinθ + 3 cosθ
The radian: for an arc of length s and radius r, the angle in radians is the length of the arc divided by the radius of the circle. A full circle is 2π radians; that is, 360° = 2π radians.
Circular motion to sine curve: plotting the vertical displacement against the angle (0, π/2, π, 3π/2, 2π radians) over time gives y = R sin θ. In the triangle formed by the first part of the motion, the vertical line (y) is the opposite of the angle formed (θ) and the hypotenuse is the radius of the circle (R). Sin θ = y/R, so y = R sin θ. At π/2, sin θ = 1, so y = R.
Angular velocity (ω): angular velocity is the rate of change of an angle in circular motion and has the symbol ω. ω = radians ÷ time (secs), so angles can be expressed as ωt.
Example: for the expression 3 sin ωt − 6 cos ωt: (i) express it in R sin(ωt − α) form; (ii) state the maximum value; (iii) find the value at which the maximum occurs.
R = √(3² + 6²) = √(9 + 36) = √45 = 6.7, so the maximum value is 6.7.
Tan α = b/a = 6/3 = 2, giving α = 63.4°, or 1.107 radians (63.4° × π ÷ 180).
The maximum value occurs when sin(ωt − 1.107) = 1, i.e. when (ωt − 1.107) = π/2 radians. Since π/2 radians = 1.57, ωt = 1.57 + 1.107 = 2.678 radians. The maximum value occurs at 2.678 radians.
Summary: (a) 3 sin ωt − 6 cos ωt = 6.7 sin(ωt − 1.107); (b) maximum = 6.7; (c) the maximum occurs at 2.678 radians.
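The worked example is easy to verify numerically: compute R and α from a and b and confirm that R sin(ωt − α) reproduces a sin ωt − b cos ωt pointwise. A short check (Python with NumPy assumed):

```python
import numpy as np

a, b = 3.0, 6.0              # the example: 3 sin(wt) - 6 cos(wt)
R = np.hypot(a, b)           # sqrt(a^2 + b^2) ~ 6.708
alpha = np.arctan2(b, a)     # tan(alpha) = b/a -> ~1.107 rad (63.4 deg)

wt = np.linspace(0.0, 2.0 * np.pi, 1000)
lhs = a * np.sin(wt) - b * np.cos(wt)
rhs = R * np.sin(wt - alpha)
print(R, alpha, np.abs(lhs - rhs).max())   # max error ~ 1e-15
```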
http://mathhelpforum.com/trigonometry/96998-translations-graphs-sine-cosine-functions-print.html | # translations of the graphs of sine and cosine functions
• August 4th 2009, 05:48 PM
KoolFlair
translations of the graphs of sine and cosine functions
I have a test tomorrow and just went over the information today, so I am a little confused. Some of the graphs I get right, but on some of them I am getting the wrong x values.
So if I have y=-4 sin 2(x-pi/2)
so the Amplitude is 4
and the period is pi
but how in the world do I get the x axis?
I've tried everything and I'll find one way works for one type but doesn't work for another. Is it really that way? I was told to multiply pi/2 by 1/4, 2/4 and 3/4 but it doesn't come out correct. Does anyone know what to add or multiply by to get the x axis? Thanks
• August 4th 2009, 05:57 PM
pickslides
Quote:
Originally Posted by KoolFlair
so the Amplitude is 4
correct
Quote:
Originally Posted by KoolFlair
and the period is pi
correct
Quote:
Originally Posted by KoolFlair
So if I have y=-4 sin 2(x-pi/2)
but how in the world do I get the x axis?
find the x axis intercepts by making y = 0
$y=-4 \sin 2(x-\frac{\pi}{2})$
$0=-4 \sin 2(x-\frac{\pi}{2})$
$0= \sin 2(x-\frac{\pi}{2})$
can you solve from here?
• August 4th 2009, 05:59 PM
skeeter
Quote:
Originally Posted by KoolFlair
I have a test tomorrow and just went over the information today, so I am a little confused. Some of the graphs I get right, but on some of them I am getting the wrong x values.
So if I have y=-4 sin 2(x-pi/2)
so the Amplitude is 4
and the period is pi
but how in the world do I get the x axis?
what do you mean by "getting the x-axis"?
I've tried everything and I'll find one way works for one type but doesn't work for another. Is it really that way? I was told to multiply pi/2 by 1/4, 2/4 and 3/4 but it doesn't come out correct. Does anyone know what to add or multiply by to get the x axis? Thanks
...
• August 4th 2009, 06:03 PM
KoolFlair
I'm not sure how to finish it that way.
I was told to make an inequality. So I got
pi/2 < x < 9pi/2
And pi/2 starts the x
and
9pi/2 ends as the 4th, so I need to find the 3 in between. Do I add pi/2 to each one?
• August 4th 2009, 06:08 PM
KoolFlair
Quote:
Originally Posted by skeeter
...
like if you set up an x/y table. I need to get the x, then take the x and insert it into the cos x
• August 4th 2009, 06:22 PM
pickslides
Quote:
Originally Posted by KoolFlair
I'm not sure how to finish it that way.
I was told to make an inequality. So I got
pi/2 < x < 9pi/2
And pi/2 starts the x
and
9pi/2 ends as the 4th, so I need to find the 3 in between. Do I add pi/2 to each one?
The function
$y=-4 \sin 2(x-\frac{\pi}{2})$
has period of $\pi$ with x-intercepts at $0+\frac{\pi}{2},\frac{\pi}{2}+\frac{\pi}{2}$ and $\pi+\frac{\pi}{2}$
This gives all the solutions on $[\frac{\pi}{2},\frac{3\pi}{2}]$
If you need solutions on $[\frac{\pi}{2},\frac{9\pi}{2}]$
keep adding $\pi$ to the solutions above
• August 4th 2009, 06:30 PM
KoolFlair
Thanks pickslides
That's what I thought.
But my solutions manual says it should be
Pi/2
11pi/4
5pi
19pi/4
9pi/2
So how are they getting 11pi/4?
• August 4th 2009, 09:14 PM
yeongil
You sure about that? I graphed the function and I'm getting x-intercepts at $0,\; \frac{\pi}{2},\; \pi,\; \frac{3\pi}{2},...$.
Were you perhaps given $y = -4{\color{red}\cos} \left(2\left(x - \frac{\pi}{2}\right)\right)$ instead? That function has its x-intercepts at
$\frac{\pi}{4},\; \frac{3\pi}{4},\; \frac{5\pi}{4},\; \frac{7\pi}{4},...$
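A quick way to settle which function matches which answers is to test the candidate zeros numerically. A minimal check (Python with NumPy assumed):

```python
import numpy as np

f_sin = lambda x: -4 * np.sin(2 * (x - np.pi / 2))
f_cos = lambda x: -4 * np.cos(2 * (x - np.pi / 2))

sin_zeros = [n * np.pi / 2 for n in range(10)]             # 0, pi/2, pi, ...
cos_zeros = [np.pi / 4 + n * np.pi / 2 for n in range(9)]  # pi/4, 3pi/4, ...

print(all(abs(f_sin(x)) < 1e-12 for x in sin_zeros))  # True
print(all(abs(f_cos(x)) < 1e-12 for x in cos_zeros))  # True
```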
http://mathoverflow.net/questions/34241/laplacian-operator-and-relation-to-the-laplace-transform?sort=oldest | # Laplacian operator and relation to the Laplace Transform
I'm trying to understand why the Laplacian operator is used in blob detection in image analysis. I must admit that in trying to figure out why the Laplacian is useful in this application, I've really confused myself with the different uses of the word 'Laplace.' For instance, Wikipedia has many articles on this, and the ones I'm having trouble unifying conceptually are the Laplace Transform and the Laplace Operator.
From co-workers and some reading on the internet, I have come to very shallowly think of my Laplacian convolutions on images as performing something similar to the second derivative, where the most quickly changing areas on the image are what become highlighted in the new, convolved image. From the page on the Laplace Operator this makes a lot of sense. This doesn't make sense to me from the page on the Laplace Transform. My question then, I think, is how are the Laplace Operator and the Laplace Transform related? If I can see, from the definition, that the Laplace Operator is basically doing the second derivative, I would think I should be able to see something similar from the Laplace Transform. But I don't. Am I mistaken in thinking that the Laplace Transform and the Laplace operator are the same thing? How are they related?
They are certainly not the same thing.
You might sometimes see them appear in the same context because transforms of Laplace-Fourier type are immensely useful for analyzing linear differential operators like the Laplacian. But the Fourier transform has better analytic properties, so that's the one you are more likely to see used.
Here's some intuition you might find helpful.
The discrete Laplacian computes the difference between a node's averaged neighbors and the node itself. It's often used in image processing and that gives an easy way to visualize it. The 1D case where the kernel is [1 -2 1] is especially simple:
In an area of constant color the Laplacian is zero. Indeed, even if you have linear variation it remains zero, e.g. in the neighborhood [1 2 3] the Laplacian's value at the center point is
$$1 \cdot 1 + (-2) \cdot 2 + 3 \cdot 1 = 0.$$
But quadratic and higher-order variation excites the Laplacian and results in non-zero values. Thus it's especially useful for detecting 'jumps'. That's why it's the weapon of choice in edge detection. It's often combined with a Gaussian to pre-filter out any small-scale features or noise that might cause spurious edges to be detected.
I should mention that the Laplacian in two dimensions and higher is significantly richer than the one-dimensional case might suggest. For one, not all two-dimensional images with a uniformly zero Laplacian are linear. But qualitatively a lot of the same intuition holds true as to how the Laplacian reacts to variation.
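The 1D case is easy to play with directly. A small demonstration (Python with NumPy assumed) that convolving with the kernel [1, −2, 1] annihilates linear ramps but flags jumps:

```python
import numpy as np

kernel = np.array([1.0, -2.0, 1.0])
ramp = np.arange(10.0)                             # linear variation
step = np.concatenate([np.zeros(5), np.ones(5)])   # a jump in the middle

print(np.convolve(ramp, kernel, mode="valid"))  # all zeros: ramps are invisible
print(np.convolve(step, kernel, mode="valid"))  # a +1/-1 spike at the edge
```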
so when you do a Laplacian convolution, you're not actually doing a Laplacian Transform of the image (similar to how you do a Fourier transform) but instead are convolving with the Laplacian operator? So convolution with the Laplacian operator is different than applying a Laplacian Transformation to the image? – Nick Aug 2 '10 at 12:06
Strictly speaking, the Laplacian is only a convolution operator in the discrete case. But yes, it is absolutely not the same thing as the Laplace transform (which is never called the Laplacian transform, by the way). – Per Vognsen Aug 2 '10 at 12:10
ah, thank you very much. – Nick Aug 2 '10 at 12:15
There is no relation on the basic level between the Laplace operator and Laplace transform. From the point of view of learning about them, put the coincidence of names out of your mind.
so the operator is not actually derived from the equation, and the thing they share in common is that they were discovered/used by Laplace? – Nick Aug 2 '10 at 12:03
What they certainly share in common is that both are named after Laplace. Whether he discovered them is a tougher question. – Michael Hardy Aug 2 '10 at 21:20
https://www.competoid.com/quiz_answers/14-0-12728/Question_answers/ | • In a rare coin collection, there is one gold coin for every three non-gold coins. 10 more gold coins are added to the collection and the ratio of gold coins to non-gold coins would be 1 : 2. Based on the information; the total number of coins in the collection now becomes.
A) 90 B) 80 C) 60 D) 50
A) 90
Let the number of gold coins initially be x
and the number of non-gold coins be y.
According to the question,
3x = y
When 10 more gold coins are added, the total number of gold coins becomes x + 10,
and the number of non-gold coins remains the same at y.
Now, we have $$\Large 2 \left(10+x\right)=y$$
Solving these two equations, we get
x = 20 and y = 60.
Total number of coins in the collection at the end is equal to
x+10+y = 20+10+60 = 90.
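The same system is a one-liner to check symbolically. A minimal sketch (Python with SymPy assumed):

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)   # gold and non-gold coin counts
sol = sp.solve([3 * x - y, 2 * (x + 10) - y], [x, y])
print(sol, sol[x] + 10 + sol[y])          # {x: 20, y: 60} and total 90
```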
Similar Questions
1). If $$\Large \frac{\sqrt{3+x}+\sqrt{3-x}}{\sqrt{3+x}-\sqrt{3-x}}=2$$, then x is equal to
A). $$\Large \frac{5}{12}$$ B). $$\Large \frac{12}{5}$$ C). $$\Large \frac{5}{7}$$ D). $$\Large \frac{7}{5}$$
2). In an examination, a student scores 4 marks for every correct answer and losses 1 mark for every wrong answer. A student attempted all the 200 questions stud and scored 200 marks. Find the number of questions he answered correctly.
A). 82 B). 80 C). 68 D). 60
3). The graphs of ax + by = c, dx + ey = f will be
I. parallel, if the system has no solution.
II. coincident, if the system has finite numbers of solutions.
III. intersecting, if the system has only one solution.
Which of the above statements are correct?
A). Only I and II B). Only ll and Ill C). Only I and III D). I, II and Ill
4). If $$\Large 3^{x+y}=81$$ and $$\Large 81^{x-y}=3$$, then what is the value of x?
A). $$\Large \frac{17}{16}$$ B). $$\Large \frac{17}{8}$$ C). $$\Large \frac{17}{4}$$ D). $$\Large \frac{15}{4}$$
5). Ten chairs and six tables together cost Rs.6200, three chairs and two tables together cost Rs.1900. The cost of 4 chairs and 5 tables is
A). Rs.3000 B). Rs.3300 C). Rs.3500 D). Rs.3800
6). The system of equations 3x + y - 4 = 0 and 6x + 2y - 8 = 0 has
A). a unique solution x = 1, y=1 B). a unique solution x = 0, y = 4 C). no solution D). infinite solutions
7). If x+ y - 7 = 0 and 3x + y -13 = 0, then what is $$\Large 4x^{2} + y^{2} + 4xy$$ equal to?
8). The solution of the equations $$\Large \frac{p}{x}+\frac{q}{y}=m$$ and $$\Large \frac{q}{x}+\frac{p}{y}=n$$ is
A). $$\Large x=\frac{q^{2}-p^{2}}{mp-nq}, y=\frac{p^{2}-q^{2}}{np-mq}$$ B). $$\Large x=\frac{p^{2}-q^{2}}{mp-nq}, y=\frac{q^{2}-p^{2}}{np-mq}$$ C). $$\Large x=\frac{p^{2}-q^{2}}{mp-nq}, y=\frac{p^{2}-q^{2}}{np-mq}$$ D). $$\Large x=\frac{q^{2}-p^{2}}{mp-nq}, y=\frac{q^{2}-p^{2}}{np-mq}$$
9). If $$\Large \frac{3}{x+y}+\frac{2}{x-y}=2$$ and $$\Large \frac{9}{x+y}-\frac{4}{x-y}=1$$, then what is the value of $$\Large \frac{x}{y}$$?
A). $$\Large \frac{3}{2}$$ B). 5 C). $$\Large \frac{2}{3}$$ D). $$\Large \frac{1}{5}$$
10). If $$\Large \frac{a}{b}-\frac{b}{a}=\frac{x}{y}$$ and $$\Large \frac{a}{b}+\frac{b}{a}= x - y$$ , then what is the value of x?
A). $$\Large \frac{a+b}{a}$$ B). $$\Large \frac{a+b}{b}$$ C). $$\Large \frac{a-b}{a}$$ D). None of these
https://math.stackexchange.com/questions/2397024/find-the-coefficient-of-a5b5c5d6-in-bcdacdabdabc7 | Find the coefficient of $a^5b^5c^5d^6$ in $(bcd+acd+abd+abc)^7$
Find the coefficient of $a^5b^5c^5d^6$ in the expansion $(bcd+acd+abd+abc)^7$.
I tried to use the multinomial theorem but failed. I can find the coefficient of $a^5b^5c^5d^6$ in $(a+b+c+d)^{21}$. But I discovered that the product of the terms in the given expansion is $(abcd)^{21}$, and in my expression I have done the same operation and found the same result. Is there any relation between these two expressions? I am sure that it can be done using the multinomial theorem, but how? If you have another method then show me that also.
Notice that if we have:
$$(abc)^i(abd)^j(bcd)^k(acd)^l=a^5b^5c^5d^6 ;$$
then this implies that
$$\left\{ \begin{array}{lcc} i+j+l=5 , & \text{by comparison of the power of} \ \ a , \\ i+j+k=5 , & \text{by comparison of the power of} \ \ b , \\ i+k+l=5 , & \text{by comparison of the power of} \ \ c , \\ j+k+l=6 , & \text{by comparison of the power of} \ \ d . \\ \end{array} \right.$$
The above system of equations has the solution:
$$i=1, \ \ j=2, \ \ k=2, \ \ l=2.$$
So we have the following:
$$(abc)^1(abd)^2(bcd)^2(acd)^2=a^5b^5c^5d^6.$$
So the coefficient is equal to $\dfrac{7!}{1!2!2!2!}=630$.
• It is okay, bro. But why do you distribute $a^5b^5c^5d^6$ into $(abc)^1(abd)^2(bcd)^2(acd)^2$? It can be distributed in another manner too! But it is so simple. I actually failed to distribute $a^5b^5c^5d^6$ in that manner. Thanks. Aug 17 '17 at 16:00 • @Sufaid Saleel, now I have explained more. Does it satisfy you? Aug 17 '17 at 16:13 • Thank you. The total sum is completely cleared. Thanks for your cooperation. Aug 17 '17 at 16:31 The coefficient of $a^5b^5c^5d^6$ in $(bcd+acd+abd+abc)^7$ is the same as the coefficient of $\dfrac{a^5b^5c^5d^6}{(abcd)^7}$ in $\dfrac{(bcd+acd+abd+abc)^7}{(abcd)^7}$, i.e. the coefficient of $\dfrac{1}{a^2b^2c^2d}$ in $\left(\dfrac{1}{a}+\dfrac{1}{b}+\dfrac{1}{c}+\dfrac{1}{d}\right)^7$. Is that better for you?
• It is enough for me. I asked it to increase my own knowledge. But is it a good question?? What do you think?? Aug 17 '17 at 15:54
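Both answers are easy to confirm by brute-force expansion. A short check (Python with SymPy assumed):

```python
import sympy as sp

a, b, c, d = sp.symbols("a b c d")
expr = sp.expand((b*c*d + a*c*d + a*b*d + a*b*c) ** 7)
P = sp.Poly(expr, a, b, c, d)
print(P.coeff_monomial(a**5 * b**5 * c**5 * d**6))  # 630 = 7!/(1! 2! 2! 2!)
```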
https://arxiv.org/abs/1502.02281v2 | math.OC
# Title: A Lyapunov Analysis of FISTA with Local Linear Convergence for Sparse Optimization
Abstract: We conduct a Lyapunov analysis of the Fast Iterative Shrinkage and Thresholding Algorithm (FISTA) and show that the algorithm obtains local linear convergence for the special case of sparse ($\ell_1$-regularized) optimization. We use an appropriate multi-step potential function to determine conditions on the parameters which imply weak convergence of the iterates to a minimizer in a real Hilbert space (strong convergence in $\mathbb{R}^n$). Our results apply to a modified version of the momentum sequence proposed by Beck and Teboulle [1], for which convergence of the iterates is unknown. The Lyapunov analysis also allows us to show that FISTA achieves local linear convergence for sparse optimization problems. We generalize the analysis by Hale, Yin, and Zhang [2], of the Iterative Shrinkage and Thresholding Algorithm (ISTA) to FISTA. We prove finite convergence to the optimal manifold and determine the local linear convergence rate which holds thereafter. Our results show that the classical choice due to Beck and Teboulle and recent choice due to Chambolle and Dossal [3] for the momentum parameter are not optimal for sparse optimization in terms of the local convergence rate.
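For readers who want to see the algorithm being analyzed, the FISTA iteration for the $\ell_1$-regularized least-squares problem $\min \tfrac12\|Ax-b\|^2+\lambda\|x\|_1$ is short. A minimal sketch (Python with NumPy; this follows the classical Beck–Teboulle momentum sequence, not the modified sequence studied in the paper):

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (the "shrinkage" step).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        x_next = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum step
        x, t = x_next, t_next
    return x
```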
Comments: 30 pages
Subjects: Optimization and Control (math.OC); Numerical Analysis (math.NA)
Cite as: arXiv:1502.02281 [math.OC] (or arXiv:1502.02281v2 [math.OC] for this version)
## Submission history
From: Patrick Johnstone [view email]
[v1] Sun, 8 Feb 2015 17:50:30 UTC (68 KB)
[v2] Thu, 19 Feb 2015 17:21:48 UTC (68 KB)
[v3] Thu, 12 Mar 2015 20:59:38 UTC (57 KB)
[v4] Wed, 24 Jun 2015 04:00:54 UTC (60 KB)
[v5] Mon, 23 Jan 2017 16:14:18 UTC (61 KB)
https://brilliant.org/problems/reciprocal-of-logarithm-minus-reciprocal/ | # Reciprocal Of Logarithm Minus Reciprocal
Calculus Level 3
$\large \lim_{a\to 1} \left(\dfrac{1}{\ln a} - \dfrac{1}{a-1}\right)$
Find the value of the closed form of the above limit.
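A quick symbolic check of the limit (Python with SymPy assumed; the answer it returns is 1/2):

```python
import sympy as sp

a = sp.symbols("a", positive=True)
print(sp.limit(1 / sp.log(a) - 1 / (a - 1), a, 1))  # 1/2
```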
https://forum.wilmott.com/viewtopic.php?f=8&t=102438&p=862024&sid=76f70d1a7908c8db693ea268e50e9566 | Serving the Quantitative Finance Community
stilyo
Topic Author
Posts: 169
Joined: January 12th, 2009, 6:31 pm
### Bayes and Coin Toss?
Hi -
Suppose someone tossed 100 coins and then covered them up. They ask you how many heads you expect, and absent other info you say 50. Suppose now that this person uncovers 20 of the 100 coins and shows you 20 heads. Note that they don't randomly uncover 20 coins which happen to be all heads - they purposefully show you 20 heads. So it's as if we have a crystal ball that gives us new info which is, "there are at least 20 heads out of the original 100 coin tosses". Do you change your estimate of how many heads in total and how? I'm having some trouble constructing Bayes' rule for this particular example - thanks for your help!
bearish
Posts: 6448
Joined: February 3rd, 2011, 2:19 pm
### Re: Bayes and Coin Toss?
Well, if your prior is that the coin tosses are iid with probability of a head equal to one half, then the probability of at least 20 heads is in the neighborhood of .9999999995, so you haven't learnt much. So, up to rounding, 50 still seems like a good number.
katastrofa
Posts: 10082
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri
### Re: Bayes and Coin Toss?
Bayes' rule was constructed in the 18th century and still applies. No need to construct a new one.
If the person cherry-picked the 20 heads, you get bearish's answer in the following way:
A - probability of tossing 50 heads exactly
B - probability of tossing at least 20 heads
P(A|B) = P(A&B) / P(B) = P(A) / P(B)
P(A|B) - P(A) = 4.440717e-11 according to an R console I've found on the Internet.
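The same numbers drop out of a few lines of Python (SciPy assumed), matching the R result:

```python
from scipy.stats import binom

pA = binom.pmf(50, 100, 0.5)   # P(exactly 50 heads)
pB = binom.sf(19, 100, 0.5)    # P(at least 20 heads) = 1 - P(X <= 19)
print(pA / pB - pA)            # ~ 4.44e-11, i.e. the update is negligible
```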
https://sciencing.com/calculate-jump-height-acceleration-8771263.html | # How to Calculate the Jump Height From Acceleration
Problems dealing with motion are usually the first that students of physics will encounter. Concepts like time, velocity and acceleration are interrelated by formulas that students can rearrange with the help of algebra to apply to different circumstances.
Students can calculate the height of a jump, for instance, from more than one starting point. The height of the jump can be calculated if the acceleration and either the initial velocity or the total time in the air is known.
Write an expression for time in terms of the change in velocity, using the formula
$v_f = -gt + v_i$
where $v_f$ is final velocity, g is the acceleration due to gravity, t is time, and $v_i$ is initial velocity.
## Time of Flight
Solve the equation for t:
$t = \frac{v_f - v_i}{-g}$
Therefore, the time is equal to the change in velocity divided by the acceleration due to gravity.
## Calculate Time to Reach Highest Point
Calculate the amount of time to reach the highest point of the jump. At the highest point, velocity ($v_f$) is zero. Use 9.8 m/s² for the acceleration due to gravity. For example, if the initial velocity is 1.37 m/s, the time to reach maximum height is:
$t = \frac{0 - 1.37}{-9.8} = 0.14\text{ s}$
## Calculate Initial Velocity from Total Time of Flight
The initial velocity $v_i$ can be calculated from the time to reach the jump height:
$v_i = gt$
For example, if the total time is 0.14 seconds:
$v_i = 9.8 \times 0.14 = 1.37\text{ m/s}$
## Vertical Jump Physics Equation
Calculate the jump height using the formula
$s_f = s_i + v_i t - \frac{1}{2}gt^2$
where $s_f$ is the final position and $s_i$ is the initial position. Since jump height is the difference between the final and initial position,
$h = s_f - s_i,$
simplify the formula to
$h = v_i t - \frac{1}{2}gt^2$
and calculate:
$h = (1.37 \times 0.14) - \frac{1}{2}(9.8 \times 0.14^2) = 0.19 - 0.10 = 0.09\text{ meters}$
#### Tips
• Create your own jump height calculator by programming the jump height formula into your graphing calculator!
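In the spirit of that tip, here is a minimal jump-height calculator as a Python function rather than a graphing-calculator program (the 1.37 m/s test value comes from the example above):

```python
def jump_height(v_i, g=9.8):
    """Maximum height (m) for take-off speed v_i (m/s); equals v_i**2 / (2*g)."""
    t = v_i / g                       # time to the top, where v_f = 0
    return v_i * t - 0.5 * g * t**2

print(jump_height(1.37))  # ~ 0.096 m, matching the worked example up to rounding
```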
https://www.lessonplanet.com/teachers/how-long-will-it-take-to-get-there | # How Long Will It Take To Get There?
Fourth graders estimate the average traveling distance to a city of choice with the help of an internet site. They determine the amount of time it will take to travel there at a speed of 65 miles per hour. Students set up a proportion to solve the problem.
https://hesso.tind.io/record/492 | ## On the numerical solution of the Dirichlet problem for the elliptic sigma-2 equation
Faculty: Economie et Services
School: HEG - Genève
Subject(s): Economie/gestion
Date: 2014
Published in: Fitzgibbon, W. (ed.) et al. Modeling, simulation and optimization for science and technology. Berlin: Springer, 2014, pp. 23-40. (Computational methods in applied sciences, vol. 34)
http://mathhelpforum.com/discrete-math/53678-need-help-proving-mathematical-induction-print.html | # Need help proving!! Mathematical Induction
• October 14th 2008, 12:39 PM
Saint22
Need help proving!! Mathematical Induction
Hey I'm struggling in Discrete Math, could anyone help me solve this proof?
Prove that if A1, A2, . . ., An and B are sets, then (A1 ∩ A2 ∩ . . . ∩ An) U B = (A1 U B) ∩ (A2 U B) ∩ . . . ∩ (An U B).
This comes from the Induction and Recursion chapter of Mathematical Induction.
• October 15th 2008, 06:38 PM
ThePerfectHacker
Quote:
Originally Posted by Saint22
Hey I'm struggling in Discrete Math, could anyone help me solve this proof?
Prove that if A1, A2, . . ., An and B are sets, then (A1 ∩ A2 ∩ . . . ∩ An) U B = (A1 U B) ∩ (A2 U B) ∩ . . . ∩ (An U B).
This comes from the Induction and Recursion chapter of Mathematical Induction.
Prove it by induction for $n=2$ we need to show $(A_1\cap A_2) \cup B = (A_1 \cup B) \cap (A_2\cup B)$.
This case is proven individually.
If it is for $(A_1\cap ... \cap A_n) \cup B = (A_1 \cup B)\cap ... \cap (A_n \cup B)$.
Then if we have $(A_1\cap ... \cap A_n \cap A_{n+1})\cup B = [(A_1\cap ... \cap A_n) \cap A_{n+1}] \cup B$
And this gives, (as in the case $n=2$):
$[(A_1\cap ... \cap A_n) \cup B] \cap (A_{n+1} \cup B)$
Now apply inductive step:
$(A_1\cup B)\cap ... \cap (A_{n+1} \cup B)$
And that completes induction.
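The identity is also easy to spot-check on random sets before proving it. A Python sketch (the universe {0, …, 9} is an arbitrary choice):

```python
import random

def spot_check(n=5, trials=1000):
    U = range(10)
    for _ in range(trials):
        A = [set(random.sample(U, random.randint(1, 9))) for _ in range(n)]
        B = set(random.sample(U, random.randint(1, 9)))
        lhs = set.intersection(*A) | B                    # (A1 n ... n An) u B
        rhs = set.intersection(*(Ai | B for Ai in A))     # (A1 u B) n ... n (An u B)
        assert lhs == rhs
    return True

print(spot_check())   # True: no counterexample found
```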
http://en.wikibooks.org/wiki/Calculus/Implicit_differentiation | Calculus/Implicit Differentiation
Generally, you will encounter functions expressed in explicit form, that is, in the form $y = f(x)$. To find the derivative of y with respect to x, you take the derivative with respect to x of both sides of the equation to get
$\frac{dy}{dx}=\frac{d}{dx}[f(x)]=f'(x)$
But suppose you have a relation of the form $f(x,y(x))=g(x,y(x))$. In this case, it may be inconvenient or even impossible to solve for y as a function of x. A good example is the relation $y^2 + 2yx + 3 = 5x \,$. In this case you can utilize implicit differentiation to find the derivative. To do so, one takes the derivative of both sides of the equation with respect to x and solves for $y'$. That is, form
$\frac{d}{dx}[f(x,y(x))]=\frac{d}{dx}[g(x,y(x))]$
and solve for dy/dx. You need to employ the chain rule whenever you take the derivative of a variable with respect to a different variable. For example,
$\frac{d}{dx} (y^3) = \frac{d}{dy}[y^3]\cdot\frac{dy}{dx}=3y^2 \cdot y' \$
Implicit Differentiation and the Chain Rule
To understand how implicit differentiation works and to use it effectively, it is important to recognize that the key idea is simply the chain rule. First let's recall the chain rule. Suppose we are given two differentiable functions f(x) and g(x) and that we are interested in computing the derivative of the composition f(g(x)). The chain rule states that:
$\frac{d}{dx}\Big(f(g(x))\Big) = f'(g(x))\,g'(x)$
That is, we take the derivative of f as normal, then plug in g, and finally multiply the result by the derivative of g.
Now suppose we want to differentiate a term like $y^2$ with respect to x, where we are thinking of y as a function of x; so for the remainder of this calculation let's write it as y(x) instead of just y. The term $y^2$ is just the composition of $f(x) = x^2$ and $y(x)$. That is, $f(y(x)) = y^2(x)$. Recalling that $f'(x) = 2x$, the chain rule states that:
$\frac{d}{dx}\Big(f(y(x))\Big)=f'(y(x))\,y'(x)=2y(x)y'(x)$
Of course it is customary to think of y as being a function of x without always writing y(x), so this calculation usually is just written as
$\frac{d}{dx}y^2=2yy'.$
Don't be confused by the fact that we don't yet know what y′ is: it is some function, and when we differentiate two equal quantities it often becomes possible to solve explicitly for y′ (as we will see in the examples below). This makes implicit differentiation a very powerful technique for taking derivatives.
Explicit Differentiation
For example, suppose we are interested in the derivative of y with respect to x where x and y are related by the equation
$x^2 + y^2 = 1\,$
This equation represents a circle of radius 1 centered on the origin. Note that y is not a function of x since it fails the vertical line test ($y=\pm1$ when $x=0$, for example).
To find y', first we can separate variables to get
$y^2 = 1 - x^2\,$
Taking the square root of both sides we get two separate functions for y:
$y = \pm \sqrt{1-x^2}\,$
We can rewrite this as a fractional power:
$y = \pm (1-x^2)^{\frac{1}{2}}\,$
Using the chain rule we get,
$y' = \pm\frac{1}{2}(1-x^2)^{-1/2}\cdot(-2x) = \pm\frac{x}{(1-x^2)^{1/2}}$
And simplifying by substituting y back into this equation gives
$y' = -\frac{x}{y}$
Implicit Differentiation
Using the same equation
$x^2 + y^2 = 1\,$
First, differentiate with respect to x on both sides of the equation:
$\frac{d}{dx}[x^2 + y^2] = \frac{d}{dx}[1]$
$\frac{d}{dx}[x^2]+\frac{d}{dx}[y^2] = 0$
To differentiate the second term on the left hand side of the equation (call it $f(y(x))=y^2$), use the chain rule:
$\frac{df}{dx}=\frac{df}{dy}\cdot\frac{dy}{dx}=2y\cdot y'$
So the equation becomes
$2x+2yy'=0$
Separate the variables:
$2yy' = -2x\,$
Divide both sides by $2y\,$, and simplify to get the same result as above:
$y' = -\frac{2x}{2y}$
$y' = -\frac{x}{y}$
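As a quick cross-check of this result (an added sketch, not part of the original page), SymPy can compute implicit derivatives directly via its idiff helper:
from sympy import symbols, idiff
x, y = symbols("x y")
circle = x**2 + y**2 - 1    # the relation x^2 + y^2 = 1, written as f(x, y) = 0
dydx = idiff(circle, y, x)  # implicit derivative dy/dx
print(dydx)                 # -x/y, matching the result above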
Uses
Implicit differentiation is useful when differentiating an equation that cannot be explicitly differentiated because it is impossible to isolate variables.
For example, consider the equation,
$x^2 + xy + y^2 = 16\,$
Differentiate both sides of the equation (remember to use the product rule on the term xy):
$2x + y + xy' + 2yy' = 0\,$
Isolate terms with y':
$xy' + 2yy' = -2x - y\,$
Factor out a y' and divide both sides by the other term:
$y' = \frac{-2x-y}{x+2y}$
Example
$xy \,=1$
can be solved as:
$y=\frac{1}{x}$
then differentiated:
$\frac{dy}{dx}=-\frac{1}{x^2}$
However, using implicit differentiation it can also be differentiated like this:
$\frac{d}{dx}[xy]=\frac{d}{dx}[1]$
use the product rule:
$x\frac{dy}{dx}+y=0$
solve for $\frac{dy}{dx}$:
$\frac{dy}{dx}=-\frac{y}{x}$
Note that, if we substitute $y=\frac{1}{x}$ into $\frac{dy}{dx}=-\frac{y}{x}$, we end up with $\frac{dy}{dx}=-\frac{1}{x^2}$ again.
Application: inverse trigonometric functions
Arcsine, arccosine, arctangent. These are the functions that allow you to determine the angle given the sine, cosine, or tangent of that angle.
$y=\arcsin(x)$
To find dy/dx we first need to break this down into a form we can work with:
$x = \sin(y)$
Then we can take the derivative of that:
$1 = \cos(y) \cdot \frac{dy}{dx}$
...and solve for dy / dx:
(Figure: y = arcsin(x) gives us this unit triangle.)
$\frac{dy}{dx} = \frac{1}{\cos(y)}$
At this point we need to go back to the unit triangle. Since y is the angle and the opposite side is $\sin(y)$ (which is equal to x), the adjacent side is $\cos(y)$ (which is equal to $\sqrt{1-x^2}$, by the Pythagorean theorem), and the hypotenuse is 1. Since we have determined the value of $\cos(y)$ based on the unit triangle, we can substitute it back into the above equation and get:
Derivative of the Arcsine $\frac{d}{dx} \arcsin(x) = \frac{1}{\sqrt{1-x^2}}\,\!$
We can use an identical procedure for the arccosine and arctangent:
Derivative of the Arccosine $\frac{d}{dx} \arccos(x) = \frac{-1}{\sqrt{1-x^2}}\,\!$ Derivative of the Arctangent $\frac{d}{dx} \arctan(x) = \frac{1}{1+x^2}\,\!$
http://www.iam.fmph.uniba.sk/institute/forum/viewtopic.php?p=73& | ## Seminar 30.10.2014: Daniel Sevcovic
Seminar on Qualitative Theory of Differential Equations
organized by P.Quittner, M.Fila and R.Kollar
Moderator: sevcovic
### Seminar 30.10.2014: Daniel Sevcovic
Seminar on Qualitative Theory of Differential Equations
Thursday, 30 October 2014 at 14:00, lecture room M-223
Daniel Ševčovič (KAMŠ):
Tangential redistribution for mean-curvature driven evolution of surfaces and curves
Abstract:
The main goal of this talk is to investigate tangential redistribution of points on evolving immersed manifolds.
More precisely, we will analyze motion of surfaces or curves evolved in the normal direction by the curvature.
Although the tangential velocity has no impact on the shape of evolved manifolds, it is an important issue
in numerical approximation of any evolution model, since the quality of the mesh has a significant impact on the
result of the computation. We analyze the volume-oriented and length-oriented tangential redistribution methods.
We apply the proposed techniques to the particular case of mean curvature evolution of surfaces in $\mathbb{R}^3$.
We explain the numerical approximation of the model and present several experiments illustrating the performance
of the redistribution techniques. This is joint work based on the paper with M.Remesikova, K.Mikula
and P.Sarkoci: Manifold evolution with tangential redistribution of points, SIAM J. Sci. Comput. 36-4 (2014).
quittner
http://tex.stackexchange.com/questions/26057/squeezing-space-around-algorithm-construct | # Squeezing space around algorithm construct
I am using algorithm2e for developing algorithm constructs in my LaTeX document. However, if I use [!t] or [!b] to place the algorithm construct, it sometimes drifts down to the last page instead of staying on the current page. I therefore wanted to wrap it in a figure construct, as follows, so that I could squeeze the space after the algorithm with something like \vspace{-0.5cm} between \end{algorithm} and \end{figure}. But it gives the error: ! LaTeX Error: Not in outer par mode. Please help me to solve this.
\begin{figure}[!t]
\begin{algorithm}
....
...
\end{algorithm}
\end{figure}
You cannot wrap an algorithm within a figure environment, since both are floats. Floats inside floats is not allowed, producing your error. It's not the use of \vspace{-0.5cm} that produces the error. – Werner Aug 18 '11 at 19:46
## 2 Answers
@kkp: The algorithm environment provided by the algorithm2e package is a "floating" environment, just like table and figure floating environments are. Hence, it can't be wrapped inside another floating group. What you are encountering -- the fact that you can't get LaTeX to place the floats anywhere close to where you want them to go -- is a commonly shared frustration of many LaTeX users. My main suggestion is to check if the algorithm floats in question occupy well more than half a page. If that's the case, you may want to change some or all of the parameters \topfraction, \bottomfraction, \textfraction, and \floatpagefraction. In many of my documents, I have the following commands in the preamble:
\renewcommand\topfraction{0.85}
\renewcommand\bottomfraction{0.85}
\renewcommand\textfraction{0.1}
\renewcommand\floatpagefraction{0.85}
With these commands, you would instruct LaTeX to allow a float -- really, a group of floats -- to occupy up to 85% of a page that also contains some text. (If a float is larger than that, it'll end up on a page by itself.)
Another suggestion I'd make is not to specify the placement options [t!] and [h!], for if LaTeX cannot satisfy this demand immediately, somewhat perversely (and counter-intuitively) the float, and all subsequent floats of the same type (figure, table, or algorithm), will be pushed back all the way to the end of the document rather than just to the next suitable page.
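As for the original nesting error itself, a minimal fix (a sketch; it relies on algorithm2e's H placement specifier, which typesets the algorithm exactly where it appears instead of floating it) is to drop the figure wrapper entirely:
\begin{algorithm}[H]
  ... % algorithm body
\end{algorithm}
\vspace{-0.5cm} % legal here, since we are no longer inside a float
With H the construct is no longer a float, so the vertical space after it can be squeezed directly.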
Addendum: Discussion of how to reduce the space between text and floats:
To change these amounts of space, you could add the following instructions (or something similar) to your document's preamble:
\setlength\floatsep{1.25\baselineskip plus 3pt minus 2pt}
\setlength\textfloatsep{1.25\baselineskip plus 3pt minus 2pt}
\setlength\intextsep{1.25\baselineskip plus 3pt minus 2 pt}
The first length governs the separation of two adjacent floats, the second sets the separation between a float that's at the top (or bottom) of a page and the text below (above) it, and the third sets the separation between a float that's in the middle of a page and the text above and below it. As you can see from this example, I've set all three lengths to the same ("rubber") value. Unless you're really really pressed for space, I wouldn't reduce the lengths even further.
LaTeX places floats on pages based on the availability for that specific page. These availabilities are defined in terms of lengths or ratios.
In the documentation of the layouts package it displays the page-related quantities associated with float placement (see section 6 Float layouts):
Therefore, float placement is influenced by
• \topfraction (default is 0.699)
• \topnumber (default is 2)
• \textfraction (default is 0.199)
• \bottomfraction (default is 0.300)
• \bottomnumber (default is 1)
• \totalnumber (default is 3)
Changing these should motivate LaTeX to increase the number of floats in a specific region (like t or b), for example. Redefinition of \...number is done via \setcounter, while \...fraction is modified using \renewcommand. See section 6.3 Changing the float layout in your document for more details on how this can be done. Modifying these settings may be very document-specific.
More layout options regarding the space between page components can also be modified. Here's a graphic from the same documentation displaying the important ones:
Modifying these lengths via \setlength adjusts the layout according to your preference.
https://planetmath.org/Projection | # projection
A linear transformation $P:V\rightarrow V$ of a vector space $V$ is called a projection if it acts like the identity on its image. This condition can be more succinctly expressed by the equation
$P^{2}=P.$ (1)
###### Proposition 1
If $P:V\rightarrow V$ is a projection, then its image and the kernel are complementary subspaces, namely
$V=\ker P\oplus\mathop{\mathrm{img}}\nolimits P.$ (2)
Proof. Suppose that $P$ is a projection. Let $v\in V$ be given, and set
$u=v-Pv.$
The projection condition (1) then implies that $u\in\ker P$, and we can write $v$ as the sum of an image vector and a kernel vector:
$v=u+Pv.$
This decomposition is unique, because the intersection of the image and the kernel is the trivial subspace. Indeed, suppose that $v\in V$ is in both the image and the kernel of $P$. Then, $Pv=v$ and $Pv=0$, and hence $v=0$. QED
Conversely, every direct sum decomposition
$V=V_{1}\oplus V_{2}$
corresponds to a projection $P:V\rightarrow V$ defined by
$Pv=\begin{cases}v&v\in V_{1}\\ 0&v\in V_{2}\end{cases}$
Specializing somewhat, suppose that the ground field is $\mathbb{R}$ or $\mathbb{C}$ and that $V$ is equipped with a positive-definite inner product. In this setting we call an endomorphism $P:V\rightarrow V$ an orthogonal projection if it is self-dual
$P^{\displaystyle\star}=P,$
in addition to satisfying the projection condition (1).
###### Proposition 2
The kernel and image of an orthogonal projection are orthogonal subspaces.
Proof. Let $u\in\ker P$ and $v\in\mathop{\mathrm{img}}\nolimits P$ be given. Since $P$ is self-dual we have
$0=\langle Pu,v\rangle=\langle u,Pv\rangle=\langle u,v\rangle.$
QED
Thus we see that an orthogonal projection $P$ projects a vector $v\in V$ onto $Pv$ in an orthogonal fashion, i.e.
$\langle v-Pv,u\rangle=0$
for all $u\in\mathop{\mathrm{img}}\nolimits P$.
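As a numerical illustration (an added sketch, not part of the original entry), one can build the standard orthogonal projector onto the column space of a full-column-rank matrix $A$, namely $P=A(A^{T}A)^{-1}A^{T}$, and check the defining properties directly:
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 2))            # full column rank (almost surely)
P = A @ np.linalg.inv(A.T @ A) @ A.T   # orthogonal projector onto col(A)

print(np.allclose(P @ P, P))           # idempotent: P^2 = P
print(np.allclose(P.T, P))             # self-dual (symmetric), so orthogonal

v = rng.normal(size=5)
u = P @ rng.normal(size=5)             # an arbitrary vector in img(P)
print(np.isclose((v - P @ v) @ u, 0))  # v - Pv is orthogonal to img(P)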
http://openstudy.com/updates/4ec67786e4b0306379b2004b
## xEnOnn: I suddenly got very confused with something. Suppose I have a parametric equation like this: $\begin{bmatrix} x\\ y \end{bmatrix}=\begin{bmatrix}\sin( t)\\ \cos(t) \end{bmatrix}$ Why is the tangent vector of this function simply just the derivative of the x and y like this: $tangent \ vector=\begin{bmatrix} x'\\ y' \end{bmatrix}=\frac{\partial }{\partial t} \begin{bmatrix}\sin( t)\\ \cos(t) \end{bmatrix}= \begin{bmatrix}\cos( t)\\ -\sin(t) \end{bmatrix}$ Taking the derivative just gives me the gradient of the equation. It is just the gradient and not the tangent line yet, isn't it?
phi: I would not call it a tangent line, i.e. a line that is tangent to the curve at the point. It is the tangent direction vector. It points in the correct direction, but it's not necessarily tangent to the curve.
xEnOnn: oh yea... you are right, it is the tangent direction vector. But what is the rationale behind the gradient simply being the direction? The gradient is just the rate of change, so how does it give that direction?
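A short note appended here (it is not from the original thread): writing $\mathbf r(t)=\begin{bmatrix}\sin(t)\\ \cos(t)\end{bmatrix}$, the componentwise derivative is the limit of difference quotients, $\mathbf r'(t)=\lim_{h\to 0}\frac{\mathbf r(t+h)-\mathbf r(t)}{h}$. Each quotient points along a chord from $\mathbf r(t)$ to $\mathbf r(t+h)$, and as $h\to 0$ these chord directions converge to the tangent direction. So the derivative with respect to the parameter is a velocity vector, and velocity is tangent to the path by construction; the "rate of change" of position and the tangent direction are the same object here.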
https://physics.stackexchange.com/questions/122682/renormalizability-of-standard-model?noredirect=1 | # Renormalizability of standard model
I wonder what precisely is meant by the renormalizability of the Standard Model. I can imagine two possibilities:
1. The renormalizability of all of the interactions described by the Lagrangian before spontaneous symmetry breaking (SSB) by the nonzero vacuum expectation value (VEV) of the Higgs field.
2. The renormalizability of the Lagrangian obtained from the initial one after SSB, expressed in terms of suitable new fields (which have a direct physical interpretation, contrary to the fields appearing in the initial Lagrangian).
It seems that in case (2) we obtain only an effective (nonrenormalizable) theory, and this precisely was the reason to introduce the mechanism of generating mass by a nonzero VEV of the Higgs field. The original Lagrangian (case (1)) contains only power-counting renormalizable vertices, so if there are no anomalies then the SM before SSB is renormalizable. However, in physical predictions (the actual computations being performed), as far as I know, the Lagrangian after SSB is used. Does it require an infinite number of counterterms (is it an effective theory)?
• If any model is said to be renormalizable, it entails all infinities may be absorbed by a finite number of counter-terms, expressing all quantities in terms of the renormalized, physical parameters. In addition, the Standard Model should indeed be viewed as an effective field theory. – JamalS Jul 1 '14 at 20:29
• @JamalS So to make the prediction of SM finite one needs infinite number of counterterms? – user72829 Jul 1 '14 at 20:33
• No, if you read my post, I specifically said a finite number of counter-terms. – JamalS Jul 1 '14 at 20:58
• By the way, have a look at: physics.stackexchange.com/q/4184 – JamalS Jul 1 '14 at 21:04
• Visit einstein-schrodinger.com/Standard_Model.pdf for the full SM Lagrangian, with detailed explanations of each part and conventions. See also Prof. Wise's lectures on the SM available on the Perimeter Institute site. – JamalS Jul 2 '14 at 11:35
Consider (schematically) the Higgs sector of the Lagrangian,
$${\cal L} = \mu ^2 \left| \phi \right| ^2 - \lambda \left| \phi \right| ^4 - \phi \psi _i \psi _j$$
where $\psi$ denotes the SM fields which have Yukawa couplings (I'm being a bit sloppy here about keeping only terms that are actually SU(2) invariant). SSB implies shifting the Higgs to its vacuum expectation value, which is at some value $v$: $$\left( \begin{array}{c} \phi _1 + i \phi _2 \\ \phi _3 + i \phi _4 \end{array} \right) \rightarrow \left( \begin{array}{c} \phi _1 + i \phi _2 + v \\ \phi _3 + i \phi _4 \end{array} \right)$$ This doesn't change the dimension of the Higgs field, since $v$ is still of mass dimension $1$, and so no term containing the Higgs changes dimension after SSB. Every term will still be of mass dimension at most $4$. Therefore, renormalizability holds equally well before or after SSB.
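As a worked example of the power counting behind that conclusion (added here for clarity; this is standard bookkeeping, not part of the original answer): in four spacetime dimensions $[\phi]=1$ and $[\psi]=3/2$, and a coupling multiplying an operator of mass dimension $\Delta$ itself has dimension $4-\Delta$, with power-counting renormalizability requiring $\Delta\le 4$. The quartic term $\lambda|\phi|^4$ has $\Delta=4$, so $[\lambda]=0$; the Yukawa term $\phi\psi_i\psi_j$ has $\Delta=1+\tfrac{3}{2}+\tfrac{3}{2}=4$. After the shift $\phi\to\phi+v$, the quartic produces, e.g., a $\lambda v\,\phi^3$ term whose coefficient has dimension $1\ge 0$, which is still renormalizable.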
https://www.deeplearningpatterns.com/doku.php?id=pruning | Name Pruning
Intent
Motivation
Structure
<Diagram>
Discussion
Known Uses
Related Patterns
<Diagram>
References
http://openreview.net/pdf?id=SkC_7v5gx THE POWER OF SPARSITY IN CONVOLUTIONAL NEURAL NETWORKS
A surprisingly effective approach to trade accuracy for size and speed is to simply reduce the number of channels in each convolutional layer by a fixed fraction and retrain the network. In many cases this leads to significantly smaller networks with only minimal changes to accuracy. In this paper, we take a step further by empirically examining a strategy for deactivating connections between filters in convolutional layers in a way that allows us to harvest savings both in run-time and memory for many network architectures.
https://arxiv.org/abs/1701.04465v1 The Incredible Shrinking Neural Network: New Perspectives on Learning Representations Through The Lens of Pruning
We also observed strong evidence for the hypotheses of Mozer & Smolensky (1989a) regarding the "dualist" nature of hidden units, i.e. that learned representations are divided between units which either participate in the output approximation or learn to cancel each other's influence.
https://arxiv.org/abs/1704.05119 Exploring Sparsity in Recurrent Neural Networks
We propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network. The network size is reduced by 8x and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still reducing the total number of parameters significantly. Pruning RNNs reduces the size of the model and can also help achieve significant inference time speed-up using sparse matrix multiply. Benchmarks show that using our technique model size can be reduced by 90% and speed-up is around 2x to 7x.
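To make the simplest variant above concrete, here is a minimal, framework-free sketch of magnitude pruning to a target sparsity (the function name and tie-breaking rule are my own, not taken from any of the cited papers):
import numpy as np

def magnitude_prune(weights, sparsity):
    # Zero out the smallest-magnitude fraction `sparsity` of the weights.
    k = int(sparsity * weights.size)
    if k == 0:
        return weights, np.ones(weights.shape, dtype=bool)
    # the k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold   # False marks a pruned connection
    return weights * mask, mask

w = np.random.randn(256, 128)
w_sparse, mask = magnitude_prune(w, sparsity=0.9)
print(1.0 - mask.mean())                 # realized sparsity, roughly 0.9
In the gradual-pruning setups described above, the mask would be recomputed (or kept fixed) and reapplied after each optimizer step during training.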
https://arxiv.org/abs/1810.04622v1 Pruning neural networks: is it time to nip it in the bud?
First, when time-constrained, it is better to train a simple, smaller network from scratch than prune a large network. Second, it is the architectures obtained through the pruning process — not the learnt weights — that prove valuable. Such architectures are powerful when trained from scratch. Furthermore, these architectures are easy to approximate without any further pruning: we can prune once and obtain a family of new, scalable network architectures for different memory requirements.
https://www.ias.ac.in/listing/bibliography/boms/S._N._Jha | • S N Jha
Articles written in Bulletin of Materials Science
• Effect of residual elements on high performance nickel base superalloys for gas turbines and strategies for manufacture
The need for better gas turbine operating efficiency and reliability has resulted in a tightening of specification and acceptance standards. It has been realized that some elements, even at trace level, can have a disastrous effect on high-temperature properties. The present paper highlights the adverse effect of tramp elements and strategies that should be adopted to produce high purity superalloys.
• X-ray absorption spectroscopy of PbMoO4 single crystals
X-ray absorption spectra of PbMoO4 (LMO) crystals have been investigated for the first time in the literature. The measurements have been carried out at the Mo absorption edge at the dispersive EXAFS beamline (BL-8) of the INDUS-2 Synchrotron facility at Indore, India. The optics of the beamline was set to obtain a band of 2000 eV at 20,000 eV and the channels of the CCD detector were calibrated by recording the absorption edges of standard Mo and Nb foils in the same setting. The absorption spectra have been measured for three LMO samples prepared under different conditions, viz.
(i) grown in air from a stoichiometric starting charge,
(ii) grown in argon from a stoichiometric starting charge, and
(iii) grown in air from a PbO-rich starting charge.
The results have been explained on the basis of the defect structure analysed in LMO crystals prepared under different conditions. The Mo absorption edge is significantly influenced by the deviations in crystal stoichiometry.
• Optical and X-ray photoelectron spectroscopy of PbGeO3 and Pb5Ge3O11 single crystals
Pb5Ge3O11 crystals are found to exhibit pale yellow colouration while PbGeO3 are colourless. X-ray photoelectron spectroscopy (XPS) measurements show lead deficiency in both the crystals. The results also reveal a stronger ionic character for PbGeO3 as compared to Pb5Ge3O11 crystal. The binding energy of Ge3𝑑 core level in the case of Pb5Ge3O11 crystal is found to be smaller than the binding energy of germanium oxide, thereby indicating the incomplete oxidation of Ge ions in the crystal lattice. On gamma ray irradiation, the transmission of both the crystals is observed to deteriorate uniformly over the entire wavelength range, which has been attributed to the oxidation of some of the lattice Pb ions. On gamma irradiation the changes observed in O1𝑠 core level energies for both the crystals are seen to be consistent with the changes noted in the Pb4𝑓7/2 and Ge3𝑑 spectra. Interestingly, the results reveal oxidation of surface Ge atoms with atmospheric oxygen under gamma irradiation.
• EXAFS investigations on PbMoO4 single crystals grown under different conditions
Extended X-ray absorption fine structure (EXAFS) measurements on PbMoO4 (LMO) crystals have been performed at the recently-commissioned dispersive EXAFS beamline (BL-8) of the INDUS-2 Synchrotron facility at Indore, India. The LMO samples were prepared under three different conditions, viz.
(i) grown from a stoichiometric starting charge in air ambient,
(ii) grown from a stoichiometric starting charge in argon ambient, and
(iii) grown from a PbO-rich starting charge in air ambient.
The EXAFS data obtained at both Pb 𝐿3 and Mo K edges of LMO have been analysed to determine Pb–O, Pb–Mo and Mo–O bond lengths in the crystals. The information thus obtained has been used to examine the microscopic defect structures in crystals grown under different conditions.
• Chemical shift of Mn and Cr K-edges in X-ray absorption spectroscopy with synchrotron radiation
Mn and Cr K X-ray absorption edges were measured in various compounds containing Mn in Mn2+, Mn3+ and Mn4+ oxidation states and Cr in Cr3+ and Cr6+ oxidation states. A few compounds possess tetrahedral coordination in the first shell surrounding the cation, while others possess octahedral coordination. Measurements have been carried out at the energy dispersive EXAFS beamline at INDUS-2 Synchrotron Radiation Source at Raja Ramanna Centre for Advanced Technology, Indore. Energy shifts of ∼8–16 eV were observed for the Mn K edge in the Mn-compounds while a shift of 13–20 eV was observed for the Cr K edge in Cr-compounds compared to values in elemental Mn and Cr, respectively. The different chemical shifts observed for compounds having the same oxidation state of the cation but different anions or ligands show the effect of different chemical environments surrounding the cations in determining their X-ray absorption edges in the above compounds. The above chemical effect has been quantitatively described by determining the effective charges on Mn and Cr cations in the above compounds.
• Chemical shift of U L3 edges in different uranium compounds obtained by X-ray absorption spectroscopy with synchrotron radiation
Uranium L3 X-ray absorption edge was measured in various compounds containing uranium in U4+, U5+ and U6+ oxidation states. The measurements have been carried out at the Energy Dispersive EXAFS beamline (BL-08) at INDUS-2 synchrotron radiation source at RRCAT, Indore. Energy shifts of ∼ 2–3 eV were observed for U L3 edge in the U-compounds compared to their value in elemental U. The different chemical shifts observed for the compounds having the same oxidation state of the cation but different anions or ligands show the effect of different chemical environments surrounding the cations in determining their X-ray absorption edges in the above compounds. The above chemical effect has been quantitatively described by determining the effective charges on U cation in the above compounds.
http://mathhelpforum.com/advanced-algebra/130917-derivative-matrix-element-print.html | # Derivative of matrix element
• Feb 26th 2010, 02:03 PM
paolopiace
Derivative of matrix element
The derivative of a matrix element with respect to the matrix:
$\frac{d\ x_{ij}}{dX}$
Is there any equation or formula for it?
I could not find it on the Web. If you know any book treating it, I would appreciate the title.
Thanks!
• Feb 27th 2010, 07:22 AM
HallsofIvy
Quote:
Originally Posted by paolopiace
The derivative of a matrix element with respect to the matrix:
$\frac{d\ x_{ij}}{dX}$
Is there any equation or formula for it?
I could not find it on the Web. If you know any book treating it, I would appreciate the title.
Thanks!
Derivatives only make sense for functions, so this matrix and matrix element must involve one or more variables. Let's say that each element, and so the matrix, is a function of t. Then $\frac{dx_{ij}}{dt}$ is the derivative of the function $x_{ij}(t)$ and $\frac{dX}{dt}$ is the matrix having those derivatives as elements.
By the chain rule, $\frac{dx_{ij}}{dX}= \frac{dx_{ij}}{dt}\frac{dt}{dX}= \frac{dx_{ij}}{dt}\left(\frac{dX}{dt}\right)^{-1}$.
If that inverse does not exist, the derivative does not exist.
• Feb 27th 2010, 09:04 AM
paolopiace
Quote:
Originally Posted by HallsofIvy
Derivatives only make sense for functions, so this matrix and matrix element must involve one or more variables. Let's say that each element, and so the matrix, is a function of t. Then $\frac{dx_{ij}}{dt}$ is the derivative of the function $x_{ij}(t)$ and $\frac{dX}{dt}$ is the matrix having those derivatives as elements.
By the chain rule, $\frac{dx_{ij}}{dX}= \frac{dx_{ij}}{dt}\frac{dt}{dX}= \frac{dx_{ij}}{dt}\left(\frac{dX}{dt}\right)^{-1}$.
If that inverse does not exist, the derivative does not exist.
HallsofIvy,
There is no function. In $\frac{d x_{ij}}{d X}$ the matrix X is the variable in a quadratic form. To me, it seems like doing $\frac{d}{d x} x = 1$ in one dimension.
Anyway, I really reach the point where I have $\frac{d x_{ij}}{d X}$.
Does $\frac{d x_{ij}}{d X}$ = 1 make sense? Should I post the whole equation?
Thanks and Regards.
• Feb 27th 2010, 09:39 AM
Opalg
I agree with HallsofIvy. It is not orthodox mathematics to differentiate with respect to a matrix. Nevertheless, some people have tried to formulate such a concept, and you may find this Wikipedia page informative.
According to that page, if f(X) is a scalar-valued function of an n×m matrix X then the derivative $\frac{df}{dX}$ is defined to be the m×n matrix whose (i,j)-element is $\frac{\partial f}{\partial x_{ji}}$. In particular, if $f(X) = x_{ij}$ then $\frac{dx_{ij}}{dX}$ would be a matrix with a 1 in the (j,i)-position and zeros elsewhere.
But notice that much of the material on that Wikipedia page is disputed. I have no idea how reliable or useful this whole concept is.
• Feb 27th 2010, 10:05 AM
paolopiace
Oplag, Thanks.
I see it's better if I post a shorter version of the whole function.
I need to obtain the analytic formula of the following derivative:
$\frac{d}{d\Sigma}\left[ b^T \Sigma^{-1} b \right]$
where the dxd matrix Sigma is positive definite and decomposed as $\Sigma = AA^T$
$b = \frac{1}{2}\Sigma_{ii}$ is the d-dimensional vector composed of the diagonal of the matrix Sigma.
Although not much relevant, $\Sigma_{ii}= a_i^T a_i$ where ai is the i-th row of A.
Thanks for any help.
P.S. Actually, like $\frac{d}{dx}x = 1$ it could be that $\frac{d}{dX}X = I$. Thus, $\frac{d}{dX}x_{ij} = 1$ when i=j. Zero when i<>j.
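Appended note: the scalar-by-matrix convention above can be sanity-checked numerically. For fixed $b$ and invertible $X$, the Matrix Cookbook identity gives $\frac{\partial}{\partial X}\, b^T X^{-1} b = -X^{-T} b\, b^T X^{-T}$ (so $-X^{-1}bb^TX^{-1}$ for symmetric $X$); here is a finite-difference check (variable names are mine):
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
X = A @ A.T + 4 * np.eye(4)               # symmetric positive definite
b = rng.normal(size=4)

f = lambda M: b @ np.linalg.solve(M, b)   # f(X) = b^T X^{-1} b

u = np.linalg.solve(X, b)                 # X^{-1} b
grad = -np.outer(u, u)                    # analytic gradient for symmetric X

num, eps = np.zeros_like(X), 1e-6
for i in range(4):
    for j in range(4):
        E = np.zeros_like(X); E[i, j] = eps
        num[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

print(np.max(np.abs(num - grad)))         # tiny, ~1e-9
Note that this treats $b$ as fixed; in the original problem $b$ is itself built from the diagonal of $\Sigma$, which adds an extra term via the chain rule.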
https://chemistry.stackexchange.com/questions/33408/empirical-and-molecular-formula | Empirical and molecular formula
I was doing a homework assignment and I got the following question wrong: I was supposed to multiply the empirical formula $\ce{C6H10S2O}$ by 3 to get the molecular formula. I put $\ce{(C6H10S2O3)3}$ and the teacher said it should be $\ce{C18H30S6O3}$. My question is: isn't what I put the same thing? Please note that this is Gen Chem I.
$\ce{(C6H10S2O3)3}$ implies that there are $3 ~ \ce{C6H10S2O3}$ molecules bonded together instead of $1~ \ce{C18H30S6O3}$ molecule. Although they describe the same atom counts, the correct notation is your teacher's way because it gives the molecular formula of the single molecule. Also, if you were using that notation you should have put $\ce{(C6H10S2O)3}$, because $\ce{(C6H10S2O3)3}$ has 6 too many oxygens.
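A quick worked multiplication, added for clarity: going from the empirical formula to the molecular formula multiplies every subscript by 3, so $\ce{(C6H10S2O)} \times 3$ gives $\ce{C18H30S6O3}$, i.e. $6\times3=18$ carbons, $10\times3=30$ hydrogens, $2\times3=6$ sulfurs and $1\times3=3$ oxygens.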
http://www.conferencedequebec.org/2016/07/26/to-assess-correlation-between-multiplanar-dynamic-contrast-enhanced-us-blood-flow-measurements/ | # To assess correlation between multiplanar dynamic contrast-enhanced US blood flow measurements
To assess the correlation between multiplanar dynamic contrast-enhanced US blood flow measurements and radiolabeled microsphere blood flow measurements, five groups of 6 rabbits underwent unilateral testicular torsion of 0, 180, 360, 540 or 720 degrees, and combined transverse/longitudinal US ratios as a function of torsion degree were compared to radiolabeled microsphere ratios using Pearson's correlation coefficient ρ. There was high correlation between the two sets of ratios (ρ ≥ 0.88, p ≤ 0.05) except for the transverse US ratio in the immediate postoperative period (ρ = 0.79, p = 0.11). These results hold promise for future clinical applications.
[…] (n = 6), 180 (n = 6), 360 (n = 6), 540 (n = 6) or 720° (n = 6) of spermatic cord torsion, after which the postoperative US studies were performed. In the 720° torsion group, torsion of the right testis was performed in two rabbits and torsion of the left testis in four rabbits. In all of the remaining experimental groups, torsion of the right testis was performed in three rabbits and torsion of the left testis in three rabbits. The intra-aortic catheter was always placed through the groin opposite the torsive testis. In the sham surgery group, the intra-aortic catheter was placed through the right groin in two rabbits and through the left groin in four rabbits.
Contrast Agent Administration. The US contrast agent Definity® (Lantheus Medical Imaging Inc., Billerica, MA) was used in the study. Definity® consists of perflutren lipid microspheres made of octafluoropropane encapsulated in an outer lipid shell. The mean diameter of the microspheres ranges from 1.1 to 3.3 μm. […] β is proportional to regional mean flow and A is proportional to blood volume (Wei et al. 1998). Although this model is incomplete (Hudson et al. 2009), it has been shown to yield reasonable results for measuring blood flow (Kogan et al. 2011; Thierman et al. 2006). A drawback of this empirical approach is that it necessitates calibration between subjects. In practice this is problematic since, in addition to non-linear bubble oscillation, pixel intensity can vary with anatomy, acoustic beam profile, system settings and other factors. The analysis used in the current study was designed to at least partially offset some of this subjectivity.
It is first assumed that background signal can be subtracted such that (1) holds and S is zero at time t = 0. We next examine modification of (1) under the assumption that the remaining unknown factors are time independent and can be represented by a factor independent of blood flow […] identical to the VOI. It is further assumed that the two volumes functioning normally would have similar signal response (i.e. blood flow in the VOI and control are ideally identical). Noting that the time derivative of (3) is proportional to αAβ, the ratio
$Q = \frac{dS/dt}{dS_0/dt} \qquad (4)$
yields a value proportional to blood flow. Time-varying values assigned in the US images were assumed to be solely a result of bubble response, i.e. that the tissue response to the incident US beam was linear. For each time step, pixel values were summed and then divided by the total number of analyzed pixels as a function of time to obtain a mean value. The processed time history was then stored in a database. The linear least squares method (Björck, 1996) was used to fit the rise phase of the mean signal over a 7-second period about its midpoint. The midpoint was assumed to be the maximum of the first derivative of the curve as a function of time. The slope of the fit was determined and the intervention/control (I/C) ratio was calculated (Paltiel et al. 2011), providing an experimental approximation to (4). The standard deviation of the residuals was used to quantify the error in the fit. In this process the uncertainty in the curve was determined by calculating the maximum and minimum slopes that fit within one standard deviation.
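A minimal sketch of the fitting step just described (added; variable names are mine, while the 7 s window and the slope ratio follow the text):
import numpy as np

def rise_slope(t, s, window=7.0):
    # midpoint of the rise phase: where ds/dt is maximal
    mid = t[np.argmax(np.gradient(s, t))]
    sel = (t >= mid - window / 2) & (t <= mid + window / 2)
    slope, _intercept = np.polyfit(t[sel], s[sel], 1)  # linear least squares
    return slope

# t, s_voi, s_ctrl: time base and background-subtracted mean signals
# Q = rise_slope(t, s_voi) / rise_slope(t, s_ctrl)    # the I/C ratio of (4)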
http://math.stackexchange.com/users/2764/binai?tab=activity&sort=all&page=3 | Binai
May 2 · suggested approved edit on homomorphism of Laurent polynomial ring
Apr 19 · comment on Surjective homomorphism on Laurent polynomial ring, part II: "Hi, could you give a look at my other question in link. Thanks,"
Apr 19 · asked homomorphism of Laurent polynomial ring
Jan 22 · comment on PBW Theorem applied to graded Lie algebras: "@user8268 The gradation is in $\mathbb Z_+^n$, then something more is necessary. There is no meaning for $b_1<\cdots< b_m$ in this case. Do you know where there exists a proof in the case of $\mathbb Z_+$-gradation? I think that it is possible to extend it... Otherwise, you could write one here to help!"
Jan 22 · comment on PBW Theorem applied to graded Lie algebras: "@user8268: I think that he wants a decomposition of each piece in terms of tensor products of symmetric powers of ${\frak a}[r_i]$ for suitable choice of $a[r_i]$. I don't know how to do either."
Jan 5 · awarded Promoter
Jan 1 · asked Integral forms of loop algebras.
Dec 25 · accepted Sum involving units of a ring.
Dec 24 · accepted Parabolic subalgebra
Dec 16 · accepted Surjective homomorphism on Laurent polynomial ring, part II
Dec 16 · comment on Surjective homomorphism in Laurent polynomial ring.: "Thank you for helping me to do in the better way! I was not realizing the importance of this hypothesis which was inserted in the new question link. It was very helpful to understand what happens if I take out this hypothesis on my original problem. See the other question if you can!"
Dec 16 · asked Surjective homomorphism on Laurent polynomial ring, part II
Dec 16 · awarded Cleanup
Dec 16 · accepted Surjective homomorphism in Laurent polynomial ring.
Dec 16 · revised Surjective homomorphism in Laurent polynomial ring. (rolled back to a previous revision)
Dec 16 · revised Surjective homomorphism in Laurent polynomial ring. (deleted 177 characters in body)
Dec 16 · revised Surjective homomorphism in Laurent polynomial ring. (the general answer and a complementary question)
Dec 16 · comment on Surjective homomorphism in Laurent polynomial ring.: "Good point! You gave me the full answer of a part of my problem. Actually, the problem which I am working has a stronger hypothesis that $a_i^k \ne a_j^k$ for $i\ne j$ and $k=1,2,3$. So, the counterexample that you gave is of the form $(t-1)(t+1)$ and therefore it is not satisfying my original hypothesis. I am very greatful about your answer and I am editing the original question with this variation."
Dec 15 · comment on Surjective homomorphism in Laurent polynomial ring.: "@jspecter: You are right!"
Dec 15 · comment on Surjective homomorphism in Laurent polynomial ring.: "@Bill: I just put $a_i\ne 0$, that is a different question. Sorry about my failure!"
http://clay6.com/qa/9904/if-overrightarrow-hat-i-hat-j-2-hat-k-and-overrightarrow-2-hat-i-hat-j-2-ha | # If $\overrightarrow{a}=\hat i+\hat j+2\hat k\;and\;\overrightarrow{b}=2\hat i+\hat j-2\hat k,$then find the unit vector in the direction of $\;2\overrightarrow{ a}- \overrightarrow{b}$
Toolbox:
• Unit vector in the direction of $\overrightarrow {a}$ is $\large \frac{\overrightarrow {a}}{|\overrightarrow {a}|}$
Let $\overrightarrow{a}=\hat i+\hat j+2\hat k\;and\;\overrightarrow{b}=2\hat i+\hat j-2\hat k,$
Therefore $2\overrightarrow{a}-\overrightarrow{b}=2(\hat i+\hat j+2 \hat k)-(2 \hat i+\hat j-2 \hat k)$
$\qquad\qquad \qquad= 2\hat i+2\hat j+4 \hat k-2 \hat i-\hat j+2 \hat k$
Therefore $2\overrightarrow{a}-\overrightarrow{b}=\hat j+6 \hat k$
The magnitude of this vector is
$|2\overrightarrow{a}-\overrightarrow{b}|=\sqrt {(1)^2+6^2}$
$=\sqrt {37}$
Hence the unit vector in the direction of $2 \overrightarrow {a}-\overrightarrow {b}$ is $\large \frac{2\overrightarrow {a}-\overrightarrow {b}}{|2\overrightarrow {a}-\overrightarrow {b}|}$
$=\Large\frac{\hat j+6 \hat k}{\sqrt {37}}$
https://www.particlebites.com/?p=7021 | # Does antihydrogen really matter?
Article title: Investigation of the fine structure of antihydrogen
Authors: The ALPHA Collaboration
Reference: https://doi.org/10.1038/s41586-020-2006-5 (Open Access)
Physics often doesn’t delay our introduction to one of the most important concepts in history – symmetries (as I am sure many fellow physicists will agree). From the idea that “for every action there is an equal and opposite reaction” to the vacuum solutions of electric and magnetic fields from Maxwell’s equations, we often take such astounding universal principles for granted. For example, how many years after you first calculated the speed of a billiard ball using conservation of momentum did you realise that what you were doing was only valid because of the fundamental symmetrical structure of the laws of nature? And hence goes our life through physics education – we first begin from what we ‘see’ to understanding what the real mechanisms are that operate below the hood.
These days our understanding of symmetries and how they relate to the phenomena we observe have developed so comprehensively throughout the 20th century that physicists are now often concerned with the opposite approach – applying the fundamental mechanisms to determine where the gaps are between what they predict and what we observe.
So far one of these important symmetries has stood the test of time, with no observed violation reported so far. This is the simultaneous transformation of charge conjugation (C), parity (P) and time reversal (T), or CPT for short. A 'CPT-transformed' universe would be like a mirror-image of our own, with all matter as antimatter and opposite momenta. The amazing thing is that under all these transformations, the laws of physics behave the exact same way. With such an exceptional result, we would want to be absolutely sure that all our experiments say the same thing, and that brings us to our current topic of discussion – antihydrogen.
#### Matter, but anti.
The trick with antimatter is to keep it as far away from normal matter as possible. Antimatter-matter pairs readily interact, releasing vast amounts of energy proportional to the mass of the particles involved. Hence it goes without saying that we can't just keep it sealed up in Tupperware containers and store it next to aunty's lasagne. But what if we start simple: gather together an antiproton and a single positron and voila, we have antihydrogen, the antimatter sibling of the most abundant element in nature. This is precisely what the international ALPHA collaboration at CERN has been concerned with, combining 'slowed-down' antiprotons with positrons in a device known as a Penning trap. Just like hydrogen, the orbit of a positron around an antiproton behaves like a tiny magnet, a property known as the object's magnetic moment. The difficulty, however, lies in the complexity of the external magnetic field required to 'trap' the neutral antihydrogen in space. Not surprisingly, then, only atoms of very low kinetic energy (i.e. cold ones), which cannot overcome the weak pull of the external field, remain trapped.
There are plenty more details of how the ALPHA collaboration acquires antihydrogen for study; I'll leave this to a reference at the end. What I'll focus on is what we can do with it and what it means for fundamental physics. In particular, one of the most intriguing predictions of the invariance of the laws of physics under charge, parity and time transformations is that antihydrogen should share many of the same properties as hydrogen. And not just the mass and magnetic moment, but also the fine structure (atomic transition frequencies). In fact, the most successful theory of the 20th century, quantum electrodynamics (QED), properly accommodating anti-electronic interactions, also predicts a foundational test for both matter and antimatter hydrogen – the splitting of the $2S_{1/2}$ and $2P_{1/2}$ energy levels (I'll leave a reference to a refresher on this notation). This is of course known as the Nobel-Prize winning Lamb Shift in hydrogen, a feature of the interaction between the quantum fluctuations in the electromagnetic field and the orbiting electron.
#### I’m feelin’ hyperfine
Of course it is only very recently that atomic versions of antimatter have been able to be created and trapped, allowing researchers to uniquely study the foundations of QED (and hence modern physics itself) from the perspective of this mirror-reflected anti-world. Very recently, the ALPHA collaboration have been able to report the fine structure of antihydrogen up to the $n=2$ state using laser-induced optical excitations from the ground state and a strong external magnetic field. Undergraduates by now will have seen, at least qualitatively, that increasing the strength of an external magnetic field on an atomic system also increases the gaps between the energy levels, and hence the frequencies of their transitions. Maybe a little less known is the splitting due to the interaction between the electron’s spin angular momentum and that of the nucleus. This additional structure is known as the hyperfine structure, and is readily calculable in hydrogen using the spin-1/2 nature of the electron and proton.
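To get a feel for the scales involved, here is a minimal sketch (my own, not part of the ALPHA analysis; the 1 T field strength is an arbitrary example value) of the Zeeman splitting of an electron spin in Matlab:

muB = 9.274e-24;     % Bohr magneton, J/T
hP  = 6.626e-34;     % Planck constant, J*s
g   = 2.0023;        % free-electron g-factor
B   = 1.0;           % example field strength, T (arbitrary choice)
dE  = g*muB*B;       % energy splitting between m_s = +1/2 and -1/2 states
fprintf('Zeeman splitting: %.1f GHz at B = %.1f T\n', dE/hP/1e9, B);

This gives roughly 28 GHz per tesla, which is why strong trap fields shift the transition frequencies so visibly.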
From the predictions of QED, one would expect antihydrogen to show precisely this same structure. Amazingly (or perhaps exactly as one would expect?) the average measurement of the antihydrogen transition frequencies agrees with those in hydrogen to 16 ppb (parts per billion) – an observation that keeps CPT invariance solidly intact but also opens up a new world of precision measurement of modern foundational physics. Similarly, taking the Zeeman and hyperfine interactions into account, the $2P_{1/2} - 2P_{3/2}$ splitting is found to be consistent with the CPT invariance of QED at the 2 percent level, and the equality of the Lamb shift ($2S_{1/2} - 2P_{1/2}$) at the 11 percent level. With advancements in antiproton production and laser-driven excitation of energy transitions, such tests provide unprecedented insight into the structure of antihydrogen. The presence of an antiproton and more accurate spectroscopy may even help in answering an unsolved question in physics: the size of the proton!
#### References
1. A Youtube link to how the ALPHA experiment acquires antihydrogen and measures excitations of anti-atoms: http://alpha.web.cern.ch/howalphaworks
2. A picture of my aunty’s lasagne: https://imgur.com/a/2ffR4C3
3. A reminder of what that fancy notation for labeling spin states means: https://quantummechanics.ucsd.edu/ph130a/130_notes/node315.html
4. Details of the 1) Zeeman effect in atomic structure and 2) Lamb shift, discovery and calculation: 1) https://en.wikipedia.org/wiki/Zeeman_effect 2) https://en.wikipedia.org/wiki/Lamb_shift
5. Hyperfine structure (great to be familiar with, and even more interesting to calculate in senior physics years): https://en.wikipedia.org/wiki/Hyperfine_structure
6. Interested about why the size of the proton seems like such a challenge to figure out? See how the structure of hydrogen can be used to calculate it: https://en.wikipedia.org/wiki/Proton_radius_puzzle
https://en.wikiversity.org/wiki/Monte_Carlo_Integration | # Monte Carlo Integration
## Content summary
A brief introduction to Monte Carlo integration and a few optimization techniques.
## Goals
This learning project offers learning activities to Monte Carlo integration. A student should be able to effectively apply Monte Carlo methods to integrate basic functions over set boundaries and apply some level of optimizations to a given problem.
Concepts to learn are listed on the course's /concepts subpage.
## Learning materials
### Texts
[1] Numerical Mathematics and Computing Chapter 12
### Lessons
• Lesson 1: Introduction to Monte Carlo Integration
Monte Carlo methods use random samplings to approximate probability distributions. This
technique has applications from weather prediction to quantum mechanics.
One use for Monte Carlo methods is in the approximation of integrals. This is done by
choosing some number of random points over the desired interval and summing the function
evaluations at these points. The length of the desired interval is then multiplied by the
average function evaluation from the chosen points.
(1) ${\displaystyle \int _{a}^{b}f(x)\,dx\approx {\frac {(b-a)}{N}}\sum _{n=1}^{N}f(x_{n})}$
This technique can further be implemented in multiple dimensions where the process becomes
more useful. A rigorous evaluation of this technique finds the error approximately ${\displaystyle 1/{\sqrt {N}}}$ [1], which means
${\displaystyle O(1/{\sqrt {N}})}$ convergence. This is not very useful in the one dimension case above, as better techniques exist, but
since the error is not bounded by the number of dimensions evaluated, when many dimensional
integrals are evaluated, Monte Carlo methods can become increasingly effective.
The following Matlab script will approximate a triple integral for a given function and
boundary.
%F is a vectorized function handle to be evaluated
%bound is a vector [x1 x2 y1 y2 z1 z2] giving the x,y,z bounds of the integrals
%N is the number of samples to use in the approximation
%e.g. MonteCarlo(@(x,y,z) x.*y.*z, [0 1 0 1 0 1], 1000)
function est = MonteCarlo(F, bound, N)
B = bound;
R = rand(3, N);
%Map the uniform samples onto the requested intervals
R(1, :) = (B(2)-B(1))*R(1, :) + B(1);
R(2, :) = (B(4)-B(3))*R(2, :) + B(3);
R(3, :) = (B(6)-B(5))*R(3, :) + B(5);
%Volume of the integration region
Volume = (B(2)-B(1))*(B(4)-B(3))*(B(6)-B(5));
%Average function value over the samples, scaled by the volume
s = feval(F, R(1,:), R(2,:), R(3,:));
avgF = sum(s)/N;
est = avgF*Volume;
fprintf('Approximation: %f\n', est);
The next step in Monte Carlo integration is to optimize the evaluation to more accurately
and quickly determine the integral. There are numerous techniques for improving on (1). The
following require some pre-existing knowledge about the function being evaluated:
• Lesson 2: Control Variates
This technique breaks the function being evaluated into pieces in which one or more pieces
have known integral values or are more easily evaluated than the original function. In this
way, the random samplings will be more prevalent with the difficult part of the function
and not be wasted on the already known piece.
e.g.
let
${\displaystyle f(x)=e^{-x^{2}}+\sin(x)}$
over the interval from 0 to ${\displaystyle 2\pi }$; then
${\displaystyle \int _{0}^{2\pi }f(x)\,dx=\int _{0}^{2\pi }e^{-x^{2}}\,dx+\int _{0}^{2\pi }\sin(x)\,dx=\int _{0}^{2\pi }e^{-x^{2}}\,dx+0}$
since ${\displaystyle \sin(x)}$ is being integrated over exactly one full period, its integral is 0 on this interval. By decreasing the variance of the
function being integrated, the approximated answer will be more accurate [2].
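As a rough numerical illustration (a sketch, not part of the original lesson; the function and sample count are arbitrary choices), both estimators below are unbiased, but the control-variate one avoids the variance contributed by the sine term:

% Control variate for f(x) = exp(-x.^2) + sin(x) on [0, 2*pi]
N = 1e5;
x = 2*pi*rand(1, N);
plain = 2*pi*mean(exp(-x.^2) + sin(x));   % naive Monte Carlo estimator
cv    = 2*pi*mean(exp(-x.^2)) + 0;        % sin(x) integrates to 0 exactly
fprintf('plain: %f   control variate: %f\n', plain, cv);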
• Lesson 3: Stratified Sampling
This technique relies on breaking the desired interval into multiple sections and evaluating
the Monte Carlo integration on each section individually. In this way, the more important
sections, i.e. the intervals where f(x) gives its greatest contribution to the integral, are
able to receive more random samplings in approximating their integral values. This will
allow the more important sections to contribute more accurately to the final integral.
e.g.
let ${\displaystyle f(x)=x^{3}}$ over the interval [0,1]. One can easily show ${\displaystyle \int _{0}^{1}x^{3}\,dx=.25}$.
It can also be seen that ${\displaystyle \int _{.8}^{1}x^{3}\,dx=.1476}$.
This shows that the interval of x from .8 to 1 makes up almost
60% of the integral value. Clearly this section is more important than
when x is between 0 and .8. It logically follows that a more accurate
approximation can be made if more of the samplings are chosen between .8
and 1 than the rest of the interval.
The Monte Carlo method then becomes the following, given that the interval is broken into m sections:
(2) ${\displaystyle \int _{a}^{b}f(x)\,dx\approx \sum _{i=1}^{m}{\frac {Length_{i}}{N_{i}}}\sum _{j=1}^{N_{i}}f(x_{j})}$
where Length_i is the length of the ith section, N_i is the number of random samplings
chosen from the ith section.
If the interval lengths and number of samples for each length are chosen correctly,
this method can dramatically decrease the error in approximating the integral.
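Here is a small sketch of the idea in Matlab (not from the original lesson; the stratum boundary and the 40/60 sample split are arbitrary illustrative choices) for ${\displaystyle f(x)=x^{3}}$ on [0,1]:

% Stratified Monte Carlo for f(x) = x.^3 on [0,1]; true value is 0.25
N = 1e4;
edges   = [0 .8 1];        % two strata: [0, .8] and [.8, 1]
weights = [0.4 0.6];       % fraction of the N samples given to each stratum
est = 0;
for i = 1:numel(edges)-1
    a = edges(i); b = edges(i+1);
    Ni = round(weights(i)*N);
    xi = a + (b-a)*rand(1, Ni);
    est = est + (b-a)*mean(xi.^3);   % Length_i times average f over stratum i
end
fprintf('stratified estimate: %f\n', est);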
• Lesson 4: Importance Sampling
The previous technique showed that using a non-uniform distribution of random points can
lead to better samplings and more accurate approximations of an integral. Importance
sampling is an extension of this technique, but instead of using grids to split the
interval, a distribution function is used for choosing the random points. Now when picking
the sampling points to evaluate the integral, the points must conform to some distribution
function which ideally approximates the desired function.
i.e.
given a distribution function p(x) which simulates f(x).
Pick N random numbers s.t. the density of the points conforms to p(x)
[Figure: sampling points drawn with a p(x) = x^2 distribution]
e.g.
if p(x) is x^2 on the interval [0,1] then more of the sampling points should
appear closer to x=1 than x=0.
With the non-uniform distribution of random points, the equation approximating the integral
must be revisited. Given a distribution of random points with density p(x), normalized here so
that the uniform case corresponds to ${\displaystyle p(x)=1}$ (that is, ${\displaystyle \int _{a}^{b}p(x)\,dx=b-a}$), the approximation of the integral becomes:
(3) ${\displaystyle \int _{a}^{b}f(x)\,dx\approx {\frac {b-a}{N}}\sum _{n=1}^{N}{\frac {f(x_{n})}{p(x_{n})}}}$ [2]
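A minimal sketch in Matlab (not part of the original lesson; the target ${\displaystyle f(x)=x^{3}}$ and density ${\displaystyle p(x)=3x^{2}}$ on [0,1] are arbitrary choices, and here ${\displaystyle b-a=1}$):

% Importance sampling for f(x) = x.^3 on [0,1]; true value is 0.25
N = 1e4;
u = rand(1, N);
x = u.^(1/3);                      % inverse transform: CDF of p(x)=3x.^2 is x.^3
est = mean((x.^3) ./ (3*x.^2));    % (b-a) = 1, so eq. (3) reduces to a plain mean
fprintf('importance-sampling estimate: %f\n', est);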
### Assignments
#### Activities
• Activity 1.
Approximate Pi using Monte Carlo integration and Matlab. [hint: ${\displaystyle x^{2}+y^{2}=r^{2}}$ forms a circle]
• Activity 2.
Extend the given matlab code from the introduction to perform integration on an arbitrary number of dimensions, rather than just three. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 19, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9582370519638062, "perplexity": 1423.7851928752698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104704.64/warc/CC-MAIN-20170818160227-20170818180227-00267.warc.gz"} |
https://www.physicsforums.com/threads/potential-energy-of-a-system-gravity.597836/ | # Potential energy of a system (gravity)
1. Apr 18, 2012
### jjr
1. The problem statement, all variables and given/known data
Show that the potential energy of a system which consists of four equal particles, each with mass M, that are placed in different corners of a square with sides of length d, is given by
$e_p = - \frac{G M^2}{d}\left(4 + \sqrt{2}\right)$
2. Relevant equations
The gravitational force F(r) = - $\frac{G M m}{r^2}$ * ur
Potential gravitational energy Ep(r) = - $\frac{G M m}{r}$
3. The attempt at a solution
I'm having a hard time achieving an intuitive comprehension of how one might solve this problem. As far as I can understand, they're asking how much work would be done if all the particles moved into the center? I'm not sure if I should figure out the work it would take to bring each individual particle into the center one at a time, all at once, or if I need to approach this in some other way.. Any hints would be greatly appreciated
J
2. Apr 18, 2012
### ehild
The gravitational potential is zero at infinity. You need to calculate the work needed to take the system apart, to move the particles one by one to infinity. The negative of that work is equal to the potential energy of the system.
ehild
3. Apr 18, 2012
### jjr
Of course! Thanks:)
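(For completeness, a quick check of the target expression, spelling out the pair-counting implicit in the hint: the square has four sides of length $d$ and two diagonals of length $d\sqrt{2}$, so summing $-\frac{G M^2}{r}$ over all six pairs gives

$E_p = -4\frac{G M^2}{d} - 2\frac{G M^2}{d\sqrt{2}} = -\frac{G M^2}{d}\left(4 + \sqrt{2}\right)$,

matching the stated result.)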
https://astronomy.stackexchange.com/questions/36162/little-h-usage-in-cosmological-simulations | # 'Little h' usage in cosmological simulations
I am running a cosmological simulation and am having some trouble putting things into code units. The physical distance units in my simulation are in terms of $$\text{Mpc/h}$$, where $$h$$ is the dimensionless Hubble parameter. This makes enough sense because, as noted elsewhere, simulations are often scale-free so it makes sense to factor out the $$h$$ dependence and make it explicit. This unit convention is causing me some confusion however. In one calculation I have to do during the simulation, I essentially (ignoring context which I can provide later) have to multiply the speed of light $$c$$ by an inverse distance $$1/x_0$$ which is given in units of $$\text{Mpc/h}$$.
In order to properly have the units cancel, I first put $$c$$ in units of $$\text{Mpc/s}$$ to get $$9.716 \times10^{-15} \text{Mpc/s}$$ However, should I know factor out the $$h$$ dependence? These seems strange to me because in my mind, the value of the speed of light should not depend on the underlying cosmology I have simulated. On the other hand, I feel that I should not cancel units of $$\text{Mpc}$$ with units of $$\text{Mpc/h}$$. To make things concrete, let's assume I have a value of $$h=.7$$. Should I then take the quantity above and multiply it by $$.7$$ to yield $$6.802 \times 10^{-15} \text{(Mpc/h)/s}$$
and use that result in my calculations? I think this situation confuses me because it doesn't involve measurements, where it is clear how $$h$$ can enter, and it involves a constant of nature, which should be independent of the assumed cosmology.
• If the result of your computation is in units of $hs^{-1}$ then that can be ok. Depending on the value of $h$ the process is longer or shorter in physical time. – user26287 May 14 '20 at 12:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9245591163635254, "perplexity": 179.6261542481432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385534.85/warc/CC-MAIN-20210308235748-20210309025748-00201.warc.gz"} |
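(A minimal sketch of the bookkeeping, assuming illustrative values $h=0.7$ and $x_0 = 10\ \text{Mpc/h}$, neither taken from any particular simulation; the point is that $c$ stays $h$-free and the $h$ enters only when converting the distance.)

h    = 0.7;
c    = 9.716e-15;          % speed of light in Mpc/s (h-free)
x0h  = 10;                 % distance in code units, Mpc/h (arbitrary example)
x0   = x0h / h;            % the same distance in physical Mpc
rate = c / x0;             % physical result, in 1/s
rate_code = c * h / x0h;   % identical number: the h cancels the code unit
fprintf('%e 1/s vs %e 1/s\n', rate, rate_code);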
https://quantumcomputing.stackexchange.com/questions/21561/swap-test-and-density-matrix-distinguishability | # SWAP test and density matrix distinguishability
Let us either be given the density matrix $$$$|\psi\rangle\langle \psi| \otimes |\psi\rangle\langle \psi| ,$$$$ for an $$n$$ qubit pure state $$|\psi \rangle$$ or the maximally mixed density matrix $$$$\frac{\mathbb{I}}{{2^{2n}}}.$$$$
I am trying to analyze the following algorithm to distinguish between these two cases.
We plug the $$2n$$ qubit state we are given into the circuit of a SWAP test. Then, following the recipe given in the link provided, if the first qubit is $$0$$, I say that we were given two copies of $$|\psi \rangle$$, and if it is $$1$$, we say we were given the maximally mixed state over $$2n$$ qubits.
What is the success probability of this algorithm? Is it the optimal distinguisher for these two states? The optimal measurement ought to be an orthogonal one (as the optimal Helstorm measurement is an orthogonal measurement). How do I see that the SWAP test implements an orthogonal measurement?
First of all, let us compute the probability of success of this algorithm. If you are given the state $$|\psi\rangle\langle\psi|\otimes|\psi\rangle\langle\psi|$$, the SWAP test will return the state $$|0\rangle$$ with probability $$1$$, which is the probability of success of the algorithm in this case.
Let us now consider the second case. The initial state is: $$\rho_0=\frac{1}{2^{2n}}\sum_{i,j}|0,i,j\rangle\langle0,i,j|$$ The first gate to be applied is: $$\mathbf{H}\otimes \mathbf{I}\otimes\mathbf{I}=\frac{1}{\sqrt{2}}\sum_{a,b,x,y}(-1)^{a\cdot b}|a,x,y\rangle\langle b,x,y|.$$ The resulting state is thus given by: $$\rho_1=\frac{1}{2}\frac{1}{2^{2n}}\sum_{a,i,j,b}|a,i,j\rangle\langle b,i,j|$$ We now apply the $$\mathbf{CSWAP}$$ gate, whose expression is: $$\mathbf{CSWAP}=\sum_{x,y}|0,x,y\rangle\langle0,x,y|+\sum_{x,y}|1,x,y\rangle\langle1,y,x|$$ The resulting state is: $$\rho_2=\frac{1}{2}\frac{1}{2^{2n}}\sum_{i,j}\left(|0,i,j\rangle\langle0,i,j|+|0,i,j\rangle\langle1,j,i|+|1,j,i\rangle\langle0,i,j|+|1,j,i\rangle\langle1,j,i|\right)$$ Finally, we apply the Hadamard gate on the first qubit once again, which results in the state: $$\rho_3=\frac{1}{4}\frac{1}{2^{2n}}\sum_{i,j}\left(\sum_{a,b}|a,i,j\rangle\langle b,i,j|+\sum_{a,b}(-1)^b|a,i,j\rangle\langle b,j,i|+\sum_{a,b}(-1)^a|a,j,i\rangle\langle b,i,j|+\sum_{a,b}(-1)^{a\oplus b}|a,j,i\rangle\langle b,j,i|\right)$$ We are interested in the diagonal coefficients of $$\rho_3$$ that can be written as $$|0,i,j\rangle\langle0,i,j|$$. Summing them gives us the probability of measuring $$|0\rangle$$. This probability is thus given by: $$\mathbb{P}[|0\rangle]=\frac{1}{4}\frac{1}{2^{2n}}\left(\sum_{i,j}1+\sum_{i}1+\sum_{i}1+\sum_{i,j}1\right)=\frac12+\frac{1}{2^{n+1}}.$$ All in all, this algorithm distinguishes these two states with probability $$\frac34-\frac{1}{2^{n+2}}$$.
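As a quick numerical sanity check (a sketch I am adding, not from the original answer; $n=1$ is an arbitrary small case), one can build the three-qubit circuit explicitly and confirm $\mathbb{P}[|0\rangle]=\frac12+\frac{1}{2^{n+1}}$:

% SWAP test on a maximally mixed 2-qubit register (n = 1), ancilla first
H  = [1 1; 1 -1]/sqrt(2);
S  = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1];    % SWAP on the two data qubits
CS = blkdiag(eye(4), S);                      % controlled-SWAP
U  = kron(H, eye(4)) * CS * kron(H, eye(4));  % rightmost factor acts first
rho_in  = kron([1 0; 0 0], eye(4)/4);         % |0><0| (x) I/2^(2n)
rho_out = U * rho_in * U';
P0 = trace(kron([1 0; 0 0], eye(4)) * rho_out);
fprintf('P(0) = %f, expected %f\n', P0, 1/2 + 1/2^2);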
Now, let $$T$$ denote the trace distance between these two states. We know that the optimal probability of distinguishing these states is given by $$\frac12(1+T)$$. Let $$U$$ be a quantum gate such that $$U|0\rangle=|\psi\rangle$$. $$T$$ is then also equal to the trace distance between $$\left(U^\dagger\otimes U^\dagger\right)\left(|\psi\rangle\langle\psi|\otimes|\psi\rangle\langle\psi|\right)\left(U\otimes U\right)=|0\rangle\langle0|\otimes|0\rangle\langle0|$$ and $$\frac{1}{2^{2n}}\left(U^\dagger\otimes U^\dagger\right)\mathbf{I}\left(U\otimes U\right)=\frac{1}{2^{2n}}\mathbf{I}$$. Writing $$\lambda_i$$ for the eigenvalues of the difference of these two matrices, $$T$$ is then easily seen to be: $$T=\frac12\sum_i\left|\lambda_i\right|=\frac12\left(1-\frac{1}{2^{2n}}+\sum_{i=1}^{2^{2n}-1}\frac{1}{2^{2n}}\right)=1-\frac{1}{2^{2n}}$$ which means that the maximal probability of distinguishing these states is $$1-\frac{1}{2^{2n+1}}$$.
Thus, the SWAP test has a sub-optimal probability of success. Intuitively, this is due to the fact that the probability of measuring $$|0\rangle$$ is always larger than or equal to $$\frac12$$, which upper-bounds the probability of success at $$\frac34$$.
Note however that this reasoning works assuming you know what $$|\psi\rangle$$ is. Otherwise, the initial density matrix in the first case is also $$\frac{1}{2^{2n}}\mathbf{I}$$ and the maximal probability of distinguishing these situations is $$\frac12$$.
https://www.physicsforums.com/threads/define-physical.58723/ | # Define Physical
1. Jan 5, 2005
### Les Sleeth
Debates about physicalism are sometimes hampered because participants can't seem to agree what "physical" is. I'd like to invite all physicalists and those who believe they are clear about what physicalness is to create an exact definition.
I'll offer my opinion first. I think physicalness is mass, immediate effects of mass, and all that which has come about from the presence of mass. Since all mass we know of is believed to have originated with the Big Bang, then I'd also restrict the definition of physical to how mass and mass effects have developed from that event.
In a past thread I posted the following in support of my definition:
Princeton's Word Reference site give the definition of physical science here:
- the science of matter and energy and their interactions
On the same page you can find a definition for physicalness:
- the quality of being physical; consisting of matter
The Word Reference site gives several relevant definitions of physical here:
1* physical - involving the body as distinguished from the mind or spirit . . .
2* physical - relating to the sciences dealing with matter and energy; especially physics; "physical sciences"; "physical laws"
3* physical, tangible, touchable - having substance or material existence; perceptible to the senses; "a physical manifestation"; "surrounded by tangible objects"
4* physical - according with material things or natural laws (other than those peculiar to living matter); "a reflex response to physical stimuli"
6* physical - concerned with material things; "physical properties"; "the physical characteristics of the earth"; "the physical size of a computer"
Of Physicalism the Wikipedia says:
Physicalism is the metaphysical position that everything is physical; that is, that there are no kinds of things other than physical things. Likewise, physicalism about the mental is a position in philosophy of mind which holds that the mind is a physical thing in some sense. This position is also called "materialism", but the term "physicalism" is preferable because it does not have any misleading connotations, and because it carries an emphasis on the physical, meaning whatever is described ultimately by physics -- that is, matter and energy.
Last edited: Jan 5, 2005
2. Jan 5, 2005
### StatusX
I would define physical laws as those laws that can be framed in the language of mathematics. Or less strictly, the language of logic. For example, if we find a theory of consciousness that quantitatively relates experiences to information processors, as Chalmers suggests, I would call this a physical theory of consciousness. This raises the question of whether the universe is mathematical or merely approximated by math. If it's the former, then physicalism completely describes the universe, with the possible exception of its creation. If it's the latter, physicalism, at least as we know it today, will fall short.
Last edited: Jan 5, 2005
3. Jan 5, 2005
### Les Sleeth
You might be correct, but I haven't asked for what physical "laws" are. That is entirely different! Once you reduce physicalness to the abstraction of laws and logic and math, you've put the ball squarely in your own (physicalist) court. Physical might follow laws, and be predicted by math, but that isn't what it is.
Please stick to a definition of physicalness itself. What is it?
4. Jan 5, 2005
### StatusX
Ok, then I would define physicalism as the position that every observable process is completely determined by physical laws, as described above. To put this another way, if two systems are identical in every physical way, they cannot be different in any other way. By "physical way" I mean whatever parameters go into the final mathematical theory (eg, matter, space, qualia). As for unobservable processes, I would say that physicalists deny such a thing could exist.
5. Jan 5, 2005
### Les Sleeth
I'm sorry if I've confused the issue. I didn't ask for a definition of physicalism. I am asking what "physical" means. What are the properties of physicalness? How can you tell if you are looking at something physical (without any reference to laws or calculation)? What qualities, if observed, would make an objective thinker say, "that is physical"? What qualities, if observed, would make an objective thinker say, "that's not physical"? You can't cite obeying "laws" because those are determined after the fact of consistently observing the same qualities. I am asking for what can be observed in the raw, one time, that makes something physical.
6. Jan 5, 2005
### StatusX
I'm sorry. The reason for the confusion is probably that (and to answer your question as best I can) as a physicalist, I believe everything is physical, and the question doesn't really make sense to me. And I don't see how you can say anything about the world without some basic rules. Don't forget, the mass and energy in your own definition are not intrinsic qualities, but only arise from the rules we have discovered to describe functional relationships.
7. Jan 5, 2005
### Les Sleeth
You are admitting to a lack of objectivity. How can anyone trust such an opinion? What if someone comes along and picks out of reality only that which gives support to their spiritual beliefs, and ignores anything which doesn't?
For a minute, can't you just look at reality without your filters and concepts in place and describe what you see that is physical? We can argue later what is birthed by physicalness and what isn't.
Let's just define the OBSERVABLE properties (for now) which most define physicalness for what it is.
8. Jan 5, 2005
### StatusX
Anything that we can observe must have at least initiated a physical process since our senses are physical. Assume we observe something "non-physical" and we can trace the physical processes back somehow from where they interacted with our senses to a point where the laws of physics are violated (as they must be, since otherwise the phenomenon would be physical). Now I can't think of anything like this, but if it exists, I would say that all it means is our laws are incomplete, and as long as the additional laws followed some kind of basic logic, preferably framable with math, a physicalist view can be sustained. If they can't, we'll have to rethink a lot of things, but I'm sure many scientists would first die trying.
One other possibility that comes to mind is that a phenomenon can strictly follow the laws of physics, specifically quantum mechanics, but the chances of it happening the way it did are so astronomically small that pure luck can be ruled out. (eg, a ten foot tall gold crucifix spontaneously forms from atoms in the air) Again, I'm sure many, many alternative theories will be proposed by scientists first, and maybe they'll find one that works.
So, for my fourth try, I'll say that phenomena are non-physical if they cannot be explained using logic or math or if they can be, but something so unlikely has happened that some unseen force must be responsible. If this isn't what you wanted, I think I'm gonna have to give up.
Last edited: Jan 5, 2005
9. Jan 6, 2005
### loseyourname
Staff Emeritus
Given his framework, the definition of physical would be "anything that obeys mathematical laws." This would be about the same as the definition I developed in another thread for you. The word "physical" describes the property of having predictable extrinsic relationships. This is borrowed from theory-physicalism, which excludes all intrinsic properties, making it meaningless to ask "What is an intrinsic property of physical things?" Mass is not an intrinsic property, so your own definition doesn't tell you what physicalness is by your own standards. Furthermore, massless particles are generally considered to be "physical."
10. Jan 6, 2005
Staff Emeritus
I think that the definition of what is physical evolves along with physics. Once upon a time when Descartes wrote, physical meant pushes and pulls by macroscopic matter; then there was gravity, and chemical bonds, conserved energy, and luminiferous ether, and so on. At each point people who espoused physical philosophies (Locke, Marx, the logical positivists) used the then-current notion of physicality.
Today physicality pretty much means consistence with the Standard Model of particle interactions or with General Relativity (locally GR looks like Special Relativity so that is included too). Those theories are accepted by physicists as "effective", matching all experiments we know how to do now, and there is enormous experimental support for their predictions at all energy scales likely to be relevant to the human body.
People who use speculative theories beyond these have to carefully state their assumptions, and their conclusions can only be accepted modulo the theory they posit.
11. Jan 6, 2005
### Les Sleeth
Would the part of your post I selected be a concise definition? I would love to have a tight definition, one which states the absolute minimum needed to qualify as physical. That would help to judge if something is physical, or uf something is a trait of physicalness (a common dispute in debates). Let me give an example.
If I say one requires balance to ride a bicycle, can I then go on to say riding a bike is anything that requires balance? That sort of logic is what I don't like about the definition Loseyourname and StatusX give. They basically define physical as anything subject to logic and/or which obeys mathematical laws. I've disputed that because I don't see why some cosmic consciousness would not have particles and not be subject to relativity (using your definition now), and yet still have ordered aspects to it which could be represented logically or mathematically.
Let me ask you one thing more (well, it's several questions about the same thing). Do you think my definition is generally correct (that "physicalness is mass, immediate effects of mass, and all that which has come about from the presence of mass")? Do you think it automatically includes your elements (i.e., quantum and relativity factors)? Do you think it is more basic than your definition? Maybe too basic? If so, do you think my definition would be improved by adding yours, something like this:
"Physicalness is mass and the effects of mass, and exhibits consistency with the Standard Model of particle interactions or Relativity."
Last edited: Jan 6, 2005
12. Jan 6, 2005
### Locrian
What a great question. Would the statement:
"Something is physical when it can be observed" be acceptable? Have I changed the game by rewriting the way the statement is said? "Physicalness" would then be something that is observable.
By the way, I don't personally consider your definition "that physicalness is mass, immediate effects of mass, and all that which has come about from the presence of mass" particularly good, because the concept of mass has become so exceedingly abstract and intermingled with other ideas. For example, light has no mass (though it has momentum) and yet I would consider it a physical thing. The fact that light can push seems to eliminate the possibility of it not being physical, and yet does not give it any mass.
13. Jan 6, 2005
### Les Sleeth
A photon has no rest mass. However, I understand what you are saying, which is why possibly the addition of inertia should be added to the definition. I've quoted the following before from the McGraw-Hill Encyclopedia of Science and Technology: “The distinguishing properties of matter are gravitation and inertia.”
In terms of being observable as the definition, I don't think that tells us anything about physicalness itself; i.e., it's properties, nature, requirements for existence. Physicalness would still exist, for example, even if no one observed it.
Last edited: Jan 6, 2005
14. Jan 6, 2005
### Locrian
Yes, but I didn't say something had to be observed to be physical, just that it had to be observ-able. One might argue that there are things that can be observed that aren't physical, such as love and anguish, but wouldn't a physicalist argue that those were, in fact, observable in a physical sense?
I still like my definition best. But then what do you expect? :tongue2:
15. Jan 6, 2005
Staff Emeritus
First of all, I would accept the quote you selected as what physicalism means to me, and I would add an annex, not to add to but to explain that quote. Systems like electromagnetism and Newtonian physics are specializations of the standard model and general relativity, valid under certain restrictive conditions which are generally true in the human body, including the brain (speeds are very tiny relative to c, and actions are very large relative to Planck's constant h). They can be used to specify physical states or phenomena if those conditions are met (and at least implicitly stated in the argument).
Secondly I would not like to see mass made fundamental to physicalism. In the standard model mass is a derived quantity (generated by the Higgs interaction and by the binding energy of gluons). Although mass in involved in some very interesting questions of broken symmetry (current work on neutrino masses comes to mind), it is not a reliable base to found a philosophic view upon. Energy (in the strict physical sense of the word) and momentum would be better for that purpose. But it would be instructive to read some of the physicists' answers to the Edge question "What do you believe that you cannot prove?" for further insights on this.
Last edited: Jan 6, 2005
16. Jan 6, 2005
### Les Sleeth
I am in a hurry now, so I'll have to wait until tomorrow to think about all your comments. But just one point.
The definition I gave using mass was meant to say that physicalness isn't just anything with mass, but it is also that which is derived from mass (i.e., which now might be massless) and that which is manifested by the effects of mass present (such as gravity). Where I'm coming from with that is, basically, the Big Bang. It seems to me that that's what the BB primarily did -- create mass -- and then everything has emerged from and been manifested by that.
What do you think of my thinking :tongue2: in this respect?
17. Jan 6, 2005
### Les Sleeth
Yes, but you aren't defining physicalness. You are describing human perception. What I am after is the properties of physicalness itself which something must minimally possess to be recognized as physical.
18. Jan 6, 2005
### StatusX
We shouldn't just freeze physicalism at what we know now (GR and the Standard Model), since, as SelfAdjoint pointed out, the definition of physicalism has changed as science has progressed. It seems you are trying to define physicalism so that the things you believe to be unphysical, such as consciousness, remain so. I think "physical" should be defined from a social point of view: having the capacity to be investigated and explained by science. It seems pretty clear what is scientific and what isn't, and as science expands, so will the realm of the physical.
Last edited: Jan 6, 2005
19. Jan 6, 2005
### Locrian
Apologies for being repetitive, but I am defining physicalness; I am saying it is dependent upon human observation. I'm unwilling to replace the word "observation" with "perception" and do not understand why you did that. Observability is the most fundamental property of physical to me. When you ask if something is physical, that may very well be the only property I consider.
I would disagree with Self Adjoint's definition of physicality on the grounds that it is circular and temporary. Defining something as physical because we currently have a predictive system that can predict things about it seems backwards to me. We generated that system (GR or SM) by observing physical things. Using that system to then define what it is for something to be physical seems to me rather backwards. On top of that, we are almost certain to make advances in physics that would require his definition to be rewritten for the new ideas.
So why am I not giving you what you want? I don't feel particularly new to this conversation, but the responses to my posts seem to suggest - respectfully - that I'm somehow missing the point. I am more than willing to admit I may be doing that, but I just don't understand why or how.
20. Jan 6, 2005
Staff Emeritus
According to what is called the standard model of cosmology (not to be confused with the standard model of particle physics), immediately after the big bang there was no mass; the forces were all unified and the particles had not condensed out of the energy. In general relativity mass is only one of the sources of energy and momentum which warp spacetime; light, which has momentum but no mass, is another. Gravitational waves are still another.
Paul Davies has a book called The Matter Myth which discusses some of these ideas. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8633105158805847, "perplexity": 1049.2646258136185}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701149377.17/warc/CC-MAIN-20160205193909-00152-ip-10-236-182-209.ec2.internal.warc.gz"} |
http://mathhelpforum.com/differential-geometry/104572-show-sets-not-connected-print.html | # Show sets not connected
• September 27th 2009, 08:31 AM
thaopanda
Show sets not connected
Show that the following sets are not connected:
(a) A := $\{(x,y) \in \mathbb{R}^2 : x^2 + y^2 \neq 1\}$;
(b) B := $\{(x,y) \in \mathbb{R}^2 : xy = 1\}$.
For this problem, do I just need to show that A $\cap$ B is the empty set?
If so, I would need to prove that for any x,y $\in$ A cannot equal any x,y $\in$ B, right?
• September 27th 2009, 09:05 PM
redsoxfan325
Quote:
Originally Posted by thaopanda
Show that the following sets are not connected:
(a) A := $\{(x,y) \in \mathbb{R}^2 : x^2 + y^2 \neq 1\}$;
(b) B := $\{(x,y) \in \mathbb{R}^2 : xy = 1\}$.
For this problem, do I just need to show that A $\cap$ B is the empty set?
If so, I would need to prove that for any x,y $\in$ A cannot equal any x,y $\in$ B, right?
You are not asked to show that $A$ and $B$ are disconnected from each other. (It seems like that's the way you interpreted the problem.) You are asked to show that $A$ is disconnected. You are also asked to show that $B$ is disconnected.
A set X is disconnected if there exist two open sets, U and V, each meeting X, such that:
$X\subset U\cup V$ and $cl(U)\cap V=U\cap cl(V)=\emptyset$
--------------
For (a), take $U=\{(x,y):x^2+y^2<1\}$ and $V=\{(x,y):x^2+y^2>1\}$. It's obvious that $A\subset U\cup V$. It is also clear that $cl(U)\cap V = U\cap cl(V)=\emptyset$ (simply because $cl(U)=V^c$ and vice-versa).
For (b), take $U=\{(x,y):x>0,y>0\}$ and $V=\{(x,y):x<0,y<0\}$. Then proceed to show that:
1. $B\subset U\cup V$
2. $cl(U)\cap V=U\cap cl(V)=\emptyset$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8803299069404602, "perplexity": 241.27408795554967}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983026851.98/warc/CC-MAIN-20160823201026-00124-ip-10-153-172-175.ec2.internal.warc.gz"} |
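(Spelling out those two steps, which the original post leaves to the reader: 1. If $xy=1$ then $x\neq 0$ and $y=1/x$ has the same sign as $x$, so every point of $B$ lies in $U$ (when $x>0$) or in $V$ (when $x<0$); hence $B\subset U\cup V$, and both pieces of $B$ are nonempty. 2. $cl(U)=\{(x,y):x\geq 0,\ y\geq 0\}$, which is disjoint from $V=\{(x,y):x<0,\ y<0\}$, and symmetrically $U\cap cl(V)=\emptyset$.)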
http://math.stackexchange.com/questions/76884/a-simple-property-of-kloosterman-sum | # A simple property of Kloosterman sum
Kloosterman sum is defined as
$$K(a,b;m)=\sum_{\substack{0\leq x \leq m-1 \\ \gcd(x,m)=1}} e^{2\pi i (ax+bx^*)/m}$$
where $a,b,m \in \mathbb{N}$ and $x^*$ is the inverse of $x$ modulo $m$. Now there is a simple property of Kloosterman sum, which is that $K(1,mn;q)=K(m,n;q)$ with $\gcd(m,q)=1$, but how to show it is true? How should the $\gcd(m,q)=1$ condition be used in the proof?
-
Write down the formula for $K(1,mn;q)$ and note the effect of the change of variable which replaces $x$ with $mx$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9499284625053406, "perplexity": 92.91291169189607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738087479.95/warc/CC-MAIN-20151001222127-00024-ip-10-137-6-227.ec2.internal.warc.gz"} |
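(Spelling out that hint, since the substitution is exactly where the condition enters: because $\gcd(m,q)=1$, $m$ is invertible modulo $q$, so the map $x \mapsto mx$ permutes the residues coprime to $q$, and $(mx)^* \equiv m^* x^* \pmod q$. Substituting $x = my$ therefore gives $$K(1,mn;q)=\sum_{\gcd(y,q)=1} e^{2\pi i (my + mn\,m^* y^*)/q} = \sum_{\gcd(y,q)=1} e^{2\pi i (my + n y^*)/q} = K(m,n;q).$$)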
https://math.stackexchange.com/questions/652439/solve-cosz-frac34-fraci4 | # Solve $\cos(z)=\frac{3}{4}+\frac{i}{4}$
I tried solving this using the definition of $\cos(z)=\frac{e^{iz}+e^{-iz}}{2}$ and equating it to $\frac{3}{4}+\frac{i}{4}$ and converting it to a complex quadratic equation through a substitution $t=e^{iz}$ and finding roots via the complex quadratic formula, but it didn't seem to work. I would prefer solutions via elementary methods.
Here is my attempt:
By definition we have $\frac{e^{iz}+e^{-iz}}{2}=\frac{3}{4}+\frac{i}{4} \implies e^{iz}+e^{-iz}=\frac{3}{2}+\frac{i}{2}$. Let $t=e^{iz}$ so then we have $t+\frac{1}{t}=\frac{3}{2}+\frac{i}{2}$ and if we multiply both sides by $t$ we have $t^2+1=(\frac{3}{2}+\frac{i}{2})t$ and hence $t^2+(\frac{3}{2}+\frac{i}{2})t+1=0$ By the quadratic formula for complex numbers we have, $a=1, b=\frac{3}{2}+\frac{i}{2}, c=1 \implies z=\frac{-(\frac{3}{2}+\frac{i}{2}) \pm \sqrt{(\frac{3}{2}+\frac{i}{2})^2-4(1)(1)}}{2(1)}$. Simplifying we have $z=\frac{-\frac{3}{2}-(\frac{1}{2})i \pm \sqrt{-2+(\frac{3}{2})i}}{2}$ We wish to express $-2+(\frac{3}{2})i$ in polar form so we have $|-2+(\frac{3}{2})i|=\frac{5}{2}$. Now equating the real and imaginary parts we have $\frac{5}{2}\cos(\theta)=-2 \implies \cos(\theta)=-\frac{4}{5}$ and $\frac{5}{2}\sin(\theta)=\frac{3}{2} \implies \sin(\theta)=\frac{3}{5}$. From this we have $\tan(\theta)=-\frac{3}{4} \implies \theta=\arctan(-\frac{3}{4}) \approx -.6435$ rad. So we have $w=-2+(\frac{3}{2})i=\frac{5}{4}(\cos(\theta)+i\sin(\theta))=\frac{5}{4}e^{i\theta}$. By Proposition 1.3.12 we have $\sqrt{w}=\sqrt{\frac{5}{4}}e^{\frac{i\theta}{2}}=\frac{\sqrt{5}}{2}e^{\frac{i\theta}{2}}$. Similarly for $-\frac{3}{2}-(\frac{1}{2})i=\frac{\sqrt{10}}{2}e^{i\varphi}$ where $\varphi=\arctan(\frac{1}{3})$. So finally we have $z=\frac{-(\frac{3}{2}+\frac{i}{2}) \pm \sqrt{(\frac{3}{2}+\frac{i}{2})^2-4(1)(1)}}{2(1)}=\frac{\sqrt{10}e^{i\varphi} \pm \sqrt{5}e^{\frac{i\theta}{2}}}{4}$ as solutions to $\cos(z)=\frac{3}{4}+\frac{i}{4}$.
• Your method is the standard technique. Jan 26, 2014 at 21:27
• How can it be more elementary?
– J.R.
Jan 26, 2014 at 21:30
• I wasn't aware that was the "standard" technique. One thing I did differently was instead of using the form $a+bi$ for complex numbers, I opted to use $re^{i\theta}$ as the representation so I could avoid taking $\sqrt{a+bi}$ in the discriminant since $\sqrt{re^{i\theta}}=\sqrt{r}e^{\frac{i\theta}{2}}$ seemed easier to work with.
– 1028
Jan 26, 2014 at 21:37
• The only other reasonable technique I know of would be to compute $\arccos$. However, IMO, the formula for $\arccos$ is pretty annoying.
– user14972
Jan 26, 2014 at 21:40
• If you show the details of the computation that "didn't seem to work", it would be much easier for someone to explain what you did wrong. Jan 26, 2014 at 21:42
I agree with your approach for the most part, but I think you've messed up on calculating $b$ in the quadratic formula: $$e^{iz} + e^{-iz} -\frac{3}{2} - \frac{i}{2} = 0$$
Using $t = e^{iz}$: $$t + \frac{1}{t} -\frac{3}{2} - \frac{i}{2} = 0$$ Multiplying by $t$: $$t^2\color{red}{-}\left(\frac{3}{2} + \frac{i}{2}\right)t + 1 = 0$$ This is where your solution starts to go wrong. Those minus signs are just lying in wait for the innocent mathematician! :P
Thus: \begin{align} x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\\ &= \frac{\left(\frac{3}{2} + \frac{i}{2}\right) \pm \sqrt{\left(\frac{3}{2} + \frac{i}{2}\right)^2 - 4}}{2}\\ &= \frac{\left(\frac{3}{2} + \frac{i}{2}\right) \pm \sqrt{\left(\frac{9}{4} - \frac{1}{4}+\frac{3}{2}i\right) - 4}}{2}\\ &= \frac{\left(\frac{3}{2} + \frac{i}{2}\right) \pm \frac{1}{2}\sqrt{-8+6i}}{2}\\ &= \frac{3+i \pm (3i+1)}{4}\\ &= \cdots \end{align}
Going from line three to line four: \begin{align} \frac{\left(\frac{3}{2} + \frac{i}{2}\right) \pm \sqrt{\left(\frac{9}{4} - \frac{1}{4}+\frac{3}{2}i\right) - 4}}{2} &= \frac{\left(\frac{3}{2} + \frac{i}{2}\right) \pm \sqrt{\left(\frac{8}{4}+\frac{3}{2}i\right) - 4}}{2}\\ &= \frac{\left(\frac{3}{2} + \frac{i}{2}\right) \pm \sqrt{\left(2+\frac{3}{2}i\right) - 4}}{2}\\ &= \frac{\left(\frac{3}{2} + \frac{i}{2}\right) \pm \sqrt{-2+\frac{3}{2}i}}{2}\\ &= \frac{\left(\frac{3}{2} + \frac{i}{2}\right) \pm \sqrt{-\frac{8}{4}+\frac{6}{4}i}}{2}\\ &= \frac{\left(\frac{3}{2} + \frac{i}{2}\right) \pm \frac{1}{2}\sqrt{-8+6i}}{2}\\ \end{align}
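Carrying the $\cdots$ one step further (this completion is mine, not the original answerer's): the two roots are $$t = \frac{3+i+(1+3i)}{4}=1+i \qquad\text{or}\qquad t=\frac{3+i-(1+3i)}{4}=\frac{1-i}{2},$$ and since $t=e^{iz}$, taking logarithms gives $z=\frac{\pi}{4}-\frac{i\ln 2}{2}+2\pi k$ or $z=-\frac{\pi}{4}+\frac{i\ln 2}{2}+2\pi k$ for $k\in\mathbb{Z}$, consistent with the pairs quoted in the other answer below.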
• Why can you replace $t$ with $z$ on the LHS of that quadratic formula, when $t = e^{iz}$? Jul 27, 2015 at 9:33
Two solutions of the system above: $(\pi/4, -\ln(2)/2)$ and $(-\pi/4, \ln(2)/2)$, among others.
One possible "elementary method" might start:
$$\cos(x+iy) = \cos(x)\cos(iy)-\sin(x)\sin(iy),$$
which simplifies to
$$\cos(x)\cosh(y)-i\sin(x)\sinh(y).$$
Now let
$$\cos(x)\cosh(y)-i\sin(x)\sinh(y) \equiv \frac{3}{4}+\frac{i}{4},$$
which gives $$\cos(x)\cosh(y)=3/4$$ and $$\sin(x)\sinh(y)=-1/4.$$
You may be able to solve for $x$ and $y$.
• I tried this approach but I got bogged down since I went off on a tangent using this. Thank you!
– 1028
Jan 26, 2014 at 22:14
• I highly doubt that a human would be able to solve those nonlinear equations in a reasonable amount of time: see W|A's solutions Jan 26, 2014 at 22:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985166192054749, "perplexity": 580.3396185575372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103331729.20/warc/CC-MAIN-20220627103810-20220627133810-00682.warc.gz"} |
https://math.stackexchange.com/questions/2727161/find-all-irreducible-polynomials-of-degrees-2-and-3-in-bbbz-2x | Find all irreducible polynomials of degrees $2$ and $3$ in $\Bbb{Z}_{2}[x]$.
Find all irreducible polynomials of degrees $2$ and $3$ in $\Bbb{Z}_{2}[x]$.
A polynomial $p(x)$ of degree $2$ or $3$ is irreducible if and only if it does not have linear factors. Therefore, it suffices to show that $p(0) = p(1) = 1$. This quickly tells us that $x^2 + x + 1$ is the only irreducible polynomial of degree $2$. This also tells us that $x^3 + x^2 + 1$ and $x^3 + x + 1$ are the only irreducible polynomials of degree $3$.
The part I don't understand is "it suffices to show $p(0)=p(1)=1$". Could someone please explain what this means and how this finds the irreducible polynomials?
• If you are looking for an irreducible polynomial of degree 3 or 2 in a field, you just have to show it has no zeros. – Sorfosh Apr 8 '18 at 0:53
Transform the "if and only if" statement as follows.
A polynomial $p(x)$ of degree $2$ or $3$ is reducible if and only if it has linear factor.
By the factor theorem,
$p(x)$ has linear factor iff $p(0) = 0$ or $p(1) = 0$.
Take the negation on both sides.
$p(x)$ has no linear factor iff $p(0) \ne 0$ and $p(1) \ne 0$.
Since we are in $\Bbb{Z}_2$, the RHS can be simply written as follows.
$p(x)$ has no linear factor iff $p(0) = p(1) = 1$.
In $\Bbb{Z}_2[X]$,
• $p(0) = 1$ iff the constant term is $1$.
• $p(1) = 1$ iff the polynomial contains odd number of terms.
This helps us to select the irreducible polynomials of degree $\le 3$ by eliminating those with even number of terms and/or no constant term.
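As a quick illustration (a sketch of my own, not from the original answer), one can brute-force the degree-3 check in a few lines of Matlab/Octave:

% List monic cubics over Z_2 with p(0) = p(1) = 1, i.e. the irreducible ones
for c = 0:7
    a = bitget(c, [3 2 1]);            % coefficients [a2 a1 a0] of x^2, x, 1
    p0 = mod(a(3), 2);                 % p(0) = a0
    p1 = mod(1 + sum(a), 2);           % p(1) = 1 + a2 + a1 + a0 (mod 2)
    if p0 == 1 && p1 == 1
        fprintf('x^3 + %d*x^2 + %d*x + %d\n', a(1), a(2), a(3));
    end
end

The two lines it prints correspond exactly to $x^3+x+1$ and $x^3+x^2+1$.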
• Thank you for giving a very clear answer, I understand now... the $0$ and $1$ are elements in $\Bbb{Z}_{2}$. But how does this theorem find the irreducible polynomials? – numericalorange Apr 8 '18 at 1:02
• @numericalorange I've edited my answer to explain how to use $p(0)$ and $p(1)$ to check irreducibility of $p$. The Factor Theorem allows us to find linear factors of a polynomial by substituting values into the indeterminate. – GNUSupporter 8964民主女神 地下教會 Apr 8 '18 at 1:08
• Thanks! Very interesting read! – numericalorange Apr 8 '18 at 1:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8539921045303345, "perplexity": 140.11768745787617}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000353.82/warc/CC-MAIN-20190626134339-20190626160339-00477.warc.gz"} |
https://resonaances.blogspot.com/2011/04/xenon100-nothing.html | Thursday, 14 April 2011
Xenon100: Nothing
The most anticipated experimental result of Spring 2011 is out now. XENON100 just released the results of the dark matter search based on 100 days of data-taking with a xenon target in 2010. Here is what they see:
The plot shows all events that pass the quality cuts. The x-axis corresponds to the measured recoil energy determined by counting the number of scintillation photons in the event, the so-called S1. (There is an important companion paper fixing the relation between recoil and S1 at low energies where previous experimental results have been somewhat confusing). Most of the events in the plot are due to photons scattering on electrons from the xenon atoms. The way to distinguish those from the more interesting nuclear recoils (expected when a dark matter particle scatters) is by simultaneously measuring the number of ionization electrons, the so-called S2. Nuclear recoils typically lead to a smaller ratio of S2/S1 (the grey area in the plot). Therefore one makes a cut on S2/S1 (the dashed horizontal line) defining the signal region such that most electron recoils are rejected while the bulk of nuclear recoils is retained. At the end of the day one finds 3 events in the signal window (red points) while the expected background, mostly from spillover of electron recoils, is estimated to be 1.8 ± 0.6. Once again, no signal :-( Instead, we have new limits on the dark matter - nucleon cross section
For a 100 GeV dark matter particle the limit is around 10^-44 cm2, 3 times better than the previous limits from CDMS and Edelweiss. For light dark matter the improvement seems to be even better, more than an order of magnitude, which further disfavors dark matter interpretations of the CoGeNT and DAMA signals.
Actually, the paper mentions in passing that the analysis leading to these limits was not completely blind. After opening the box, there were many events at small recoil energy of which 3 fell into the signal region, which would make 6 signal events in total. However after investigating these 3 additional events the collaboration decided they were static from the electric can opener ;-), and devised additional cuts to get rid of them.
So what do these results tell us about the WIMP dark matter? At which point should we start to worry that we're on the wrong track? Unfortunately, there is no sharp prediction for the dark matter cross section. The most appealing possibility – a weak scale dark matter particle interacting with matter via Z-boson exchange - leads to the cross section of order 10^-39 cm2 which was excluded back in the 80s by the first round of dark matter experiments. There exists another natural possibility for WIMP dark matter: a particle interacting via Higgs boson exchange. This would lead to the cross section in the 10^-42-10^-46 cm2 ballpark (depending on the Higgs mass and on the coupling of dark matter to the Higgs). This generic possibility is now getting disfavored thanks to Xenon100's efforts, unless the Higgs is heavier than we expect. Therefore, even though models predicting the cross section below 10^-44 cm2 certainly do exist, it may be a good moment to start thinking more seriously about alternatives to WIMP. In the worst case dark matter may be very weakly interacting (axions, gravitinos) or very light (keV-MeV scale dark matter), in which case the current approach to direct detection is doomed from the start.
NW said...
Jester - are you talking about cross section, or cross section per nucleon? The cross section per nucleon for a Z-boson is I think around 10^-39 (see Essig 0710.1668 eq 1) and for a Higgs boson I think it's more like 10^-45 (ish) see e.g., Davoudiasl et al 0405097 fig 2. So I think the region we're about to get into is the Higgs mediated region.
Jester said...
I meant per nucleon...indeed I screwed with Z. I'll doublecheck the Higgs...
Anonymous said...
Of course there's always the possibility that we are living in a region of low dark matter density inside the halo.
This is always a possibility to revive your favorite weak scale dark matter model.
Kea said...
The poor zombie stringy susy just keeps on going regardless, heh? What we need now is a susy reduction combination plot for multiple experiments: LHC, Xenon, etc.
Anonymous said...
It is statistically improbable for us to be in a region of abnormally low dark matter density. It seems cowardly to invoke particles with arbitrarily low cross section. The whole thing starts to feel like aether – something that absolutely ought to be there, but ain't. And it can get even worse - http://physics.aps.org/articles/v4/23
chris said...
"It is statistically improbable for us to be in a region of abnormally"
anthropic principle to the rescue ^_^
Anonymous said...
Maybe the dark halo is very different to the standard one. See http://arxiv.org/abs/1103.6091: a WIMP with mass in the TeV range and a rotating dark disk could be a viable solution for DAMA and the recoils measured by the other experiments.
Jester said...
@NW: Concerning the Higgs exchange, for example hep-ph/0011335 quotes σ ~(DM-Higgs coupling)^2*(120 GeV/mh)^4*(100 GeV/mDM)^2*10^-42cm2 . Davoudiasl et al must be using a smaller coupling, but I can't figure out what exactly they put in.
Anonymous said...
There is no large under/overdensity in the DM halo, according to formation with the CDM paradigm with newtonian gravity: the halo turns out to be quite smooth (there are also experimental constraints from stellar kinematics.) The local density can then be estimated as 0.4+-0.2 GeV/cm3 (at 1 sigma!). Ref is http://inspirebeta.net/record/849062
Alejandro Rivero said...
Kea, do you include my version of susy among the zombie stringy ones?
Kea said...
Alejandro, only if you predict zombie superpartners.
NW said...
Hi Jester - I think Davoudiasl is consistent with 0011335. Note that if you look at the plots, the characteristic cross section when normalizing relic abundance is typically ~10^-45 or a bit lower for heavier WIMPs and MH=O(120). But, I agree, it's a funny way of writing the cross section because the coupling is characteristically smaller than O(1) to DM.
The way I think about it (which is certainly a feeble approximation) is that Z's have O(1) couplings to matter, while the effective Higgs Yukawa is ~1/3 x mp/v ~ 10^-3 (where the 1/3 is uncertain and comes from people who know how to calculate such things). So if Z's give ~10^-39, I expect Higgses to give (10^-3)^2 x 10^-39 ~ 10^-45. That's obviously super hand-wavy but from my perspective we're just now getting into what I think of as the rich part of the "Higgs-mediated" region for thermal WIMPs.
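For what it's worth, the scaling estimate in that last comment is a one-liner (rough numbers only, echoing the values quoted above rather than anything more precise):

```python
sigma_Z = 1e-39          # cm^2 per nucleon, characteristic Z-exchange value
coupling_ratio = 1e-3    # effective Higgs-nucleon Yukawa vs the O(1) Z coupling
sigma_higgs = sigma_Z * coupling_ratio ** 2
print(f"{sigma_higgs:.0e} cm^2")   # ~1e-45 cm^2, the "Higgs-mediated" ballpark
```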
http://mathonline.wikidot.com/the-rotating-drum-problem
# The Rotating Drum Problem
Suppose we have a drum that rotates and contains $16$ compartments. Within each compartment we can place either the binary digit $0$ or the binary digit $1$. We want to find an ordering of binary digits such that all $16$ of the binary sequences of length $4$ appear as consecutive chunks on the rotating drum:
To solve this problem, let's first make a digraph on the eight $3$-bit sequences according to the rule that any binary word $abc$ is adjacent to $bc0$ and $bc1$. For example, $000$ is adjacent to $000$ and $001$, and $110$ is adjacent to $100$ and $101$. We thus get the following digraph:
Imagine each $3$-bit binary word having its digits shifted to the left, with the last digit replaced by either a $0$ or a $1$ (by our rule above). We thus label the edges with $0$'s or $1$'s depending on what the appended digit is. Notice that the first digit "falls off" and the second digit is shifted to the position of the first digit.
Now notice that $\deg^+(v) = 2$ and $\deg^-(v) = 2$ for every vertex $v$. Hence this graph must be Eulerian. We will now find an Eulerian trail. For example:
(1)
\begin{align} \quad 000 \rightarrow 000 \rightarrow 001 \rightarrow 010 \rightarrow 100 \rightarrow 001 \rightarrow 011 \rightarrow 110 \rightarrow 101 \rightarrow 010 \rightarrow 101 \rightarrow 011 \rightarrow 111 \rightarrow 111 \rightarrow 110 \rightarrow 100 \rightarrow 000 \end{align}
Now we will write the corresponding values of the arcs to get $0100110101111000$. We thus get the following arrangement:
| Binary Word | Decimal Equivalent |
|---|---|
| $0100$ | $4$ |
| $1001$ | $9$ |
| $0011$ | $3$ |
| $0110$ | $6$ |
| $1101$ | $13$ |
| $1010$ | $10$ |
| $0101$ | $5$ |
| $1011$ | $11$ |
| $0111$ | $7$ |
| $1111$ | $15$ |
| $1110$ | $14$ |
| $1100$ | $12$ |
| $1000$ | $8$ |
| $0000$ | $0$ |
| $0001$ | $1$ |
| $0010$ | $2$ |
Hence we can label our rotating drum in the following manner:
Any $4$ consecutive compartments of the rotating drum will produce one of the unique $4$-bit binary words (note that there are $16$ such words, corresponding to the decimal values $0$ through $15$).
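The construction above is mechanical enough to automate. Below is a sketch (my own code, following the article's method rather than taken from it) that builds the digraph on $3$-bit words, extracts an Eulerian circuit with Hierholzer's algorithm, and checks that the resulting $16$-bit labelling contains every $4$-bit word:

```python
from itertools import product  # only used in the final check

def drum_sequence(k=3):
    """Eulerian-circuit construction of a 2**(k+1)-bit drum labelling."""
    out = {w: [0, 1] for w in range(2 ** k)}   # unused outgoing arc labels
    mask = 2 ** k - 1
    circuit, stack = [], [(0, None)]           # start the walk at word 000
    while stack:
        w, label = stack[-1]
        if out[w]:                             # Hierholzer: extend the walk
            b = out[w].pop()
            stack.append((((w << 1) | b) & mask, b))
        else:                                  # dead end: record the arc label
            stack.pop()
            if label is not None:
                circuit.append(label)
    return circuit[::-1]                       # arc labels in trail order

seq = drum_sequence()
print("".join(map(str, seq)))                  # one valid 16-bit labelling
n = len(seq)
windows = {tuple(seq[(i + j) % n] for j in range(4)) for i in range(n)}
assert windows == set(product([0, 1], repeat=4))   # all 16 words appear
```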
https://www.physicsforums.com/threads/uncertainty-principle-with-time-and-frequency.613008/

# Uncertainty Principle with time and frequency
1. Jun 10, 2012
### Lord_Sidious
ΔxΔp ≥ $\frac{h}{4\pi}$
Since Δx=ct for a photon and Δp=(mv$_{f}$-mv$_{i}$)
Then ct(mv$_{f}$-mv$_{i}$) ≥ $\frac{h}{4\pi}$
Since mv=$\frac{h}{\lambda}$
You have ct($\Delta$$\lambda$)$^{-1}$h ≥ $\frac{h}{4\pi}$
Planck's constant cancels, move the c over $\lambda$, $\frac{c}{\lambda}$=f
This leaves you with t$\Delta$f ≥ $\frac{1}{4\pi}$
Dimensional analysis checks. Is this correct and is there any use to this equation?
t ≥ (4$\pi$Δf)$^{-1}$
2. Jun 11, 2012
### Simon Bridge
It's usually put in the form: $\Delta E \Delta t \geq \frac{\hbar}{2}$ (note that E = hf for a photon - giving your relation.) It is used for the lifetime of excited states in solids and the range of fundamental interactions.
I'd like to point out that you don't get to do p=mv for a photon though.
3. Jun 11, 2012
### f95toli
Also, the "time-frequency uncertainty relation" is just a natural consequence of Fourier analysis; it is a mathematical result which does not rely on anything from physics.
It is sometimes known as "the mathematical uncertainty principle"
See e.g.
http://www.ams.org/samplings/feature-column/fcarc-uncertainty
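As a quick numerical illustration of that Fourier-analysis fact (a sketch of my own, not part of the linked column): a Gaussian pulse saturates the bound, with the RMS duration and RMS bandwidth of its intensity profiles multiplying to exactly $1/4\pi$.

```python
import numpy as np

t = np.linspace(-50.0, 50.0, 2 ** 14)
x = np.exp(-t ** 2 / (2 * 2.0 ** 2))             # Gaussian pulse, sigma = 2
X = np.fft.fftshift(np.fft.fft(x))
f = np.fft.fftshift(np.fft.fftfreq(t.size, t[1] - t[0]))

def rms_width(axis, density):
    p = density / density.sum()                  # treat |.|^2 as a discrete pdf
    mean = (axis * p).sum()
    return np.sqrt(((axis - mean) ** 2 * p).sum())

sigma_t = rms_width(t, np.abs(x) ** 2)
sigma_f = rms_width(f, np.abs(X) ** 2)
print(sigma_t * sigma_f, 1 / (4 * np.pi))        # both ~0.0796
```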
4. Jun 11, 2012
### Dickfore
What is meant by t in this equation? The meaning of Δx is the uncertainty in the position of the particle. But, photons do not have a defined position.
Again, Δp is the uncertainty in momentum, not the change of momentum equal to final - initial momentum. Furthermore, the momentum of the photon is not calculated by the non-relativistic formula $p = m v$. However, your last formula (in this quotation) is correct, if you ignore the intermediate result $m v$, which you never use after that.
You have made a mistake here. If momentum is calculated by the De Broglie relation $p = h/\lambda$, then, an uncertainty in wavelength Δλ implies, by the error propagation formula:
$$\Delta p = \left\vert \frac{d p}{d \lambda} \right\vert \, \Delta \lambda = h \, \frac{\Delta \lambda}{\lambda^2}$$
Planck's constant does cancel. However, because your previous formula was incorrect, so is this one. The corrected version is obtained by going from wavelength to frequency via the error propagation formula:
$$c \, t \, \frac{\Delta \lambda}{\lambda^2} \ge \frac{1}{4 \pi}$$
$$\lambda = \frac{c}{f}, \ \Delta \lambda = \left \vert \frac{d \lambda}{d f} \right\vert \, \Delta f = \frac{c \, \Delta f}{f^2}$$
$$c \, t \, \frac{\frac{c \, \Delta f}{f^2}}{\frac{c^2}{f^2}} \ge \frac{1}{4 \pi}$$
$$t \, \Delta f \ge \frac{1}{4 \pi}$$
but the method by which you derived it is incorrect.
5. Jun 11, 2012
### Lord_Sidious
Thanks for clarifying it. What exactly can be done with this now? The "t" is time, but I don't understand the time of what? The time of the change in frequency? I have done some calculations and for a change in frequency from say, UV to violet light, the equation gives:
t ≥ 8.6 attoseconds
But if you reverse it, then the change in frequency from violet light to UV, the equation gives:
t ≥ -8.6 attoseconds
6. Jun 11, 2012
### Dickfore
Again, $\Delta f$ is not the change in frequency. It is the uncertainty with which you know the frequency. Then, t is the minimum time you need to measure the wave-train to achieve the given precision.
7. Jun 11, 2012
### Simon Bridge
The "delta" is not a "change in" - it is the uncertainty in some measurement - a statement of how imprecise something is known. So $\Delta f$ would be the uncertainty in frequency.
Your calculation shows that if you were making measurements of frequency which lead you to be uncertain to the extent that it could be violet or UV or something in between, then the smallest you could be uncertain about a related time measurement must be 8.6 attoseconds.
Now consider what this means in terms of the wave-nature of a particle like the virtual particle that mediates forces? Or the energy-width of an electron orbital and its stability?
8. Jun 11, 2012
### Lord_Sidious
Thanks everyone, this helps.
9. Jun 12, 2012
### professorscot
Musical applications
The uncertainty relationship between time and frequency has interesting musical applications. The lower the frequency of a pitch, the longer you must hear it in order to ascertain its frequency. That's why a piccolo can play short notes but a tuba can't. A piccolo requires only a tiny fraction of a second to complete several cycles and clearly define its frequency. Notes at the bottom of a piano are only around 30 Hz. If played for a duration of, say, 1/40 of a second, you don't even get a chance to hear a full wavelength, leading to uncertainty about the pitch.
Likewise in musical sampling. When digitizing a waveform, it must be sampled at least twice per cycle in order to discern a particular frequency. For frequencies that are high in the range of human hearing, like 4,000 Hz, the waveform must be recorded at a rate of at least 8,000 Hz. Any lower sampling rate will cut off the high frequencies and result in a dull, muffled sound.
These examples are, to me, some of the clearest ways to illustrate what the uncertainty principle is "really saying." Heisenberg used a very similar example to demonstrate why his location / momentum principle was sensible, though in his case it involved EM waves, not sound. Other aspects of the HUP baffle me! I'll be posting my own question soon, haha.
https://www.physicsforums.com/threads/relativistic-cyclotron-frequency.235611/

# Relativistic cyclotron frequency
1. May 16, 2008
### jdstokes
Since the acceleration is transverse to the velocity, should we consider the transverse mass in the formula $mv^2/r$, i.e.
$\gamma \frac{mv^2}{r} = qvB \implies \frac{v}{\sqrt{1-(v/c)^2}} = \frac{qBr}{m}$?
2. May 17, 2008
### lzkelley
How can you express the centripetal force in terms of momentum (with other terms)?
What is the relativistic momentum?
3. May 17, 2008
### pmb_phy
Yes. But just to be clear, using your symbols, the transverse mass = gamma*m.
Pete
4. May 17, 2008
### pam
It's clearer in terms of momentum: $pv/r = qvB \implies p = qBr$.
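To put numbers on that relation (a sketch with an assumed field and radius, not values from the thread): once $p = qBr$ is known, $\gamma$, $v$, and the revolution frequency follow directly.

```python
import math

q, m, c = 1.602e-19, 9.109e-31, 2.998e8    # electron, SI units
B, r = 1.0, 0.01                           # assumed: 1 T field, 1 cm radius
p = q * B * r                              # relativistic momentum from p = qBr
gamma = math.sqrt(1 + (p / (m * c)) ** 2)  # since E^2 = (pc)^2 + (mc^2)^2
v = p / (gamma * m)                        # p = gamma * m * v
omega = q * B / (gamma * m)                # angular cyclotron frequency
print(gamma, v / c, omega)                 # ~5.9, ~0.986, ~3.0e10 rad/s
```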
https://www.dsprelated.com/freebooks/pasp/FDA_Frequency_Domain.html

#### FDA in the Frequency Domain
Viewing Eq.(7.2) in the frequency domain, the ideal differentiator transfer-function is $H(s) = s$, which can be viewed as the Laplace transform of the operator $d/dt$ (left-hand side of Eq.(7.2)). Moving to the right-hand side, the z transform of the first-order difference operator is $(1 - z^{-1})/T$. Thus, in the frequency domain, the finite-difference approximation may be performed by making the substitution

$$s = \frac{1 - z^{-1}}{T} \qquad\qquad (8.3)$$
in any continuous-time transfer function (Laplace transform of an integro-differential operator) to obtain a discrete-time transfer function (z transform of a finite-difference operator).
The inverse of substitution Eq.(7.3) is

$$z = \frac{1}{1 - sT}.$$

As discussed in §8.3.1, the FDA is a special case of the matched transformation applied to the point $s = 0$ (i.e., $z = 1$).
Note that the FDA does not alias, since the conformal mapping is one to one. However, it does warp the poles and zeros in a way which may not be desirable, as discussed further below.
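A small numerical sketch of the substitution (my own example, not from the book): discretize a one-pole analog filter with the FDA, compare frequency responses, and locate the mapped pole $z = 1/(1 + aT)$, which is the image of the analog pole $s = -a$ under the map.

```python
import numpy as np

a, T = 2 * np.pi * 100.0, 1.0 / 8000.0       # analog pole at 100 Hz, fs = 8 kHz
f = np.logspace(0, 3, 200)                   # evaluate from 1 Hz to 1 kHz
s = 1j * 2 * np.pi * f
z = np.exp(s * T)
H_analog = 1 / (s + a)                       # one-pole lowpass H(s) = 1/(s + a)
H_fda = 1 / ((1 - 1 / z) / T + a)            # substitute s -> (1 - z^-1)/T
err = np.max(np.abs(H_fda - H_analog) / np.abs(H_analog))
print(f"max relative deviation up to 1 kHz: {err:.3f}")
print("digital pole at z =", 1 / (1 + a * T))   # slightly warped vs exp(-a*T)
```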
https://dsp.stackexchange.com/questions/60190/matlab-fir2-npt-and-lap

# MATLAB fir2 - npt and lap
This might not be the right place to ask this, but I'm hoping someone can explain two of the arguments in the MATLAB fir2 function. It is a function for designing filters using the frequency sampling method. There are two optional arguments:
npt - Number of grid points, specified as a positive integer scalar. npt must be larger than one-half the filter order: npt > n/2.
lap - Length of region around duplicate frequency points, specified as a positive integer scalar.
I am struggling to find any meaningful literature around this so if anyone could explain these two in further detail or point me to something to read that would be greatly appreciated.
npt is the number of frequency points that are used to define the desired frequency response. It's the length of the inverse FFT that is applied to the frequency domain data. It defaults to $$512$$ points, but if you want to design very long filters (with many taps) then you should choose a larger number.
lap defines the width of the transition band, if there is one. There is a transition band if there are two equal frequencies specified in the vector f, where a step in the desired response occurs. The wider the transition band the smaller the approximation error in the pass bands and stop bands, so there is a trade-off between steep transitions and approximation errors in the pass bands and stop bands.
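For what it's worth, SciPy's `firwin2` is the closest analogue I know of (a sketch; the parameter correspondence is my reading, and there is no direct `lap` equivalent since the transition region is given explicitly in the breakpoint list):

```python
import numpy as np
from scipy.signal import firwin2, freqz

# lowpass with an explicit 0.3-0.35 transition band (frequencies in units
# of Nyquist, as in fir2); nfreqs plays the role of npt
taps = firwin2(numtaps=65,
               freq=[0.0, 0.3, 0.35, 1.0],
               gain=[1.0, 1.0, 0.0, 0.0],
               nfreqs=513)                 # interpolation grid, > numtaps
w, h = freqz(taps)
print(np.max(np.abs(h)))                   # ~1 in the passband
```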
https://math.stackexchange.com/questions/1848984/if-the-entries-of-a-positive-semidefinite-matrix-shrink-individually-will-the-o/1849349

# If the entries of a positive semidefinite matrix shrink individually, will the operator norm always decrease?
Given a positive semidefinite matrix $P$, if we scale down its entries individually, will its operator norm always decrease? To put it another way:
Suppose $P\in M_n(\mathbb R)$ is positive semidefinite and $B\in M_n(\mathbb R)$ is a $[0,1]$-matrix, i.e. $B$ has all entries between $0$ and $1$ (note: $B$ is not necessarily symmetric). Let $\|\cdot\|_2$ denotes the operator norm (i.e. the largest singular value). Is it always true that $$\|P\|_2\ge\|P\circ B\|_2?\tag{\ast}$$
Background. I ran into this inequality in another question. Having done a numerical experiment, I believed the inequality is true, but I hadn't been able to prove it. If $(\ast)$ turns out to be true, we immediately obtain the analogous inequality $\rho(P)\ge\rho(P\circ B)$ for the spectral radii because $\rho(P)=\|P\|_2\ge\|P\circ B\|_2\ge\rho(P\circ B)$.
Remarks. There is much research on inequalities about spectral radii or operator norms of Hadamard products. Often, either all multiplicands in each product are semidefinite or all of them are nonnegative. Inequalities like those two here, which involve mixtures of semidefinite matrices with nonnegative matrices, are rarely seen.
I have tested the inequality for $n=2,3,4,5$ with 100,000 random examples for each $n$. No counterexamples were found. The semidefiniteness condition is essential. If it is removed, counterexamples with symmetric $P$s can be easily obtained. The inequality is known to be true if $P$ is also entrywise nonnegative. So, if you want to carry out a numerical experiment to verify $(\ast)$, make sure that the $P$s you generate have both positive and negative entries.
One difficulty I met in constructing a proof is that I couldn't make use of the submultiplicativity of the operator norm. Note that equality occurs if $B$ is the all-one matrix, which has spectral norm $n\,(>1)$. If you somehow manage to extract a factor like $\|B\|_2$ from $\|P\circ B\|_2$, that factor may be too large. For a similar reason, the triangle inequality also looks useless.
• I take it that here $\circ$ means Hadamard (entrywise) product, right? – Nick Alger Jul 5 '16 at 2:19
• @NickAlger Yes, precisely. – user1551 Jul 5 '16 at 2:26
• There is some theory for the sensitivity of singular values with respect to changes in the entries of a matrix. One approach might be to consider a parameterized path $M$ between the matrices in the inequality, $M(s) = sP + (1-s)(P \circ B)$, and show that at each point on the path the derivative of the singular values is positive. – Nick Alger Jul 5 '16 at 2:39
• Here's a paper about singular value sensitivity, maybe it can be of use, particularly equation (3) in section 4: users.math.msu.edu/users/markiwen/Teaching/MTH995/Papers/… – Nick Alger Jul 5 '16 at 3:36
• @user1551: You're obviously correct. I had the inequality going the wrong way in my head. I have deleted my comment as a result. – J. Loreaux Jul 5 '16 at 16:53
It is not always true.
The following matrix is positive semidefinite with norm $3$: $$P := \left(\begin{array}{ccc} 2 & 1 & 1\\ 1 & 2 & -1\\ 1 & -1 & 2\\ \end{array}\right)$$ Use $B$ to poke out the $-1$'s and you get $$P \circ B = \left(\begin{array}{ccc} 2 & 1 & 1\\ 1 & 2 & 0\\ 1 & 0 & 2\\ \end{array}\right),$$ which is positive semidefinite with norm $2 + \sqrt{2} > 3$.
• Congrats and +1. This is a great counterexample! – user1551 Jul 5 '16 at 4:20
• This is a good lesson to learn. In my numerical experiment, the entries of $B$s were random floating-point numbers. Actually I was careful enough to re-do the original experiment with many more (a million) samples, but no counterexamples were found. The idea to generate random samples on the boundary (i.e. to use random $\{0,1\}$ matrices rather than random $[0,1]$-matrices) just didn't occur to me. Once I changed the code to sample integer $B$, counterexamples easily popped up. – user1551 Jul 5 '16 at 4:30
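A quick numpy confirmation of the accepted counterexample (my own code, not from the thread; note that `*` on numpy arrays is the Hadamard product):

```python
import numpy as np

P = np.array([[2., 1., 1.],
              [1., 2., -1.],
              [1., -1., 2.]])
B = np.array([[1., 1., 1.],
              [1., 1., 0.],
              [1., 0., 1.]])
print(np.linalg.eigvalsh(P))          # [0, 3, 3]: P is PSD
print(np.linalg.norm(P, 2))           # 3.0
print(np.linalg.norm(P * B, 2))       # 3.414... = 2 + sqrt(2) > 3
```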
I think this is true. Here's an attempt that looks potentially fruitful:
Use the property given here. That is, note that $$\DeclareMathOperator{\tr}{tr} \| P \circ B\| = \sup_{\|x\| = \|y\| = 1} x^*(P \circ B)y = \sup_{\|x\| = \|y\| = 1} \tr (D_x P D_y B^T) = \\ \sup_{\|x\| = \|y\| = 1} \langle D_x P D_y,B \rangle \leq \sup_{\|x\| = \|y\| = 1} \left(\sum_{i,j}|(D_x P D_y)_{ij}|\right)\max_{i,j}|B_{ij}|$$ here, $\langle \cdot , \cdot \rangle$ is an entry-wise dot-product, and $D_x = \operatorname{diag}(x_1,\dots,x_n)$. From here, maybe you can use the fact that $P$ can be written as a convex combination of PSD rank $1$ matrices. Perhaps it's useful to note that $D_x vv^T D_y = (x \circ v) (y \circ v)^T$.
• Thanks and +1 for the characterisation $\|P\circ B\|=\sup_{\|x\|=\|y\|=1}\operatorname{tr}(D_xAD_yB^T)$ (I desperately wanted a characterisation of this kind, but it just didn't pop up in my mind), but as it currently stands, the later part of your approprach won't work: when $B$ is the all-one matrix, the RHS of your inequality will overshoot, as it becomes the norm of the entrywise absolute value of $P$, which can be strictly greater than $\|P\|$ when $n\ge3$. – user1551 Jul 5 '16 at 2:24
• Sorry, my $P$ became an $A$. Glad you think the characterization is useful. And interesting observation of why my approach fails; I wouldn't have put together that it's the norm of $|P|$. I was hoping some kind of Hölder's would work here, but it seems that neither the entry-wise nor the Schatten (singular value) versions work. – Ben Grossmann Jul 5 '16 at 2:40
• If there is no Hölder's inequality or version thereof that does the job, my next guess would be to try moving the $D_x$ and $D_y$ across the inner product, try the case where $B$ is rank $1$, or try calculus... somehow. That's all I have time for now, but hopefully that helps if you're out of ideas. – Ben Grossmann Jul 5 '16 at 2:48
• @user1551 just occurred to me: if $|P|$ (entrywise absolute value) has a greater norm than $P$, then we have a counterexample with $B_{ij} = \operatorname{sign}(P_{ij})$ – Ben Grossmann Jul 5 '16 at 18:46
The sign matrix may have negative entries. So it may not be qualified as a counterexample. But your argument does succinctly point out why the complex version of the statement (or the real version with a $[−1,1]$ matrix $B$) is false. – user1551 Jul 5 '16 at 19:39
http://math.stackexchange.com/questions/17979/dot-product-and-orthogonal-complement

# Dot Product and Orthogonal Complement
Let V be the vector space of all real-valued bounded sequences. Then for $a,b \in V$, $\langle a,b \rangle :=\sum _{n=1}^{\infty } \frac{a(n) b(n)}{n^2}$ defines a dot product. Find a subspace $U \subset V$ with $U \neq 0$, $U \neq V$, $U^\bot=0$.
I couldn't find anything that works, thank you in advance. $U^\bot$ is the orthogonal complement of $U$, in case the notation is confusing.
Try the subspace $U$ of all sequences with only finitely many non-zero terms. – t.b. Jan 18 '11 at 13:47
I see, if you have a sequence that has only finitely many zero-terms you can always find a corresponding sequence with finitely many non-zero terms where some $a(n_0) \cdot b(n_0) \neq 0$ right? – Listing Jan 18 '11 at 13:52
@Theo: you should make that hint an answer. – Willie Wong Jan 18 '11 at 13:56
@user3123: that is not quite what you want. You want to show $U^\perp = 0$, which means that you want to show for any given non-zero sequence $a\in V$, there exists some $b$ with only finitely many non-zero terms such that $\langle a,b\rangle\neq 0$. Then you want to show that there exists some element in $V$ that is not in $U$. – Willie Wong Jan 18 '11 at 14:00
Ok so thats not hard to show? If $a \in V$ is a non-zero sequence there is a $n_0 \in \mathbb{N}$ such that $a(n_0) \neq 0$. Then you take $b(n) := \delta_{n,n_0}$ and then $\langle a,b\rangle\neq 0$, $b$ has only 1 non-zero term. And the sequence $c = \{0,1,1,1,\ldots \}$ is not in $U$. ( $\delta$ is en.wikipedia.org/wiki/Kronecker_delta ) – Listing Jan 18 '11 at 14:05
The sequences with only finitely many non-zero terms form a proper subspace $U$ of $V$. For $k \in \mathbb{N}$ let $\delta_{k} \in U$ be the sequence for which $\delta_{k}(k) = 1$ is the only non-zero entry. Now compute the scalar product (= dot product) $\langle \delta_{k}, b \rangle$ for all $k$ in order to see that $U^{\perp} = 0$. In other words: if $b$ is orthogonal to all $\delta_{k}$'s then $b$ must be zero.
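For completeness, the computation referred to is a single line: for any $b \in V$,

$$\langle \delta_k, b \rangle = \sum_{n=1}^{\infty} \frac{\delta_k(n)\, b(n)}{n^2} = \frac{b(k)}{k^2},$$

so $\langle \delta_k, b \rangle = 0$ for all $k$ forces $b(k) = 0$ for all $k$, i.e. $b = 0$.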
http://mathoverflow.net/questions/172231/rational-homology-and-finite-group-actions

# Rational homology and finite group actions
I'm looking for examples of the following phenomena. Let $X$ be a reasonable space (say, a CW complex) and $G$ be a finite group acting on $X$. For all $k \geq 1$, the projection map $X \rightarrow X/G$ induces a map $H_k(X;\mathbb{Q}) \rightarrow H_k(X/G;\mathbb{Q})$ which factors through the $G$-coinvariants $(H_k(X;\mathbb{Q}))_G$; let $\psi_k : (H_k(X;\mathbb{Q}))_G \rightarrow H_k(X/G;\mathbb{Q})$ be the resulting map. I want examples of $X$ and $G$ and $k$ such that $\psi_k$ is not an isomorphism.
If $G$ acts freely, then the map $X \rightarrow X/G$ is a finite regular covering map and $\psi_k$ is an isomorphism by (for instance) the Cartan-Leray spectral sequence (Theorem VII.7.9 in Brown's book on group cohomology). But I have no idea what happens for non-free actions. My guess is that if it were true that $\psi_k$ were always an isomorphism, then I would have seen it somewhere, so I expect that there is a counterexample. However, I have not managed to come up with one.
The maps $\psi_k$ are all isomorphisms; this is a simple application of the transfer ("averaging") construction. See Theorem 2.4, Chapter II, of Bredon's book "Introduction to compact transformation groups".
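For readers without the book at hand, here is the shape of the averaging argument (a compressed sketch of the standard construction; the precise statement should be checked against Bredon). The quotient map $\pi : X \to X/G$ admits a transfer homomorphism $\tau : H_k(X/G;\mathbb{Q}) \to H_k(X;\mathbb{Q})$ with

$$\pi_* \circ \tau = |G| \cdot \mathrm{id}, \qquad \tau \circ \pi_* = \sum_{g \in G} g_* .$$

Since $|G|$ is invertible over $\mathbb{Q}$, the first identity makes $\pi_*$ (hence $\psi_k$) surjective; and if $\pi_*(x) = 0$, the second identity gives $\sum_g g_* x = 0$, whose class in the coinvariants is $|G|\cdot[x]$, so $[x] = 0$ and $\psi_k$ is injective.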
http://mathhelpforum.com/statistics/66890-please-help-probability.html

These were the questions that I was stuck with when I was doing my probability homework, please help =)
1) A die is rolled twice and the resulting numbers are added
(a) Find the Probability that the sum is seven
(b) What event(s) will give a probability of 1/9?
2) A coin is tossed and a die is rolled.
(a) Find the event(s) that will give a probability of 1/12
(b) Find the event(s) that will give a probability of 1/6
Please explain how to do this question, because i'm seriously stuck :'D
Thanks for all the help!
2. Originally Posted by mathpro
Q1 (a) Use the dice table here: Dice table
(b) 1/9 = 4/36. Use the dice table to see what event(s) occurs 4 times ....
Q2 Try constructing a coin-die table similar to the dice table above.
(a) Use the coin-die table to see what events occur only once.
(b) 1/6 = 2/12. Use the coin-die table to see what events occur 2 times ....
You might also find this link useful: Grade 8: Independent Events
3. Thanks for the help
Thanks for the help, but I don't understand what you meant by the 4/36 and looking on the dice table to check out what event(s) occur 4 times.
But anyways thanks for the help, i appreciate it =)
4. Probability
Hello mathpro
Originally Posted by mathpro
Since there are 6 ways in which the die can land each time, there are 6 × 6 = 36 ways in which it can land when rolled twice. You might find it helpful to plot these ways on an x-y (Cartesian) diagram. Suppose that x represents the score on the first throw, and y the score on the second throw. Then the point (2, 5), for example, represents a score of 2 on the first throw and 5 on the second.
(a) Mark as dots all 36 possible points. They form a square pattern. Now put a little circle around any points giving a total score of 7 - the point (2, 5) will be one of them. These circled points form a simple pattern. So:
• How many points are circled?
• Given that the total number of points is 36, what is the probability that you get one of the circled points?
(b) For an event to have a probability of 1/9, how many points will you need to circle? Can you see any patterns of points that will make this happen? Can you see what the answers are now? (There are two possible answers.)
For question 2, try drawing another x-y diagram, except that this time x represents the score on the die (from 1 to 6) and y represents the flip of the coin (H or T). Mark as dots all 12 possible points. Then look for patterns that give probabilities of 1/12 and 1/6.
I hope you can do it now.
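If it helps to see where the numbers come from, here is a brute-force enumeration of the dice questions (my own code, not part of the thread):

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))           # all 36 outcomes
print(Fraction(sum(a + b == 7 for a, b in rolls), 36)) # Q1(a): 1/6

counts = {}
for a, b in rolls:
    counts[a + b] = counts.get(a + b, 0) + 1           # tally each sum
print([s for s, c in counts.items() if c == 4])        # Q1(b): sums 5 and 9
```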
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=18&t=47869

## 25.
$\lambda=\frac{h}{p}$
John Liang 2I
### 25.
How does one find the needed uncertainty in velocity of the electron?
Paige Lee 1A
### Re: 25.
Use Δp·Δx ≥ h/4π, then plug the resulting Δp value into the equation Δp = mΔv and solve for Δv.
905385366
### Re: 25.
I was looking at the solutions manual and the value for h is not Planck's constant (6.626×10^-34) but some other value. Can somebody explain why it is different?
### Re: 25.
For velocity in that problem you just need two equations: Δp·Δx ≥ h/4π is the first, and Δp = mΔv is the second. Knowing these are what you need, you can simply plug in the information you know and solve for the uncertainty in velocity.
Tiffany_Chen 2K
### Re: 25.
905385366 wrote: I was looking at the solutions manual and the value for h is not Planck's constant (6.626×10^-34) but some other value. Can somebody explain why it is different?
The value given should be h/4π, which is used in Heisenberg's indeterminacy equation.
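As a sketch of the arithmetic being discussed (the Δx below is an assumed value, since the textbook numbers for problem 25 aren't quoted in the thread):

```python
import math

h = 6.626e-34        # J s, Planck's constant
m_e = 9.109e-31      # kg, electron mass
dx = 1e-10           # m, assumed position uncertainty (about one atom)
dv = h / (4 * math.pi * dx * m_e)    # from Δp·Δx >= h/4π and Δp = mΔv
print(f"Δv >= {dv:.2e} m/s")         # ~5.8e5 m/s
```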
http://mathhelpforum.com/calculus/89288-inequation-integral-print.html

# Inequality with an integral
• May 16th 2009, 11:22 PM
gammafunction
Inequality with an integral
Hi! Does anybody know how one can prove that $\frac{c}{1+c^2}\cdot \exp(-\frac{c^2}{2}) \leq \int_c^\infty \exp(-\frac{z^2}{2})\, dz$ for $c>0$?
I am thankful for any ideas.
• May 17th 2009, 02:13 AM
NonCommAlg
Quote: Originally Posted by gammafunction
let $z=c\sqrt{2x+1}.$ then $I=\int_c^{\infty} \exp \left(\frac{-z^2}{2} \right) \ dz=c \exp \left(\frac{-c^2}{2} \right) \int_0^{\infty} \frac{e^{-c^2x}}{\sqrt{2x+1}} \ dx.$ but we know that $e^a \geq 1+a,$ for any $a \geq 0;$ taking $a=2x$ gives $e^{2x} \geq 1+2x,$ thus $\frac{1}{\sqrt{2x+1}} \geq e^{-x}$ and hence:
$I \geq c \exp \left(\frac{-c^2}{2} \right) \int_0^{\infty} e^{-(1+c^2)x} \ dx=\frac{c}{1+c^2} \exp \left(\frac{-c^2}{2} \right).$
it was a little tricky, wasn't it? (Wink)
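A quick numerical sanity check of the bound (my own code, just to see it in action for a few values of $c$):

```python
import numpy as np
from scipy.integrate import quad

for c in [0.1, 0.5, 1.0, 2.0, 5.0]:
    lhs = c / (1 + c ** 2) * np.exp(-c ** 2 / 2)        # the lower bound
    rhs, _ = quad(lambda z: np.exp(-z ** 2 / 2), c, np.inf)  # the tail integral
    print(f"c={c}: {lhs:.4e} <= {rhs:.4e}", lhs <= rhs)
```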
• May 17th 2009, 02:41 AM
gammafunction
Tricky and wonderful (Rofl). What a nice proof. Thank you a thousand times.
https://fr.mathworks.com/help/control/ug/about-passivity-and-passivity-indices.html

# About Passivity and Passivity Indices
Passive control is often part of the safety requirements in applications such as process control, tele-operation, human-machine interfaces, and system networks. A system is passive if it cannot produce energy on its own, and can only dissipate the energy that is stored in it initially. More generally, an I/O map is passive if, on average, increasing the output y requires increasing the input u.
For example, a PID controller is passive because the control signal (the output) moves in the same direction as the error signal (the input). But a PID controller with delay is not passive, because the control signal can move in the opposite direction from the error, a potential cause of instability.
Most physical systems are passive. The Passivity Theorem holds that the negative-feedback interconnection of two strictly passive systems is passive and stable. As a result, it can be desirable to enforce passivity of the controller for a passive system, or to passivate the operator of a passive system, such as the driver of a car.
In practice, passivity can easily be destroyed by the phase lags introduced by sensors, actuators, and communication delays. These problems have led to extensions of the Passivity Theorem that consider excesses or shortages of passivity, frequency-dependent measures of passivity, and a mix of passivity and small-gain properties.
### Passive Systems
A linear system $G\left(s\right)$ is passive if all input/output trajectories $y\left(t\right)=Gu\left(t\right)$ satisfy:
$$\int_0^T y^T(t)\, u(t)\, dt > 0, \qquad \forall T > 0,$$
where ${y}^{T}\left(t\right)$ denotes the transpose of $y\left(t\right)$. For physical systems, the integral typically represents the energy going into the system. Thus passive systems are systems that only consume or dissipate energy. As a result, passive systems are intrinsically stable.
In the frequency domain, passivity is equivalent to the "positive real" condition:
$$G(j\omega) + G^H(j\omega) > 0, \qquad \forall \omega \in R.$$
For SISO systems, this is saying that $Re\left(G\left(j\omega \right)\right)>0$ at all frequencies, so the entire Nyquist plot lies in the right-half plane.
`nyquist(tf([1 3 5],[5 6 1]))`
Nyquist plot of passive system
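The SISO positive-real condition for this example can also be checked by evaluating $G(j\omega)$ directly on a grid (a sketch in Python/numpy rather than MATLAB, and a finite grid rather than a proof):

```python
import numpy as np

w = np.logspace(-3, 3, 2000)
s = 1j * w
G = (s ** 2 + 3 * s + 5) / (5 * s ** 2 + 6 * s + 1)   # tf([1 3 5],[5 6 1])
print(G.real.min())        # > 0 at all sampled frequencies: positive real
```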
Passive systems have two properties that are important for control purposes: they are stable, and passivity is preserved by parallel and negative-feedback interconnections.
When controlling a passive system with unknown or variable characteristics, it is therefore desirable to use a passive feedback law to guarantee closed-loop stability. This task can be rendered difficult given that delays and significant phase lag destroy passivity.
### Directional Passivity Indices
For stability, knowing whether a system is passive or not does not tell the full story. It is often desirable to know by how much it is passive or fails to be passive. In addition, a shortage of passivity in the plant can be compensated by an excess of passivity in the controller, and vice versa. It is therefore important to measure the excess or shortage of passivity, and this is where passivity indices come into play.
There are different types of indices with different applications. One class of indices measures the excess or shortage of passivity in a particular direction of the input/output space. For example, the input passivity index is defined as the largest $\nu$ such that:
$$\int_0^T y^T(t)\, u(t)\, dt > \nu \int_0^T u^T(t)\, u(t)\, dt,$$
for all trajectories $y\left(t\right)=Gu\left(t\right)$ and $T>0$. The system G is input strictly passive (ISP) when $\nu >0$, and has a shortage of passivity when $\nu <0$. The input passivity index is also called the input feedforward passivity (IFP) index because it corresponds to the minimum static feedforward action needed to make the system passive.
In the frequency domain, the input passivity index is characterized by:
$$\nu = \frac{1}{2}\, \min_{\omega} \lambda_{\min}\big(G(j\omega) + G^H(j\omega)\big),$$
where ${\lambda }_{\mathrm{min}}$ denotes the smallest eigenvalue. In the SISO case, $\nu$ is the abscissa of the leftmost point on the Nyquist curve.
Similarly, the output passivity index is defined as the largest $\rho$ such that:
$$\int_0^T y^T(t)\, u(t)\, dt > \rho \int_0^T y^T(t)\, y(t)\, dt,$$
for all trajectories $y\left(t\right)=Gu\left(t\right)$ and $T>0$. The system G is output strictly passive (OSP) when $\rho >0$, and has a shortage of passivity when $\rho <0$. The output passivity index is also called the output feedback passivity (OFP) index because it corresponds to the minimum static feedback action needed to make the system passive.
In the frequency domain, the output passivity index of a minimum-phase system $G\left(s\right)$ is given by:
$$\rho = \frac{1}{2}\, \min_{\omega} \lambda_{\min}\big(G^{-1}(j\omega) + G^{-H}(j\omega)\big).$$
In the SISO case, $\rho$ is the abscissa of the leftmost point on the Nyquist curve of ${G}^{-1}\left(s\right)$.
Combining these two notions leads to the I/O passivity index, which is the largest $\tau$ such that:
$$\int_0^T y^T(t)\, u(t)\, dt > \tau \int_0^T \big(u^T(t)\, u(t) + y^T(t)\, y(t)\big)\, dt.$$
A system with $\tau >0$ is very strictly passive. More generally, we can define the index in the direction $\delta Q$ as the largest $\tau$ such that:
$$\int_0^T y^T(t)\, u(t)\, dt > \tau \int_0^T \begin{pmatrix} y(t)\\ u(t) \end{pmatrix}^T \delta Q \begin{pmatrix} y(t)\\ u(t) \end{pmatrix} dt.$$
The input, output, and I/O passivity indices all correspond to special choices of $\delta Q$ and are collectively referred to as directional passivity indices. You can use `getPassiveIndex` to compute any of these indices for linear systems in either parametric or FRD form. You can also use `passiveplot` to plot the input, output, or I/O passivity indices as a function of frequency. This plot provides insight into which frequency bands have weaker or stronger passivity.
There are many results quantifying how the input and output passivity indices propagate through parallel, series, or feedback interconnections. There are also results quantifying the excess of input or output passivity needed to compensate a given shortage of passivity in a feedback loop. For details, see the reference below.
### Relative Passivity Index
The positive real condition for passivity:
$$G(j\omega) + G^H(j\omega) > 0 \qquad \forall \omega \in R,$$
is equivalent to the small gain condition:
$$\|(I - G(j\omega))(I + G(j\omega))^{-1}\| < 1 \qquad \forall \omega \in R.$$
We can therefore use the peak gain of $(I-G)(I+G)^{-1}$ as a measure of passivity. Specifically, let

$$R := \big\|(I-G)(I+G)^{-1}\big\|_{\infty}.$$
Then $G$ is passive if and only if $R<1$, and $R>1$ indicates a shortage of passivity. Note that $R$ is finite if and only if $I+G$ is minimum phase. We refer to $R$ as the relative passivity index, or R-index. In the time domain, the R-index is the smallest $r>0$ such that:
$$\int_0^T \|y - u\|^2\, dt < r^2 \int_0^T \|y + u\|^2\, dt,$$
for all trajectories $y\left(t\right)=Gu\left(t\right)$ and $T>0$. When $I+G$ is minimum phase, you can use `passiveplot` to plot the principal gains of $\left(I-G\left(j\omega \right)\right)\left(I+G\left(j\omega \right){\right)}^{-1}$. This plot is entirely analogous to the singular value plot (see `sigma`), and shows how the degree of passivity changes with frequency and direction.
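For the same SISO example, the R-index can be estimated the same way (again a grid-based sketch, not a true $H_\infty$ computation):

```python
import numpy as np

w = np.logspace(-3, 3, 20000)
s = 1j * w
G = (s ** 2 + 3 * s + 5) / (5 * s ** 2 + 6 * s + 1)
R = np.max(np.abs((1 - G) / (1 + G)))   # scalar case of (I-G)(I+G)^-1
print(R, R < 1)                         # R < 1, consistent with passivity
```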
The following result is analogous to the Small Gain Theorem for feedback loops. It gives a simple condition on R-indices for compensating a shortage of passivity in one system by an excess of passivity in the other.
Small-R Theorem: Let ${G}_{1}\left(s\right)$ and ${G}_{2}\left(s\right)$ be two linear systems with passivity R-indices ${R}_{1}$ and ${R}_{2}$, respectively. If ${R}_{1}{R}_{2}<1$, then the negative feedback interconnection of ${G}_{1}$ and ${G}_{2}$ is stable.
## References
[1] Xia, M., P. Gahinet, N. Abroug, C. Buhr, and E. Laroche. "Sector Bounds in Stability Analysis and Control Design." International Journal of Robust and Nonlinear Control 30, no. 18 (December 2020): 7857–82. https://doi.org/10.1002/rnc.5236.
https://okayama.pure.elsevier.com/en/publications/a-novel-low-complexity-lattice-reduction-aided-iterative-receiver

# A novel low complexity lattice reduction-aided iterative receiver for overloaded MIMO
Satoshi Denno, Yuta Kawaguchi, Tsubasa Inoue, Yafei Hou
Research output: Contribution to journal › Article › peer-review
3 Citations (Scopus)
## Abstract
This paper proposes a novel low complexity lattice reduction-aided iterative receiver for overloaded MIMO. Novel noise cancellation is proposed that increases an equivalent channel gain with a scalar gain introduced in this paper, which results in the improvement of the signal to noise power ratio (SNR). We show theoretically that the lattice reduction raises the SNR of the detector output signals as the scalar gain increases, when the Lenstra–Lenstra–Lovász (LLL) algorithm is applied to implement the lattice reduction. Because the SNR improvement causes the scalar gain to increase, the performance is improved by iterating the reception process. Computer simulations confirm the performance. The proposed receiver attains a gain of about 5 dB at a BER of 10^-4 in a 6 × 2 overloaded MIMO channel. The computational complexity of the proposed receiver is about 1/50 of that of maximum likelihood detection (MLD).
Original language: English
Pages: 1045-1054
Number of pages: 10
Journal: IEICE Transactions on Communications
Volume: E102B
Issue: 5
DOI: https://doi.org/10.1587/transcom.2018EBP3215
Publication status: Published - May 2019
## Keywords
• Hard-input soft-output
• Iterative detection
• Low complexity
• Noise cancelling
• Serial interference cancellation
## ASJC Scopus subject areas
• Software
• Computer Networks and Communications
• Electrical and Electronic Engineering
http://www.math.snu.ac.kr/board/index.php?mid=colloquia&page=4&sort_index=speaker&order_type=desc&document_srl=778408 | We discuss how the closed connected 1-dimensional manifold, namely the circle, can help understanding 3-manifolds. We describe so-called the universal circle proposed by a lengendary mathematician, William Thurston, and discuss certain generalizations and open problems. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945881724357605, "perplexity": 1419.6302646854238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683020.92/warc/CC-MAIN-20220707002618-20220707032618-00044.warc.gz"} |
https://www.nature.com/articles/ncomms9262?error=cookies_not_supported&code=a04fd9fb-5677-40b3-8ad5-01fb63d0399b | ## Introduction
Ultrafast spectroscopic techniques provide new insights into correlated materials by exciting specific subsystems and following their subsequent relaxation. For magnetic systems, this involves probing non-equilibrium spin and charge dynamics that cannot be reached through thermodynamic pathways. The magnetism in metals is determined by the inter- and intra-atomic exchange interactions1. The former couples the spin moments on neighbouring atoms and leads to the long-range order, while the latter enforces the formation of the atomic spin moment. Inter-atomic exchange is typically much weaker than intra-atomic exchange, which is the strongest and thus potentially fastest force in magnetism. These magnetic interactions can be scrutinized on the shortest timescales by employing femtosecond laser pulses. Such investigations have recently been reported for metallic ferromagnets, half-metallic ferromagnets, dilute magnetic semiconductors, as well as correlated magnetic oxides2,3,4,5,6,7. The response of multicomponent magnetic materials to a short, fs-laser pulse has been examined in an element-specific manner using time-resolved core-level magneto-optical techniques in the soft-X-ray and extreme ultraviolet (XUV) regime8,9,10,11,12,13. These investigations showed that the inter-atomic exchange interaction can be overcome on short timescales. Distinct magnetic responses were seen for the iron and nickel atoms in permalloy on a sub-picosecond timescale9. Probing GdFeCo8, TbFe14, GdCo, TbCo13, and GdTb alloys15, different dynamics of the Gd (or Tb) and Fe (or Co) spins were observed. The experiments on multi-sublattice ferrimagnets thereby provide evidence for transient inter-atomic decoupling on the timescale of a few picoseconds. The much stronger intra-atomic exchange could thus far not be examined, as this requires a dedicated technique to selectively probe spin-polarized electrons in different orbitals on the same atom.
Gadolinium metal is an ideal system to offer deep insight in the operation of intra-atomic exchange under non-equilibrium conditions. The localized Gd 4f electrons contribute most to the atomic moment with 7 μB per atom and spin-polarize the 5d valence electrons, which contribute an additional 0.55 μB, where μB is the Bohr magneton. Because the 4f electrons of adjacent atoms have no spatial overlap, the neighbouring 4f moments align via intra-atomic (Hund’s rule) exchange with the 5d electrons, which—combined with the inter-atomic exchange between 5d moments—leads to alignment of all moments (see Fig. 1a). The large intra-atomic exchange energy Jint=130 meV, which corresponds to an exchange field of 4,000 Tesla, is the background upon which quasi-instantaneous alignment (ħ/Jint ≈ 6 fs; ħ is the reduced Planck constant) of on-site moments and, hence, identical spin dynamics have been assumed thus far.
Time- and angle-resolved photoemission spectroscopy (ARPES) with ultraviolet light pulses has developed into a sensitive tool to probe details of the photoexcited state on the ultrafast timescale16,17,18,19,20,21. Here we apply time-resolved ARPES using higher-order harmonic radiation to study within a single experiment the two spin subsystems of Gd metal, which are coupled via intra-atomic exchange. We show within a single time-resolved photoemission experiment that 5d exchange splitting and 4f magnetic linear dichroism (MLD) evolve upon 1.6-eV pulsed laser excitation on clearly different timescales with decay constants of 0.8 and 14 ps, respectively. We use first-principles calculations to derive the relation between the 5d exchange splitting and the 5d and 4f magnetic moments. The ab initio calculated intra- and inter-atomic exchange constants enter an orbital-resolved Heisenberg Hamiltonian used to simulate the spin dynamics. With the respective coupling of 5d and 4f spin systems to the electron and phonon heat baths (see Fig. 1a), the spin dynamics simulations explain the disparate dynamics of the 5d and 4f magnetic moments very well, despite the large intra-atomic exchange coupling of 130 meV.
## Results
### Time-resolved photoemission spectroscopy
To scrutinize the intra-atomic exchange interaction on ultrashort timescales, we probed with orbital resolution the 4f and 5d magnetization dynamics within a single photoemission experiment. We used an apparatus based on high-order harmonic generation22 to combine time-resolved ARPES with MLD in the angular distribution of photoelectrons23. The sample is a single-crystalline 10-nm-thick Gd(0001) film grown on a tungsten substrate. As shown in Fig. 1b, ARPES with a photon energy of 36.8 eV gives access to the Tamm-like surface state, the transient exchange splitting of the 5d minority and majority spin bands (↓ and ↑, respectively) and the localized 4f state. The Gd 5d electrons and the surface state are excited by the 1.6-eV laser pulse. This photon energy is too low to directly excite the occupied and unoccupied 4f states at 8 eV and −4 eV binding energy24. Figure 2a shows a series of photoemission spectra recorded for varying pump–probe delay extracted at the Γ point (averaged over an acceptance window of ±0.1 Å−1 in Fig. 1b). As illustrated in the inset, we measured for each delay two photoemission spectra with opposite in-plane magnetization directions (red and blue lines) to determine the 4f MLD.
The occupied majority component of the Tamm-like surface state at binding energy EB≤0.5 eV indicates the high quality of the Gd film preparation. As the binding energy of the surface state at the Γ point is independent of the magnetization direction, it allows us to correct small space-charge shifts between individual measurements for opposite magnetization directions and equal delay. The small intensity change of the surface state for reversed magnetization is attributed to MLD. We observe a transient shift of the majority surface state towards the Fermi level by 50 meV, which is consistent with previous studies25,26.
The exchange-split minority and majority components of the 5d valence band are seen at binding energies of 1.4 and 2.3 eV, respectively. With a probe photon energy of 36.8 eV, we measure the Δ2-like component in the 4th Brillouin zone along the Γ–M direction. The valence band dispersion in Fig. 1b agrees well with previous measurements at higher photon energies27 and the calculated bulk band structure28. The bulk character of these states was also confirmed by recent ab initio calculations of a Gd slab29. Upon laser excitation, we observe a reduction of the exchange splitting (denoted ΔEex in Fig. 2a), which reaches its minimum within the first picosecond after laser excitation (see also Fig. 3). As substantiated below, the initial drop of the exchange splitting of the valence bands parallels the dynamics of the magnetic moment of the 5d electrons.
To follow in addition the average 4f moment, we simultaneously recorded the intensity contrast of the 4f 7F final-state spin–orbit multiplet at 8 eV binding energy for opposite in-plane magnetization directions. The time evolution of the asymmetry is highlighted in Fig. 2b; the grey area is a measure of the transient 4f magnetic moment30. We note that photoemission probes the 5d exchange splitting and 4f MLD in the same sample volume defined by the inelastic mean free path of the photoelectrons of about three monolayers. Although photoemission is a surface-sensitive technique, the MLD contrast reflects mostly the 4f bulk magnetization, that is, the subsurface layers. High-resolution photoemission reveals a surface core-level shift of 0.3 eV in the Gd 4f level with a bulk-to-surface intensity ratio of 3/2 (refs 23, 31). However, the atomic contribution to MLD due to the interference of the d and g photoemission final states is small at our photon energy since the f–g dipole transition is 10 times stronger than the f–d counterpart31. Thus, the MLD signal mainly originates from photoelectron diffraction where the prevailing forward scattering only enhances the MLD bulk signal. This justifies comparison of the exchange splitting of the 5d valence bands with the 4f MLD. As illustrated in Fig. 2b, we observe a clear reduction in MLD contrast at 40 ps delay, while it is only slightly changed at 1 ps delay (compared with the spectrum recorded before pumping at −1 ps delay).
The normalized exchange splitting and 4f MLD as a function of pump–probe delay are summarized in Fig. 3 by red and black circles, respectively. Their strikingly different temporal evolution reveals ultrafast decoupling of the intra-atomic exchange interaction. While the 5d exchange splitting reaches its minimum after one picosecond, by which time the 5d electron and phonon heat baths are nearly in equilibrium, the 4f magnetization continues to decrease until about 40 ps. The recovery of the magnetization occurs by the time the lattice and spins have comparable temperatures26. Fitting the initial collapse of the 5d exchange splitting with a single exponential function yields a time constant of τ5d=0.8±0.1 ps. This agrees with the time constants from magneto-optical Kerr effect (MOKE) measurements (0.85±0.05 ps)32 and our earlier work (0.86±0.1 ps)26. For the 4f response, however, we find a much longer single exponential time constant of τ4f=14±3 ps.
### Orbital-resolved spin dynamics simulations
To shed light on the origin of the disparate magnetization dynamics of the 5d exchange splitting and 4f spin system, we performed atomistic spin dynamics simulations. The 5d moments and the corresponding exchange splitting are caused by the exchange field of the 4f electrons. Nonetheless, it has been shown experimentally33 and theoretically34 that their behaviour deviates significantly from a Stoner-like behaviour. This motivates us to treat the 4f and 5d moments separately, leading to an orbital-resolved Heisenberg Hamiltonian including the intra-atomic interaction to go beyond the standard model with one fixed spin per atom. This approach was proposed recently and has been shown to describe very well the magnetization switching dynamics in ferrimagnets composed of transition-metal and rare-earth elements35; the predicted angular momentum transfer between the 3d and 4f sublattices was confirmed by recent experiments13. Moreover, a simple three-temperature model proposed already for the first laser-induced demagnetization experiments on nickel36 has been shown to fail for the case of strong demagnetization37 or for experiments in Gd30, where the spin system itself is driven out of equilibrium26. Consequently, Gd requires an orbital-resolved model to describe its two distinct spin systems, as illustrated in Fig. 1a. Here, the 5d spins are coupled to the electronic temperature (αe) because the 5d electrons are directly excited by the 1.6-eV photons of the pump pulse. In contrast, the 4f electrons are not perturbed by the pump pulse and thus couple, except through intra-atomic 4f–5d exchange Jint, only to the phononic temperature of the system (αp).
Consequently, we construct the appropriate orbital-dependent spin Hamiltonian as
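(a form consistent with the term-by-term description below; the uniaxial anisotropy constant is denoted dz here as an assumed label)

$$\mathcal{H} \;=\; -\sum_{i\neq j} J_{ij}\,\mathbf{S}_i\cdot\mathbf{S}_j \;-\; J_{\mathrm{int}}\sum_{i}\mathbf{S}_i\cdot\mathbf{S}'_i \;-\; d_z\sum_{i}\left(S_i^{z}\right)^{2} \qquad (1)$$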
where the 5d and 4f spins are, in the classical limit, expressed by unit vectors Si and S′i, representing the normalized 5d and 4f magnetic moments, respectively. The first term describes the inter-atomic Heisenberg exchange between the 5d spins at different sites i, j of the hexagonal close-packed lattice. The second term accounts for the intra-atomic 5d–4f exchange and the third term represents a uniaxial anisotropy. We consider Langevin dynamics, that is, we numerically solve the stochastic Landau–Lifshitz–Gilbert (LLG) equations of motion. For the 5d spins, the LLG equation reads
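(in the standard stochastic form assumed here, with μs the 5d spin moment and Hi the effective field)

$$\frac{\partial \mathbf{S}_i}{\partial t} \;=\; -\frac{\gamma}{(1+\alpha_e^{2})\,\mu_s}\;\mathbf{S}_i\times\left[\mathbf{H}_i+\alpha_e\left(\mathbf{S}_i\times\mathbf{H}_i\right)\right],\qquad \mathbf{H}_i=-\frac{\partial\mathcal{H}}{\partial\mathbf{S}_i}+\boldsymbol{\zeta}_i(t)$$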
where the phenomenological damping parameter αe describes the coupling between 5d spins and the electronic heat bath, μs is the 5d spin moment and γ denotes the gyromagnetic ratio. The effective field Hi includes thermal fluctuations via the white-noise term ζi (ref. 38). The same equation describes the 4f spins S′, however, with a coupling αp to the phononic heat bath. Separate values of αe and αp are not known in the literature, but it turns out that best agreement between simulation and experiment is achieved using different values, namely αe=0.00013 and αp=0.0015. Their average is in agreement with the Gilbert damping constant α=0.00044 of Gd, known from ferromagnetic resonance39. The role of these values is analysed in detail in the Supplementary Discussion and Supplementary Fig. 1.
The exchange constants Jij and Jint were calculated ab initio using the density functional theory. To validate the exchange constants with our orbital-resolved spin model, we calculated the equilibrium net magnetization and the individual 5d and 4f magnetizations versus temperature (Supplementary Fig. 2). Employing the exchange constants, we simulated a spin system with 45,696 atomic spins, taking into account exchange interactions up to the 22nd nearest neighbour. We computed a Curie temperature (TC=299 K) close to the experimental value (TC=293 K), implying that our orbital-dependent ab initio exchange constants describe the localized and itinerant magnetism of Gd adequately. Finally, we mention that to compute the electron and phonon temperatures we employed a two-temperature model40 using material parameters similar to ref. 41 (see Supplementary Fig. 3 and Supplementary Table 1).
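For orientation, a minimal numerical sketch of such a two-temperature calculation is given below; all parameter values are illustrative placeholders rather than the material parameters used in this work.

```python
# Illustrative two-temperature sketch: electrons heated by a Gaussian pulse,
# coupled to the lattice with strength g. All values are placeholders.
import numpy as np

Ce_coeff = 225.0        # electronic heat capacity coefficient, J m^-3 K^-2
Cp = 1.8e6              # lattice heat capacity, J m^-3 K^-1
g = 2.5e17              # electron-phonon coupling, W m^-3 K^-1
t0, tau, A = 0.5e-12, 0.3e-12, 6e20   # pulse center, width, peak power density

Te, Tp, dt = 100.0, 100.0, 1e-15      # start below T_C; 1 fs time step
for t in np.arange(0.0, 10e-12, dt):
    S = A * np.exp(-((t - t0) / tau) ** 2)   # laser source term
    Ce = Ce_coeff * Te                       # C_e grows linearly with T_e
    Te += (-g * (Te - Tp) + S) / Ce * dt     # electrons: heated, then cooled by lattice
    Tp += g * (Te - Tp) / Cp * dt            # lattice: heated by electrons
print(f"final T_e = {Te:.0f} K, T_p = {Tp:.0f} K (nearly equilibrated)")
```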
The results of the atomistic spin dynamics simulations for the 5d and 4f moments are shown in Fig. 3 as blue and black solid lines, respectively. Our calculations clearly support a pronounced difference in the demagnetization times of the 4f spins with respect to the 5d spins. To compare directly the exchange splitting predicted by our simulations with its experimental counterpart, we computed the average angle between the 4f and 5d spin moments as a function of pump–probe delay. We then performed ab initio calculations for this non-collinear arrangement of the two on-site moments, which gives us the electronic bands, and hence the value of the d-band exchange splitting. The relation between the average angle between the 4f and 5d spin moments and the exchange splitting is given in Supplementary Fig. 4. Note that the 5d exchange splitting computed ab initio closely follows the 5d spin moment in the first 10 ps (red and blue curves in Fig. 3, respectively) but deviates more when the 4f moments demagnetize more strongly. This permits us to draw conclusions on the time evolution of the 5d spin moment from the exchange splitting. As can be seen from Fig. 3 the theoretical and measured 5d exchange splitting are in good agreement, as are the theoretical and measured 4f demagnetization. The similarity of measured and simulated 5d and 4f spin moments conclusively proves that despite the massive exchange field, the intra-atomic 5d–4f exchange alignment is broken for tens of picoseconds.
## Discussion
Comparing the measurements with the simulations described above provides the following understanding: laser excitation of the valence electrons in a single-crystalline Gd film creates non-equilibrium conditions between the 5d and 4f spin systems, which persist for several tens of picoseconds. The vastly different energetic positions of the 5d and 4f electrons in Gd are pivotal to the breakdown of the intra-atomic spin alignment on the ultrafast timescale. Initially, only the valence electronic system is heated rapidly by the pump pulse, leading to a fast loss of 5d spin alignment, in spite of the huge exchange field exerted by the spin-polarized 4f electrons. The 4f spin system remains cold for much longer as it couples mainly to the phonon heat bath. Notably, as the 5d electrons reach temperatures of a few thousand Kelvin (see Supplementary Fig. 3), their energy is sufficient to overcome the 4f exchange field. The transient breakdown of 5d–4f intra-atomic alignment only recovers on the slow, picosecond timescale of 4f-spin-lattice relaxation42,43.
Performing spin dynamics simulations with various αp and αe damping parameters (see Supplementary Discussion) we find that increasing αp ten times does not strongly influence the initial 4f demagnetization, but affects the position of the 4f magnetization minimum at about 70 ps (Fig. 3). Conversely, varying αe leads to a stronger change of the initial 5d demagnetization, but does not influence the 5d magnetization minimum at 40 ps. Thus, disparate spin dynamics of 4f and 5d spins are consistently obtained here for a range of damping parameters. A different recent approach29 assumes that the 4f and 5d moments in Gd cannot be treated separately and hence predicts the same demagnetization behaviour for 4f and 5d moments, which, however, is not confirmed by our photoemission measurements.
The orbital-resolved spin-dynamics model also has its limitations. Phononic heat transport is neglected, which can cause a somewhat slower cooling in our simulations and thus a magnetization recovery on longer timescales (≥80 ps) than were measured. Note that the difference in the measured and computed recovery times of the 5d and 4f spins is related to the slightly different equilibrium temperature dependence of the 5d and 4f magnetizations (see Supplementary Fig. 2). As mentioned above, our spin-dynamics approach predicts angular momentum transfer between sublattices35. For the monoatomic Gd lattice, the transport of 5d spin angular momentum occurs between atoms at differently excited depths of the sample via the inter-atomic exchange coupling. Our model does not include additional spin transport via laser-excited electrons, which we expect to be smaller for Gd than for the 3d ferromagnets due to the smaller net magnetic moment of the Gd 5d valence electrons44.
The dynamics of the 4f MLD signal observed in ARPES is in contrast to a previous X-ray magnetic circular dichroism (XMCD) measurement at the Gd M5-edge45, which probes the unoccupied 4f states. The latter experiment suggests that the demagnetization of the 4f system initially is as fast as that of the directly excited 5d electrons measured with MOKE46. The XMCD experiment probes the whole film in transmission, and thus bulk properties. In addition, demagnetization will contain contributions from transport of optically excited electrons generated in the nonmagnetic Y-cap layers47 and the Al support into the polycrystalline Gd film. Such additional contributions are not present in our experiment.
Nonetheless, in the photoemission experiment we cannot rule out a small ultrafast drop of the MLD signal followed by a plateau between 0.2 and 2 ps. However, if the 5d and 4f magnetic moments were in equilibrium, the normalized magnetization of the 5d and 4f spin systems would lie on top of each other. Even within the error bars this is clearly not the case. Therefore, the data in Fig. 3 unambiguously support non-equilibrium between the two spin systems lasting for picoseconds. Our photoemission experiment probes the near-surface layers. The 5d and 4f electrons have similar escape depth. In this sample volume, we demonstrate within one measurement disparate spin dynamics, despite the strong intra-atomic exchange. Unravelling the origin of the different 4f dynamics seen in photoemission and X-ray absorption asks for further experimental studies.
The electron and phonon subsystems equilibrate within 1.5 ps (see Supplementary Fig. 3). Simultaneous to lattice heating, a strain field will evolve that propagates through the gadolinium film. The impact of such strain fields on the ultrafast magnetization dynamics has recently been discussed for nickel48. As crystalline gadolinium has a similar magnetostriction coefficient along its c axis49, lattice strain may additionally contribute to the demagnetization dynamics. Note that the response of the polycrystalline film probed in XMCD can be quite different, since it is the average of a positive and negative magnetostriction parallel and perpendicular to the c axis, respectively49. According to the density functional theory calculations25, the observed shift of the surface state to lower binding energies by 50 meV (Fig. 2 and ref. 26) may point to an expansion of the surface interlayer spacing by about 20 pm. This value is similar to the lattice expansion observed when cooling down Gd bulk from TC to about 100 K (refs 49, 50). The anomalous lattice expansion of Gd is related to magnetostriction and indicates a repulsion between the ferromagnetic layers. Vice versa, expansion of the lattice upon laser excitation starting at the surface may stabilize the ferromagnetic state. These arguments are, however, challenged by the ultrafast (≤50 fs) and significant (≥50%) drop of the surface-sensitive magnetic second harmonic signal25, which indicates ultrafast demagnetization of the Gd surface layer41, as well as by the comparable dynamics of 5d exchange splitting and MOKE32. These techniques probe near-surface layers and bulk, respectively.
Our orbital-resolved spin dynamics simulations show that despite the strong intra-atomic exchange, disparate transient spin dynamics can occur. Recently, transient decouplings have been observed for the inter-atomic exchange in permalloy, which showed a 20-fs shift in the transversal-MOKE response between the Fe and Ni M-edges9, as well as in GdFeCo alloy, where the inter-atomic exchange is much weaker (3 meV) and the Gd–Fe decoupling lasted a few picoseconds8. Here we show for the first time not only decoupling of the much stronger intra-atomic exchange (Jint=130 meV), but also that this decoupling lasts for about 40 ps. We note further that the intra-atomic exchange interaction has been considered previously in the context of laser-induced magnetization dynamics in ferromagnetic semiconductors51 and in laser-induced phase transitions in manganites7. In the former study, the influence of exchange coupling between localized and itinerant spin degrees of freedom was evaluated. In the latter study, quantum spin-flips mediated by the Hund’s rule coupling of Mn 3d states were proposed for fast switching of the magnetic order. In our approach, we extend in a different way beyond the classical spin limit of one spin per atom, by introducing exchange coupled, orbital-specific spins on an individual Gd atom. As confirmed by recent experiments13 and the presented simulations, our approach provides indeed a very good explanation of the orbital-selective spin dynamics. Since the large energy separation of the 5d and 4f electrons is specific to Gd, a similar observation of transient decoupling in other lanthanide metals will be difficult, as laser irradiation can rapidly heat both 4f and 5d systems.
In conclusion, femtosecond laser-pulse excitation allows us to drive the 5d and 4f spin systems of gadolinium metal out of equilibrium. Despite the strong intra-atomic exchange interaction, their demagnetization dynamics is characterized by time constants that differ by one order of magnitude. Their surprisingly disparate time evolution is well explained by orbital-resolved spin dynamics simulations based on exchange parameters calculated ab initio. Our simultaneous examination of the localized and itinerant magnetism in Gd evidences that ultrafast laser stimulation of the valence electrons offers a route to transiently overcome the massive 4f–5d exchange interaction. Understanding thereby the operation of fundamental magnetic interactions at ultrashort timescales and realizing the ability to manipulate them may have tremendous implications for future magnetic storage devices.
## Methods
### Molecular beam epitaxy of Gd films on W(110)
Single-crystalline, 10-nm-thick Gd(0001) films were grown epitaxially on a W(110) crystal at room temperature. The pressure during evaporation was 10−10 mbar. Subsequent annealing to 700 K allows the Gd lattice to relax and improves the film quality. Film thickness was calibrated by a quartz microbalance, film cleanliness and order were verified by low-energy electron diffraction and photoemission spectroscopy. The surface state intensity is a sensitive probe of the sample surface quality (see Fig. 2a and Supplementary Fig. 5).
### Time- and angle-resolved photoemission spectroscopy
Pump and probe pulses for the time-resolved ARPES experiment were derived from a femtosecond Ti:sapphire chirped-pulse laser amplifier. The laser was running at 10 kHz, producing broadband pulses at a centre wavelength of 775 nm. A beam splitter transmits 200-μJ pump pulses, which were incompletely compressed to 300 fs duration in a separate compressor and adjusted in power and focus size. After compression to 45 fs, the remaining 1.3 mJ were focused into 100 mbar argon to produce XUV probe pulses. A toroidal grating monochromator selected the 23rd harmonic with a photon energy of 36.8 eV. For the experiment we used p-polarized probe pulses with a duration of 100 fs at a bandwidth of 150 meV. The pump pulse has a photon energy of 1.6 eV and was stretched to 300 fs pulse duration to reduce space-charge effects. The absorbed fluence is 3.5±1 mJ cm−2.
The ARPES experiments were conducted at 3 × 10−11 mbar with a view-type 100-mm hemispherical photoelectron analyzer. The exchange splitting of the 5d bands was derived from energy distribution curves at normal emission (integrated at ±0.1 Å−1 in Γ–M direction). The surface state, as well as minority and majority spin components of the 5d band were fitted using Lorentzian line shapes (see Supplementary Fig. 5).
For the MLD signal, we corrected slight differences in the space-charge shift between the two magnetization directions by aligning the spectra at the surface state. This is appropriate because the surface state has vanishing Rashba splitting at the Γ-point31. The electron distribution curves for both magnetization directions were normalized in intensity before subtracting one from the other for each pump–probe delay. The MLD signal is the integral over the absolute value of the intensity difference of the 4f state for opposite in-plane magnetization directions. In thermal equilibrium this value is proportional to the 4f magnetization of the sample30,52.
### Density functional theory calculations
For the calculation of the intra- and inter-atomic exchange constants, we have adopted two different computational schemes. The intra-atomic exchange constant Jint=130 meV was calculated with the full-potential linear augmented plane wave method within the local spin density approximation (LSDA), employing the band-structure program ELK. Here, the 4f electrons were included in the valence states. This is necessary for describing correctly the interaction between the 5d and 4f states. The program was modified to allow constraining the magnetizations of the spd and 4f states into an antiparallel alignment (see ref. 35 for details). The computed intra-atomic exchange constant is in good agreement with a previous calculation34.
For the ab initio calculation of the inter-atomic exchange constants, we have found the approach in which the 4f electrons are treated as a part of the core states to be the most efficient. The self-consistent electronic structure was calculated using the tight-binding linear muffin-tin orbital method53, adopting the LSDA54. Treating the f electrons as localized core electrons notably helps to overcome some of the inaccuracies of the LSDA when applied to Gd, namely, its prediction of an antiferromagnetic ground state55 related to the positioning of the minority spin 4f states too close to the Fermi level28. Also, this approach has been successfully applied to predict the spontaneous volume magnetostriction in Gd34.
For the calculation of the inter-atomic exchange constants Jij (≤5.9 meV), we have employed the mapping of the magnetic behaviour of the real material onto an effective Heisenberg Hamiltonian56,57. Specifically, we have used the magnetic force theorem approach56, which allows infinitesimal changes of the total energy to be expressed in terms of the one-particle eigenvalues containing the non-self-consistent changes of the effective one-electron potential accompanying the infinitesimal rotations of the spin quantization axes, that is, without any additional self-consistent calculations besides that for the collinear ground state. The resulting pair-exchange constants are given by
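(in the standard real-space form assumed here; sign and prefactor conventions vary between implementations)

$$J_{ij} \;=\; -\frac{1}{\pi}\,\mathrm{Im}\!\int^{E_F}\!\!dE\int_{\Omega_i}\!d\mathbf{r}\int_{\Omega_j}\!d\mathbf{r}'\;B_{xc}(\mathbf{r})\,G^{\uparrow}(\mathbf{r},\mathbf{r}',E^{+})\,B_{xc}(\mathbf{r}')\,G^{\downarrow}(\mathbf{r}',\mathbf{r},E^{+})$$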
where EF denotes the Fermi level, Ωi denotes the i-th atomic cell, σ=↑,↓ is the spin index, E+ = lim α→0 (E + iα), Gσ are spin-dependent one-electron retarded Green functions and Bxc is the magnetic field from the exchange-correlation potential. The validity of this approximation has been examined quantitatively in recent studies and it was found to be rather successful in explaining the thermodynamic properties of a broad class of magnetic materials58,59.
The exchange constants Jij have been computed up to the 29th nearest-neighbour shell. We have used around a million k-points in the full Brillouin zone for energy points close to the Fermi level. In both tight-binding linear muffin-tin orbital and full-potential linear augmented plane wave calculations, the Gd lattice constant adopted was 3.629 Å and the c/a ratio, 1.597.
### Modelling of electronic and lattice temperature
To model the electronic and lattice temperature, we use the well-established two-temperature model40,60. Thereby perpendicular heat diffusion in the electronic sub-system is included. Furthermore, the energy flow into the spin system is taken into account by numerically calculating the time derivative of the Hamiltonian in equation (1) at every time step and adding it to the two-temperature model (see Supplementary Note). | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8571388125419617, "perplexity": 2139.0691747356454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00595.warc.gz"} |
http://forum.mackichan.com/node/1727 | ## Trouble importing .sty file
Hi,
I would like to use the jf.sty file from the website below to format my article.
http://faculty.haas.berkeley.edu/stanton/texintro/
However, though I saved the jf.sty file in the folder, the Style Editor of my SW5.5 does not seem to recognize it. How do I use this .sty file to format an article?
Best regards,
Yosh
### This is a latex style file,
This is a LaTeX style file, not a Style Editor one. What you need to do is put it into the SWP55 TrueTeX tree (say, in C:\swp55\TCITeX\TeX\LaTeX\contrib) and then, in your SWP document, put this in the preamble (Typeset -> Preamble):
\usepackage{jf}
Note that SW will automatically transfer this statement to the Options and Packages dialog. If you object to this, you can say, instead
\RequirePackage{jf}
Note that what this doesn't give you is any special constructs created by the style. If you want these you need to use TeX fields for them. Basically what you're getting is possible changes to the way standard LaTeX things are laid out. And there's no guarantee that it will work in connection with other SWP documents - you'll just have to experiment.
### Hi Pviton, Thank you very
Hi Pviton,
Thank you very much. It worked now. Much appreciated.
Best,
Yosh | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8709883093833923, "perplexity": 2430.0383278567274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249500704.80/warc/CC-MAIN-20190223102155-20190223124155-00317.warc.gz"} |
http://electronics.stackexchange.com/questions/110617/3db-on-filters-just-approximated | -3dB on filters, just approximated?
I have been told that -3 dB is when you get half the power or, equivalently, the original voltage divided by the square root of 2.
Nonetheless, doing the calculations I get 3.01029 dB.
I figure this is because the 3 dB value is just the approximated value of what I have gotten, but maybe I am mistaken somewhere; so, is the 3 dB just an approximation, or am I mistaken?
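A quick numeric check of that calculation:

```python
import math

half_power_db = 10 * math.log10(1 / 2)                # half power
half_voltage_db = 20 * math.log10(1 / math.sqrt(2))   # voltage divided by sqrt(2)
print(half_power_db, half_voltage_db)                 # both about -3.0103 dB
```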
-
20log$_{10}(\dfrac{1}{\sqrt2})$ = 3.01029995664 ! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9594214558601379, "perplexity": 795.8931126467646}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464053209501.44/warc/CC-MAIN-20160524012649-00074-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://knzhou.github.io/publication/casimirpoisson/ | # Casimir Meets Poisson: Improved Quark/Gluon Discrimination with Counting Observables
### Abstract
Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.
Type
Publication
Journal of High Energy Physics | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9577023386955261, "perplexity": 2920.482893268765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358798.23/warc/CC-MAIN-20210227084805-20210227114805-00416.warc.gz"} |
https://www.physicsforums.com/threads/distance-expansion-and-escape-velocity-thought-experiment.566059/ | # Distance expansion and escape velocity ( thought experiment )
1. Jan 8, 2012
### marcus
Distance expansion and escape velocity ("thought experiment")
If you care to, it would help if you would check my arithmetic. I may have made one or more mistakes. Thanks to anyone who can show the conclusion here is wrong.
The question came up: if you have a pair of kilogram masses and you place them each at CMB rest slightly over one light-second apart, does their Hubble-law recession rate exceed their classical escape speed?
It is imagined that you do this out in "open" space, away from any galaxies, groups of galaxies, superclusters, etc., so that even though the force of attraction between the pair of kilogram balls is unimaginably weak, all the other forces can be neglected.
As time permits I will show some work indicating how I came by the answer, but for the moment, to make a long story short: I found that the Hubble-law expansion of 400,000,000 meters, even though unimaginably slow, does exceed the escape speed.
That distance is roughly the distance from the Earth to the Moon.
Last edited: Jan 9, 2012
2. Jan 8, 2012
### marcus
Re: Distance expansion and escape velocity ("thought experiment")
Maybe if I made a mistake I'll find it myself as I type in some work. Everybody should know that the Hubble parameter is the reciprocal of a time (called the "Hubble time") and therefore it is a frequency. The Hubble frequency.
Indeed if you type "71 km/s per Mpc " into the googlebox, ending with a space or equal sign, it will tell you "2.3 × 10^-18 Hz".
So imagine you have a proper distance between two objects at CMB rest and the distance is 400,000,000 meters. How fast is this distance expanding? Just multiply by the Hubble frequency: 9.2 × 10^-10 meters per second.
From a Newtonian center-of-mass perspective, each has kinetic energy 0.5 × (4.6 × 10^-10)^2 joules.
So the combined kinetic energy is (4.6 × 10^-10)^2 joules. In other words, 2.12 × 10^-19 joules.
If I am not mistaken, that more than cancels the Newtonian gravitational potential of a pair of kilo balls 400,000 kilometers apart.
Namely 6.67 × 10^-11 / 400,000,000 = 1.67 × 10^-19 joules.
So at 400,000 kilometers the expansion rate exceeds escape speed. A similar calculation shows that at 300,000 kilometers it does not. The balls would begin by getting farther from each other but would eventually begin to fall together.
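If anyone wants to check the arithmetic in one go, here is a quick script (same G and Hubble frequency as above):

```python
# Two 1-kg balls at CMB rest, separated by d, each receding at half
# the Hubble-law rate H*d. Compare total KE with |PE|.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
H = 2.3e-18     # Hubble frequency, 1/s (71 km/s per Mpc)

for d in (3e8, 4e8):                     # separations in meters
    v_each = 0.5 * H * d                 # speed of each ball (center-of-mass frame)
    ke_total = 2 * 0.5 * 1.0 * v_each**2
    pe = G * 1.0 * 1.0 / d               # magnitude of gravitational potential energy
    print(f"d = {d:.0e} m: KE = {ke_total:.3g} J, |PE| = {pe:.3g} J, "
          f"separates forever: {ke_total > pe}")
```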
Last edited: Jan 8, 2012 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.927422046661377, "perplexity": 967.0764642146801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864872.17/warc/CC-MAIN-20180522190147-20180522210147-00259.warc.gz"} |
https://vox.veritas.com/t5/NetBackup/NetBackup-Archive-bit-processing/td-p/817791 | cancel
Showing results for
Search instead for
Did you mean:
## NetBackup Archive bit processing
Level 6
Hi all,
I haven't been on here for a long time, so hello :).
I wanted to post something that worried and disturbed me slightly regarding using the archive bit to back up Windows servers. It all looks great on the covers, but there is an issue which I don't think most people are aware of.
When you run a full backup, the archive bit is cleared. What people don't realise is that the archive bit is cleared as the backup progresses. A lot of people believe that the "wait time before clearing archive bit" setting is the delay before the bit is cleared when the job completes. This isn't the case, and I'll explain why.
If you had a very large file server that took 10+ hours to back up and you cleared the archive bit at the end of the job, then during those 10 hours someone could have made changes to files that got backed up 9 hours ago. When the archive bit gets cleared at job end, it would assume that those files are already backed up and clear the bit on all of them. The user's changes are then not backed up in an incremental or cumulative incremental, as the archive bit was cleared by the full backup. If the file was not modified again it wouldn't get backed up until the next full backup.
This is why the archive bit is cleared during the backup: to stop clearing an archive bit that could represent a new modification to a file.
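To picture the per-file clearing, here is a toy sketch in Python (a made-up structure for illustration, not NetBackup's actual implementation):

```python
# Toy model of per-file archive-bit clearing. Each file's bit is cleared
# as it is copied, so an edit made mid-job re-sets the bit and the change
# is still picked up by the next incremental.
files = {f"file{i}": {"archive_bit": True, "version": 1} for i in range(5)}

full_backup = {}
for i, (name, meta) in enumerate(files.items()):
    full_backup[name] = meta["version"]   # copy current contents
    meta["archive_bit"] = False           # bit cleared per file, not at job end
    if i == 2:                            # a user edits file0 mid-backup...
        files["file0"]["version"] = 2
        files["file0"]["archive_bit"] = True   # ...which re-sets its bit

next_incremental = [n for n, m in files.items() if m["archive_bit"]]
print(full_backup)        # file0 captured at version 1
print(next_incremental)   # ['file0'] -- the mid-backup edit is not lost
```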
I too was unaware of the way NetBackup processed the archive bit and never really thought about it until we saw an issue.
We recently logged a case with Symantec (at the time) due to an issue where files had been missed in the backup. What happened was that a full backup had run and failed using the archive bit. We then ran a cumulative incremental and it backed up fine. We then went to look for some files to restore and noticed that some were missing in the backup from the day before, when the full backup had run and failed. Odd! We looked back at the files and discovered the archive bit had been reset, but we didn't have them in any backups, and the files were created the day before. It turns out that as the full backup had run, it was clearing the archive bit for the files it had completed during the backup. As the backup failed it didn't get committed to the catalog, but the archive bit had already been cleared. We then didn't get the files in an incremental backup, as there was no archive bit set on the files for the incremental to pick up.
When we pushed the issue with Symantec (at the time), they confirmed that NetBackup by design clears the archive bit as it processes a full backup, honoring the wait time before it clears it for each file backed up. We were advised to fix the issue with the full backup which was causing the clearing of the archive bit, or move to timestamp backups where this isn't an issue.
If you have a failed full backup using the archive bit, you must get another full backup after it. If you think that an incremental will cover all your files from the last incremental with a failed full backup in between, it won't.
For me this begs the question: is there any point in using the archive bit? Am I missing something?
If a full fails when you use timestamps, NetBackup looks for the last completed backup timestamp for an incremental and uses that timestamp. It doesn't matter if a full backup failed in between incrementals.
Does anyone have any thoughts on this?
0 REPLIES 0 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8025081753730774, "perplexity": 1641.1410478586638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500080.82/warc/CC-MAIN-20230204012622-20230204042622-00845.warc.gz"} |
https://www.mathstools.com/section/main/uncompatible_constraints | # Linear Programming examples
### Incompatible constraints example
Let's consider a linear programming problem as follows:
Maximize (x + 3y)
Subject to
x + y ≤ 4
x + y ≤ 1
x - y ≥ 2
x, y ≥ 0
The feasible region is empty. That is, the feasible set is the empty set: there is no point satisfying all the restrictions, so we have an incompatible-constraints (infeasible) problem.
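As a numerical cross-check, one possible formulation with SciPy's linprog (converting the maximization and the ≥ constraint into linprog's minimize/≤ form):

```python
from scipy.optimize import linprog

# maximize x + 3y  ->  minimize -x - 3y
c = [-1, -3]
A_ub = [[1, 1],    # x + y <= 4
        [1, 1],    # x + y <= 1
        [-1, 1]]   # -x + y <= -2  (i.e., x - y >= 2)
b_ub = [4, 1, -2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.status, res.message)   # status 2: the problem is infeasible
```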
Post here | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9597358107566833, "perplexity": 3539.9547900548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300616.11/warc/CC-MAIN-20220117182124-20220117212124-00123.warc.gz"} |
https://www.scribd.com/document/353029646/sem-imp | # Confirmatory Factor Analysis
Definition
Confirmatory factor analysis (CFA) is a procedure for learning the extent to which k observed
variables might measure m abstract variables, wherein m is less than k. In CFA, we indirectly
measure non-observable behavior by taking measures on multiple observed behaviors.
Conceptually, in using CFA we can assume either nominalist or realist constructs, yet most
applications of CFA in the social sciences assume realist constructs.
## Terminology: Factor = Abstract Concept = Abstract Construct = Latent Variable.
CFA differs from EFA in that it specifies a factor structure based upon expected theoretical
relationships. Whereas we might think of EFA as a procedure for inductive theory construction,
CFA is a procedure for testing hypotheses deduced from theory. CFA allows the researcher to
conduct two forms of data analysis not available in EFA:
1. CFA allows for the examination of second-order (i.e., higher-order) latent variables. We
might posit, for example, that marital satisfaction (a latent variable) consists of four sub-
dimensions (each a latent variable), satisfaction with: romance, companionship, family
finances, and child rearing.
2. CFA allows for testing hypotheses related to construct validity. We can test for statistical
significance of the effect of a latent variable on each of the observed variables posited to
measure it.
The web page entitled, "Using CFA to Test Empirical Validity" provides an example of how CFA
can be used to examine construct and predictive validity for a second-order latent variable.
CFA requires one to specify the measurement of and relationships among the factors.
Therefore, it relies upon deductive examination of a theory. Deductive analysis has the
advantage of knowing a priori the factor structure, which allows one to test hypotheses related
to examining the various types of construct validity. However, whereas the EFA model is never
underidentified, the CFA model can be underidentified, requiring one to understand
mathematical identification and the rules for certifying model identification.
Assumptions
1. Typically, realism rather than nominalism: Abstract variables are real in their consequences.
2. Normally distributed observed variables.
3. Continuous-level data.
4. Linear relationships among the observed variables.
5. Content validity of the items used to measure an abstract concept.
6. E(δi) = 0 (random error).
7. Theoretically specified relationships among observed variables and factors.
8. A sample size greater than 100 (more is better).
Note: In CFA:
1. we use the symbol ξ (xi) to refer to an exogenous factor (an independent latent variable).
2. we use the symbol η (eta) to refer to an endogenous factor (a dependent latent variable).
3. we use the symbol τ (tau) to refer to the intercept of the measurement model.
4. we use the symbol Φ (phi) to refer to the variance/covariance matrix of the factor(s). Note that the variance of a factor always equals 1 in EFA.
The diagram shown below shows the terminology typically used in CFA and Structural Equation
Modeling (SEM). This course addresses the "measurement" model, meaning the
measurements of and relationships among the exogenous variables. Soc 613 addresses the
"causal" model, referring to relationships among the exogenous and endogenous variables.
Most of the notes in this lecture refer to measuring ξ. When we address second-order CFA, we will discuss measuring ξ and η. The principles discussed in measuring ξ apply also to measuring η.
Notation
y   p x 1 vector of observed endogenous (i.e., dependent) variables.
x   q x 1 vector of observed exogenous (i.e., independent) variables.
η   m x 1 vector of latent endogenous variables (i.e., dependent factors).
ξ   n x 1 vector of latent exogenous variables (i.e., independent factors).
ε   p x 1 vector of measurement errors in y.
δ   q x 1 vector of measurement errors in x.
Γ   m x n matrix of coefficients of the ξ variables.
Β   m x m matrix of coefficients of the η variables.
ζ   m x 1 vector of equation errors in the relationships between η and ξ.
Ψ   m x m matrix of correlations among the equation errors in the relationships between η and ξ.
## Example of a CFA Model
The model shown below specifies that a set of three abstract variables related to locus of control (internal, chance, and powerful others) can be measured with sufficient validity and reliability by nine observed variables, wherein each latent variable is measured with three observed variables.
Software Packages for Conducting CFA
The Sociology 512 web site provides examples of conducting CFA using six well known
software packages: LISREL, MPlus, R, SAS, SPSS/AMOS, and Stata. The examples shown in
these notes rely mainly upon the LISREL software package.
## Consequences of Measurement Error
We noted above that CFA assumes a sample size of at least 100. Understanding the
consequences of measurement error can explain why we make this assumption.
Single Indicator of ξ
X1 = τ1 + λ11ξ1 + δ1, where:
1. τ1 refers to the intercept of the equation.
2. the population mean of ξ1 = κ1 (kappa).
3. the population mean of X1 = μx1 (mu).
E(X1) = E(τ1 + λ11ξ1 + δ1), or: μx1 = τ1 + λ11κ1
Recall that this equation cannot be solved because of the linear dependencies in the matrix.
To estimate the parameters, we must make one of the following assumptions:
1. The variance of the factors equals one.
2. The parameter estimate (λ11) for the effect of the factor (ξ1) on X1 equals one.
3. For example:
a. let τ1 = 0 (standardized variables),
b. set λ11 = 1.
4. Then, X1 = ξ1 + δ1.
5. Then, E(X1) = E(ξ1) + E(δ1).
6. Assume: E(δ) = 0 (i.e., random errors in measurement).
7. Then, E(X1) = E(ξ1), or μx1 = κ1.
8. Hence, E(X1) is an unbiased estimator of κ1. (So far, so good!)
Multiple Indicators of ξ
X1 = τ1 + λ11ξ1 + δ1
X2 = τ2 + λ21ξ1 + δ2
We must set a scale for one of the latent variables:
a. let τ1 = 0 (standardized variables),
b. set λ11 = 1.
4. Then, E(X1) = κ1.
5. Having set a scale for ξ1 using τ1 and λ11, it is unnecessary and incorrect to do so again for τ2 and λ21.
6. E(X1) = κ1 (as before).
7. The mean of X2, however, may not equal the mean of ξ1.
Consider the consequences of estimating the variance of ξ1:
1. var(X1) = λ11²φ11 + var(δ1), where φ11 = var(ξ1).
2. if λ11 = 1 and var(δ1) = 0, then var(X1) = φ11 (unbiased estimator).
3. if var(δ2) ≠ 0, however, then var(X2) > φ11 (biased estimator).
4. Therefore, var(X2) is a biased estimator of φ11.
5. To resolve this issue, CFA relies upon asymptotic distribution theory, which essentially states that for large sample sizes (n ≥ 100), var(X2) is an unbiased estimator of φ11.
## The Measurement Model
A measurement model specifies a structural relationship that connects latent variables to one or
more observed variables. The general linear model for specifying these relationships is:
Σ = Σ(θ) = E(XX′), where:
1. Σ refers to reality.
2. Σ(θ) refers to theory.
3. E(XX′) refers to the correlation matrix of observed variables.
Consider the following example of the measurement model:
For standardized variables:
X1 = τ1 + λ11ξ1 + δ1
X2 = τ2 + λ21ξ1 + δ2
or, in general:
X = Λxξ + δ
Most latent variables in the social sciences are abstract ones. Abstract variables require an
arbitrary scale. There are two approaches to setting a scale:
1. Set the variances of all latent variables (Φ) to 1.
2. Set one of the λ estimates in Λx (for each ξi) to 1.
3. Do not do both procedures in the same model.
1. The covariance matrix for X = E(XX′).
2. Therefore, Σ(θ) = E(XX′), wherein: X = Λxξ + δ.
3. XX′ = (Λxξ + δ)(Λxξ + δ)′
4. XX′ = Λxξξ′Λx′ + Λxξδ′ + δξ′Λx′ + δδ′
5. E(XX′) = ΛxE(ξξ′)Λx′ + ΛxE(ξδ′) + E(δξ′)Λx′ + E(δδ′)
Assume:
1. E(ξδ′) = E(δξ′) = 0; factors are not correlated with errors (random errors in measurement).
2. E(ξξ′) is the covariance matrix of latent variables: Φ.
3. E(δδ′) is the covariance matrix of errors: Θδ.
4. Therefore: Σ = Σ(θ) = E(XX′) = ΛxΦΛx′ + Θδ
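To make the matrix algebra concrete, here is a small numerical sketch (the loading and variance values are arbitrary illustrations):

```python
import numpy as np

# Two factors, four indicators; scales set by fixing one loading per factor to 1.
Lam = np.array([[1.0, 0.0],
                [0.9, 0.0],
                [0.0, 1.0],
                [0.0, 0.8]])                 # Lambda_x
Phi = np.array([[0.30, 0.20],
                [0.20, 0.35]])               # factor covariance matrix
Theta = np.diag([0.70, 0.76, 0.65, 0.78])    # Theta_delta (error variances)

Sigma_theta = Lam @ Phi @ Lam.T + Theta      # model-implied covariance matrix
print(np.round(Sigma_theta, 3))
```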
Model Identification
In conducting CFA we specify a set of parameters to be estimated. We therefore must specify a model that contains sufficient information to estimate these parameters.
Consider a two-factor model in which ξ1 is measured by X1 and X2, and ξ2 is measured by X3 and X4.
Let: λ11 = 1 to set the scale for ξ1.
Let: λ32 = 1 to set the scale for ξ2.
Assume: uncorrelated error terms. This assumption is not necessary in CFA; it is made here to
simplify the presentation regarding model identification.
Then,
X = (X1, X2, X3, X4)′

Λx =
| 1     0    |
| λ21   0    |
| 0     1    |
| 0     λ42  |

ξ = (ξ1, ξ2)′

δ = (δ1, δ2, δ3, δ4)′

Φ =
| φ11        |
| φ21   φ22  |

Θδ =
| var(δ1)   0         0         0        |
| 0         var(δ2)   0         0        |
| 0         0         var(δ3)   0        |
| 0         0         0         var(δ4)  |
Compute Σ(θ) = ΛxΦΛx′ + Θδ:

ΛxΦ =
| φ11      φ21      |
| λ21φ11   λ21φ21   |
| φ21      φ22      |
| λ42φ21   λ42φ22   |

(ΛxΦ)Λx′ =
| φ11                                         |
| λ21φ11    λ21²φ11                           |
| φ21       λ21φ21       φ22                  |
| λ42φ21    λ21λ42φ21    λ42φ22    λ42²φ22    |

Adding Θδ yields the implied covariance matrix:

Σ(θ) =
| φ11 + var(δ1)                                                           |
| λ21φ11          λ21²φ11 + var(δ2)                                       |
| φ21             λ21φ21              φ22 + var(δ3)                       |
| λ42φ21          λ21λ42φ21           λ42φ22          λ42²φ22 + var(δ4)   |
Using S = E(XX′):

S =
| var(X1)                                          |
| cov(X1X2)   var(X2)                              |
| cov(X1X3)   cov(X2X3)   var(X3)                  |
| cov(X1X4)   cov(X2X4)   cov(X3X4)   var(X4)      |

Then, setting S equal to Σ(θ) and solving:
φ21 = cov(X1X3)
λ21 = cov(X2X3) / cov(X1X3)
λ42 = cov(X1X4) / cov(X1X3)
φ11 = [cov(X1X2) × cov(X1X3)] / cov(X2X3)
φ22 = [cov(X3X4) × cov(X1X3)] / cov(X1X4)
var(δ1) = var(X1) − φ11
var(δ2) = var(X2) − λ21²φ11
var(δ3) = var(X3) − φ22
var(δ4) = var(X4) − λ42²φ22
Example
Assume the correlation matrix shown below. Calculate the parameter estimates given the
model as identified above.
R =
| 1                        |
| .305   1                 |
| .233   .230   1          |
| .216   .213   .308   1   |
φ21 = rx1x3 = .233
λ21 = rx2x3 / rx1x3 = .230 / .233 = .987
λ42 = rx1x4 / rx1x3 = .216 / .233 = .927
φ11 = (rx1x2 × rx1x3) / rx2x3 = (.305 × .233) / .230 = .309
φ22 = (rx3x4 × rx1x3) / rx1x4 = (.308 × .233) / .216 = .332
var(δ1) = 1 − .309 = .691
var(δ2) = 1 − [.987² × .309] = .699
var(δ3) = 1 − .332 = .668
var(δ4) = 1 − [.927² × .332] = .715
## Item reliabilities (squared multiple correlation) = ij2ii / var(Xi)
X1 = (12)(.309) / 1 = .309.
X2 = (.9872)(.309) / 1 = .301.
X3 = (12)(.332) / 1 = .332.
X4 = (.9272)(.332) / 1 = .285.
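These hand calculations are easy to verify programmatically. The sketch below (Python/NumPy)
reproduces the estimates from the correlation matrix; small last-digit discrepancies reflect the
rounded intermediates used in the text:

```python
import numpy as np

S = np.array([[1.000, .305, .233, .216],
              [ .305, 1.000, .230, .213],
              [ .233, .230, 1.000, .308],
              [ .216, .213, .308, 1.000]])   # observed correlation matrix

phi12 = S[0, 2]                       # phi12 = r13
lam21 = S[1, 2] / S[0, 2]             # lambda21 = r23 / r13
lam42 = S[0, 3] / S[0, 2]             # lambda42 = r14 / r13
phi11 = S[0, 1] * S[0, 2] / S[1, 2]   # phi11 = r12 * r13 / r23
phi22 = S[2, 3] * S[0, 2] / S[0, 3]   # phi22 = r34 * r13 / r14

var_delta = [1 - phi11,
             1 - lam21**2 * phi11,
             1 - phi22,
             1 - lam42**2 * phi22]    # error variances var(delta1)..var(delta4)

print(round(lam21, 3), round(lam42, 3), round(phi11, 3), round(phi22, 3))
# 0.987 0.927 0.309 0.332
print([round(v, 3) for v in var_delta])
# [0.691, 0.699, 0.668, 0.714] -- the text's .715 reflects rounded intermediates
```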
Summary
T-rule
t = the number of parameters to be estimated.
q = the number of observed variables.
The t-rule requires that t ≤ q(q + 1) / 2, the number of non-redundant elements in the
observed covariance matrix.

Example: For the model shown above:
1. q(q + 1) / 2 = (4)(5) / 2 = 10.
2. The nine parameters to be estimated are: λ21, λ42, φ11, φ22, φ12, var(δ1), var(δ2),
var(δ3), and var(δ4).
3. Therefore, the model meets the t-rule. In this case, the model is said to be "overidentified"
because t = 9 < 10.
Three Indicator Rule

1. Three or more observed variables per latent variable.
2. Each row of Λx has only one non-zero element (one of which is fixed at 1 to set the scale).
That is, each X is an indicator of just one latent variable.
3. Θδ is a diagonal matrix. That is, the errors are uncorrelated.

This rule is sufficient, but not necessary, for mathematical identification.
Two Indicator Rule
1. Two observed variables per latent variable.
2. Each row of Λx has only one non-zero element (one of which is fixed at 1 to set the scale).
That is, each X is an indicator of just one latent variable.
3. Θδ is a diagonal matrix. That is, the errors are uncorrelated.
4. More than one latent variable.
5. Φ has no zero elements. That is, the latent variables are correlated with one another.

This rule is sufficient, but not necessary, for mathematical identification.
Degrees of Freedom
## The degrees of freedom for a CFA model:
d.f. = [q(q + 1) / 2] − t.
That is, the number of potential parameters minus the number of estimated parameters.
Model Evaluation
Theoretical proposition:
Σ = Σ(θ) = E(XX′), where:
1. Σ refers to reality.
2. Σ(θ) refers to theory.
3. E(XX′) refers to the correlation matrix of observed variables.
Notation:
S = E(XX′), the observed correlation matrix.
Σ(θ̂) = the matrix implied by the estimated parameters.
Null Hypothesis (the hypothesis that the theory fits the data): there is no difference between
the matrix implied by the estimated parameters and the observed correlation matrix.
Ho: S − Σ(θ̂) = 0

Alternative Hypothesis: the implied and observed matrices differ (the theory does not fit).
Ha: S − Σ(θ̂) ≠ 0
Note: A relatively small value for a model test statistic, such as chi-square, indicates that the
theory fits the data. Such a finding would indicate support for the theory. Thus, in evaluating
model fit, we look for a low chi-square value relative to the degrees of freedom, one showing a
p-value greater than .05.
Note: Measures of overall fit are not applicable to exactly identified models, because at least one
degree of freedom is required to test the hypothesis.
Note: Although evaluation statistics might indicate an overall good fit for the model, the individual
parameter estimates might be theoretically inappropriate or statistically non-significant.
Chi-Square

Ho: S − Σ(θ̂) = 0

χ² = (n − 1)(log|Σ(θ̂)| + tr(Σ(θ̂)⁻¹S) − log|S| − q), where:
n = sample size.
log refers to the natural log.
tr refers to the trace of a matrix.
q = the number of observed variables.
Consider the conceptual foundation of chi-square. Just as the familiar chi-square for a
contingency table summarizes the discrepancies between expected and observed cell counts,
the model chi-square summarizes the discrepancy between the implied matrix Σ(θ̂) and
the observed matrix S: the log-determinant and trace terms measure how far Σ(θ̂)⁻¹S
departs from the identity matrix, which is what it would be under perfect fit.
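For readers who want to compute the statistic directly, here is a minimal sketch of the ML fit
function above (Python/NumPy); it assumes S and Σ(θ̂) are already available as arrays:

```python
import numpy as np

def ml_chisquare(S: np.ndarray, Sigma_hat: np.ndarray, n: int) -> float:
    """ML chi-square: (n - 1)(log|Sigma_hat| + tr(Sigma_hat^-1 S) - log|S| - q)."""
    q = S.shape[0]
    fit = (np.log(np.linalg.det(Sigma_hat))
           + np.trace(np.linalg.solve(Sigma_hat, S))  # tr(Sigma_hat^-1 S)
           - np.log(np.linalg.det(S))
           - q)
    return (n - 1) * fit
```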
Coefficient of Determination
The coefficient of determination (R-square) calculates the percent of variance explained in the
observed variables (the X matrix) by the latent variables (the ξ matrix). It equals 1 minus the
determinant of the errors in estimating X (the Θδ matrix) divided by the determinant of the
input correlation matrix:

R² = 1 − |Θ̂δ| / |S|
## Goodness of Fit Indexes
Various goodness-of-fit indexes have been developed to assess model fit. The more
commonly used ones are the Goodness of Fit Index (GFI) and the Adjusted Goodness of Fit Index
(AGFI). The Residual Mean Square (RMS) and Critical N (CN) are also popular statistics
used to assess model fit. Critical N is equal to "what chi-square would be if the sample size
were 200." Thus, Critical N adjusts chi-square for very large samples, wherein a large sample
size can create a large chi-square statistic even when the "amount of error" is small.
These indexes have the disadvantage of not having ratio scales. Thus, the community of
scholars must arrive at some agreed upon level of the indexes that assures them of adequate
model fit. In general, a GFI or AGFI of .9 or above is considered acceptable. The community of
scholars looks for an RMS of below .05. The community of scholars looks for a CN of above
200, meaning that "a sample size of more than 200 is needed to arrive at a chi-square that
indicates a probability of alpha greater than .05." See related article by Schreiber et al. for a
detailed description of model evaluation for CFA and Structural Equation models.
## Component Fit Measures
The t-test is used to evaluate the statistical significance of each parameter
estimate, wherein t = estimate / standard error of the estimate. A t-ratio of 1.98 or greater in
absolute value indicates statistical significance at alpha = .05.
Reliability of the Parameter Estimates
Consider this model:

[Path diagram: a single latent variable ξ1 measured by X1 and X2, with loadings λ11 and λ21
and error terms δ1 and δ2.]
Reliability of Xi
The reliability (i.e., communality) of Xi is the magnitude of the direct relationship that all latent
variables have on Xi.
In the Parallel Model:
λ11 = λ21
var(δ1) = var(δ2)

In the Tau-Equivalent Model:
λ11 = λ21
var(δ1) ≠ var(δ2)

In the Congeneric Model:
λ11 ≠ λ21
var(δ1) ≠ var(δ2)
The Reliability of ξ

ρξ (the reliability of ξ) = (Σ λxi)² / [(Σ λxi)² + Σ θii]

where each sum runs over the q indicators of ξ (i = 1, ..., q).
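A short sketch of this composite-reliability formula, applied to the ξ1 block of the worked
example earlier in these notes (λ = 1 and .987; error variances .691 and .699):

```python
import numpy as np

def composite_reliability(loadings, error_variances):
    """rho_xi = (sum of lambdas)^2 / [(sum of lambdas)^2 + sum of thetas]."""
    s = np.sum(loadings) ** 2
    return s / (s + np.sum(error_variances))

# xi1 block of the worked example: lambdas (1, .987), var(delta) (.691, .699)
print(round(composite_reliability([1.0, .987], [.691, .699]), 3))  # 0.74
```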
Reporting standardized parameter estimates enables the community of scholars to compare
different studies of the same model. The formulas for calculating standardized estimates are:

λijs = λij φjj^(1/2) / var(Xi)^(1/2)
θijs = θij / [var(Xi) var(Xj)]^(1/2)

where i refers to an observed variable and j refers to a latent variable.

In matrix format:
Λxs = Dx⁻¹ Λx Dφ
Φs = Dφ⁻¹ Φ Dφ⁻¹
Θδs = Dx⁻¹ Θδ Dx⁻¹

where:
Dx = (diag [ΛxΦΛx′ + Θδ])^(1/2)
Dφ = (diag Φ)^(1/2)
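The matrix formulas translate directly into code. A minimal sketch, assuming unstandardized
Λx, Φ, and Θδ are available as NumPy arrays:

```python
import numpy as np

def standardize(Lambda_x, Phi, Theta_delta):
    """Standardize CFA estimates using Dx and Dphi as defined above."""
    Sigma = Lambda_x @ Phi @ Lambda_x.T + Theta_delta
    Dx_inv = np.diag(1.0 / np.sqrt(np.diag(Sigma)))   # Dx^-1
    d_phi = np.sqrt(np.diag(Phi))
    Dphi, Dphi_inv = np.diag(d_phi), np.diag(1.0 / d_phi)
    return (Dx_inv @ Lambda_x @ Dphi,        # standardized loadings
            Dphi_inv @ Phi @ Dphi_inv,       # latent correlation matrix
            Dx_inv @ Theta_delta @ Dx_inv)   # standardized error matrix
```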
Unique Validity Variance
In cases where a measurement model specifies correlated factors or error terms, one might want
to know the unique commonality for an observed variable.

Uxij (the unique validity variance, or commonality, of the effect of ξj on Xi) = Rxi² − Rxi(j)², where:

Rxi² is the squared multiple correlation coefficient for Xi. This is the proportion of variance in Xi
explained by all latent variables in the model that have a direct effect on Xi.

Rxi² = ρxi Φ*⁻¹ ρxi′, where:
1. ρxi is the vector of correlations of the ξ with Xi, for all ξ that affect Xi (a 1 × d vector, where d is the
number of ξ with direct effects on Xi).
2. Φ* = the correlation matrix of all ξ with direct effects on Xi.

Rxi(j)² is the squared multiple correlation coefficient for Xi, controlling for the effects of the latent
variable of interest, ξj.

Rxi(j)² = [ρxi(j) Φ(j)*⁻¹ ρxi(j)′] / var(Xi), where:
1. ρxi(j) = the correlations of the ξ with Xi, for all ξ that affect Xi except ξj, the latent variable of
interest (a 1 × (d − 1) vector).
2. Φ(j)* = the correlation matrix of all ξ with direct effects on Xi, except ξj, the latent variable of
interest.

Note: The unique validity variance might be relatively low in comparison with Rxi² because Xi
might depend upon highly correlated latent variables.
Degree of Collinearity
A measurement model with more than one latent variable, wherein the latent variables are
correlated with one another, should be evaluated for its degree of collinearity.
R(j)² = φ12² / (φ11 φ22)  (i.e., the squared correlation of ξ1 and ξ2).
Factor Score Estimation
Having found some underlying dimension(s) in the data, the researcher might want to
construct a factor scale. A factor scale is a latent variable derived from two or more
observed variables that have been demonstrated to have content and construct validity,
and which are sufficiently reliable to be used for further analysis.
Factor scales can be used in two ways: 1) to examine observations in terms of their
scores on the latent variables, 2) to use the latent variables in subsequent analysis as
independent and/or dependent variables.
Measurements on factor scales can be constructed in several ways. First, they can be
calculated by simply adding or obtaining the mean of the two or more observed variables
comprising the scale. If the observed variables differ in their item reliabilities, however, the
researcher might want to construct the factor scale based upon weighted observed
variables. Observed variables typically are weighted by their parameter estimates on the
factor. Listed below are three procedures that use different assumptions to create more
refined factor scores.
A basic procedure weights the observed variables by their factor loadings:

Factor Score = (Λx′S⁻¹)x, where S = the observed correlation matrix.
Bollen's Procedure
Bollen suggests accounting for the correlations among the latent variables:
Factor Score = (ΦΛx′S⁻¹)x, where S = the observed correlation matrix.
Bartlett's Procedure
Bartlett suggests giving more weight to observed variables with greater item reliability, weighting
by the inverse of the error covariance matrix:
Factor Score = [(Λx′Θδ⁻¹Λx)⁻¹Λx′Θδ⁻¹]x
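The three weighting schemes can be compared side by side. A minimal sketch (Python/NumPy);
x is an n × q data matrix, and for Bartlett's procedure Θδ must be nonsingular:

```python
import numpy as np

def factor_scores(x, Lambda_x, Phi, Theta_delta, S):
    """Factor scores under the three procedures sketched above (rows of x = cases)."""
    basic = x @ np.linalg.solve(S, Lambda_x)      # (Lambda' S^-1 x')' per case
    bollen = basic @ Phi                          # adds Phi: (Phi Lambda' S^-1 x')'
    Ti = np.linalg.inv(Theta_delta)
    bartlett = (x @ Ti @ Lambda_x
                @ np.linalg.inv(Lambda_x.T @ Ti @ Lambda_x))
    return basic, bollen, bartlett
```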
Hypothesis Testing and Model Comparison
One advantage to theory testing and the subsequent use of CFA is that nested models can be
used to test hypotheses. One can conduct a difference in chi-square test, for example, to
evaluate the extent to which changes in model specification affect model fit.
Ho: The model fits the data.
If the model fits the data, then chi-square will be low and the p-value will be
over .05 (assuming an assigned type-I error rate of 5%), and we fail to reject Ho.

Ha: The model does not fit the data.
If there is no relationship between the model and the data, then chi-square will be high and
the p-value will be less than .05 (assuming an assigned type-I error rate of 5%), and we reject Ho.
The approach to testing differences in estimates across two samples, or testing for the
moderating effect of an external variable, is to estimate a baseline model that assumes no
difference in estimates across the two samples. Then, estimate less restricted models, ones
that allow for differences in parameter estimates across levels of the external variable. The chi-
square calculation for each less restricted model will be less than the chi-square value for the
baseline model. And the degrees of freedom for the less restricted model will be less than that
of the baseline model. To determine if a less restricted model fits the data better than the
baseline model, one can calculate a chi-square difference test:
χ²diff = χ²(baseline) − χ²(less restricted).
This difference score is evaluated at the difference in the degrees of freedom for the two
models:
d.f.(baseline) − d.f.(less restricted).
For example, suppose the chi-square for a baseline model that contains three parameters in the
gamma matrix equals 142.691 at 123 d.f. Suppose that a less restricted model is estimated that
allows for the three parameters in the gamma matrix to be estimated separately for the two
groups under consideration. And suppose that the chi-square for this less restricted model
equals 110.527 at 120 d.f. Then the difference in chi-square equals 32.164 at 3 d.f. The critical
value of chi-square at three degrees of freedom for a type-I error rate of 5% equals 7.815.
Therefore, we would conclude that, at a type-I error rate of 5%, the less restricted model fits the
data better than does the baseline model, meaning that the parameter estimates differ
significantly from one another across the two levels of the external variable. The next step
would be to conduct a chi-square difference test for each of the paths in the gamma matrix to
determine which of the three paths has significantly different parameter estimates across the
two levels of the external variable.
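The arithmetic of this example is easy to check with SciPy:

```python
from scipy.stats import chi2

chi_diff = 142.691 - 110.527               # 32.164
df_diff = 123 - 120                        # 3
print(round(chi2.ppf(0.95, df_diff), 3))   # critical value: 7.815
print(chi2.sf(chi_diff, df_diff))          # p-value, far below .05
```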
Typically, one would allow a matrix of estimates, such as the lambda, gamma, beta, or error
(psi, theta-delta, and theta-epsilon) matrices, to become less restricted to examine the
possibility of differences in parameters across the levels of the external variable. If the chi-
square difference test indicates that the baseline model and less restricted model contain at
least some significantly different parameter estimates, then one would test each path within a
matrix at a time to locate the ones that differ significantly from one another (they might all be
significantly different from one another).
If one finds a less restricted model that fits the data significantly better than the baseline model,
then this model becomes the new "baseline" model for testing of further differences in
parameter estimates across levels of the external variable.
The Sociology 512 web site includes notes on hypothesis testing using the SAS and LISREL
software packages.
## Second-Order (Higher-Order) Factor Analysis
Some latent variables are themselves considered to be composed of multiple latent variables.
The latent variable Locus of Control (LOC), for example, is thought to comprise three sub-
dimensions: internal, chance, and powerful others. The diagram below illustrates a second-
order model of LOC, with the variable "perceived risk" used to assess the predictive validity of
the measure of LOC.
η = Γξ + ζ, where:

η (eta): the dependent (i.e., endogenous) latent variable.
Γ (gamma): parameter estimates for the effects of the independent (i.e., exogenous) latent variables.
ξ (xi): the independent (i.e., exogenous) latent variables.
ζ (zeta): the errors for the equation.
## Sensitivity Analysis: Testing Equality of Parameter Estimates Across Two Groups
A central premise of CFA is that the theory fits the data. Thus, if an observed variable is posited
to measure just one latent variable then it should not also have a significant parameter estimate
on another latent variable. If an observed variable X1 is posited to measure ξ1, for example,
then X1 should not have a significant parameter estimate on ξ2. If it does, then we can question
the construct validity of X1 as an indicator of ξ1 as well as the theory that specifies that X1
measures only ξ1.
Sensitivity analysis examines the extent to which a theory has construct validity: the extent to
which hypotheses of no relationship are supported by the data.
Consider, for example, the Locus of Control CFA model as specified by Sapp and Harrod (see:
http://www.soc.iastate.edu/sapp/Soc512MeasurementRefs.html). Sapp and Harrod posit that 1)
the latent variable Internal is measured with three observed variables: Own Actions, Protect,
and Determine, 2) the latent variable Chance is measured with three observed variables:
Accidental Happenings, Bad Luck Happenings, and Lucky, and 3) the latent variable Powerful
Others is measured with three observed variables: Pressure Groups, Powerful Others, and
Powerful People (see: http://www.soc.iastate.edu/sapp/soc512LOCCFAModel.pdf). Implied by
this model is that Own Actions, for example, which is posited to measure the latent variable
Internal, is not significantly related to either of the remaining latent variables: Chance or
Powerful Others.
Sensitivity analysis examines whether the implied hypotheses of no relationship are supported
by the data. Shown below are examples of sensitivity analysis for the Sapp and Harrod LOC
model conducted in LISREL.
Means and Intercepts for Latent Variables
In CFA with multiple samples, it is possible to estimate means and intercepts for the latent
variables.
X(g) = τx + Λxξ(g) + δ(g)

where:
τx is the constant intercept term for each Xi. This value is set to be equal across samples (g).
Loadings are listed in the Lambda-X matrices. Intercepts are listed in the Tau-X matrices.
These matrices are the same for all groups.
Item        Internal   Chance   P. Others   Var(x)   Intercept
Actions       .579                           1.144     5.880
Protect       .673                           0.860     5.670
Determine     .520                           2.453     4.556
Acchap                   .538                1.449     5.181
Lucky                    .692                2.743     4.813
Pressure                          .512       1.935     4.966
Powoth                            .822       2.231     5.260
Powple                            .721       1.469     5.002
Φ (Phi) matrices of the latent variables, by level of perceived risk:

                                 Internal   Chance   P. Others
Low Risk Perceivers   Internal     1.121
(n = 67)              Chance        .612      .679
                      P. Others     .586      .251     1.024

High Risk Perceivers  Internal      .869
(n = 62)              Chance        .814      .348
                      P. Others     .627      .696      .974
Factor Means (Kappa Matrix)
## Scaled Factor Mean

Scaled factor mean(i, j) = [ Σ(i = 1 to g) (x̄ij − x̄j) * n̄ / g ] / ni

where: i = 1, 2, 3, ... g groups
       j = 1, 2, 3, ... k factors
       n̄ = (62 + 67) / 2 = 64.5

Example: [ (−.218 + .109) * 64.5 ] / 62 = −.113
Analysis of Ordinal Variables
Albright and Park (2009) note that:
The maximum likelihood estimation (MLE) approach relies on the strong assumption of
multivariate normality. In practice, a substantial amount of social science data is non-normal.
Survey responses are often coded as yes/no or as scores on an ordered scale (e.g. strongly
disagree, disagree, neutral, agree, strongly agree). In the presence of categorical or ordinal
data, MLE may not work properly, calling for alternative estimation methods.
Mplus and LISREL employ a multi-step method for ordinal outcome variables that analyzes a
matrix of polychoric correlations rather than covariances. This approach works as follows:
1) thresholds are estimated by maximum likelihood,
2) these estimates are used to estimate a polychoric correlation matrix, which in turn is used to
3) estimate parameters through (diagonally) weighted least squares, using the inverse of the
asymptotic covariance matrix as the weight matrix (Muthén, 1984; Jöreskog, 1990).
In LISREL, the diagonally weighted least squares (DWLS) method needs to be specified.
Alternatively, the polychoric correlation matrix and asymptotic covariance matrix are estimated
and saved into a LISREL system file (.dsf) using PRELIS before fitting the model.
Mplus automatically follows the above steps when the syntax includes a line identifying observed
variables as categorical.
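As a rough illustration of steps 1 and 2 (not the LISREL or Mplus implementation), the sketch
below estimates a single polychoric correlation by maximum likelihood from a two-way table of
counts; the example counts are made up:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

def polychoric(table):
    """ML sketch of the polychoric correlation for a two-way table of ordinal counts."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # Step 1: thresholds from cumulative marginal proportions (final cut of 1.0 dropped).
    a = norm.ppf(np.cumsum(table.sum(axis=1))[:-1] / n)
    b = norm.ppf(np.cumsum(table.sum(axis=0))[:-1] / n)
    a = np.concatenate(([-8.0], a, [8.0]))  # +/-8 stands in for +/-infinity
    b = np.concatenate(([-8.0], b, [8.0]))

    def neg_loglik(rho):
        mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
        ll = 0.0
        for i in range(table.shape[0]):
            for j in range(table.shape[1]):
                # Cell probability: rectangle of the bivariate normal via the CDF.
                p = (mvn.cdf([a[i + 1], b[j + 1]]) - mvn.cdf([a[i], b[j + 1]])
                     - mvn.cdf([a[i + 1], b[j]]) + mvn.cdf([a[i], b[j]]))
                ll += table[i, j] * np.log(max(p, 1e-12))
        return -ll

    # Step 2: maximize the likelihood over rho.
    return minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded").x

# Example with made-up counts for two 3-category items:
print(polychoric([[20, 10, 5], [8, 25, 12], [2, 9, 19]]))
```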
Instructions
[For those times when you will be using data collected by persons other than those who
graduated from ISU, given that ISU graduates never would be so silly as to collect ordinal-level
data!]
In cases of non-normality (i.e., assumed for ordinal-level data), it is a misuse of CFA
methodology to:
Use arbitrary scale scores for categories, pretending that these scores have interval
scale properties.
Compute a covariance matrix or product-moment correlation matrix for such scores.
Analyze cov/correlation matrices using the method of maximum-likelihood.
Such misuse can lead to:
distorted parameter estimates.
incorrect measures of chi-square.
incorrect estimates of standard error, and therefore incorrect t-ratios.
When conducting CFA with ordinal-level data, use weighted least squares with an asymptotic
covariance matrix. N must be at least 200 if k < 12, and at least 1.5k(k + 1) if k ≥ 12.
Power Analysis
From MEERA: [http://meera.snre.umich.edu/plan-an-evaluation/related-topics/power-analysis-statistical-significance-effect-size]
What is power?
To understand power, it is helpful to review what inferential statistics test. When you conduct
an inferential statistical test, you are often comparing two hypotheses:
The null hypothesis: This hypothesis predicts that your program will not have an
effect on your variable of interest. For example, if you are measuring students' level of
concern for the environment before and after a field trip, the null hypothesis is that their
level of concern will remain the same.
The alternative hypothesis: This hypothesis predicts that you will find a difference
between groups. Using the example above, the alternative hypothesis is that students'
post-trip level of concern for the environment will differ from their pre-trip level of
concern.
Statistical tests look for evidence that you can reject the null hypothesis and conclude that
your program had an effect. With any statistical test, however, there is always the possibility
that you will find a difference between groups when one does not actually exist. This is
called a Type I error. Likewise, it is possible that when a difference does exist, the test will
not be able to identify it. This type of mistake is called a Type II error.
Power refers to the probability that your test will find a statistically significant difference when
such a difference actually exists. In other words, power is the probability that you will reject
the null hypothesis when you should (and thus avoid a Type II error). It is generally accepted
that power should be .8 or greater; that is, you should have an 80% or greater chance of
finding a statistically significant difference when there is one.
## (See pages 338-349).
1. Estimate the more specified model and ACOV(θ̂), the covariance matrix of the parameter
estimates for this model.
2. Calculate the added parameter estimates for the less restricted model (Ha) under the
assumption that all standardized estimates equal .1.
3. NCP = [row vector of the added parameter estimates] * [inverse of the diagonal matrix of the
variances of the added parameters] * [column vector of the added parameter estimates].
4. Calculate the power of the test.
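Once the NCP is in hand, step 4 is a noncentral chi-square tail probability. A minimal sketch;
the NCP value here is illustrative only:

```python
from scipy.stats import chi2, ncx2

def chisq_power(ncp, df, alpha=0.05):
    """Step 4: power = P(noncentral chi-square exceeds the null critical value)."""
    critical = chi2.ppf(1 - alpha, df)   # rejection threshold under the null
    return ncx2.sf(critical, df, ncp)    # tail probability under the alternative

print(round(chisq_power(ncp=10.0, df=3), 2))  # power for an illustrative NCP of 10
```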
The Multitrait-Multimethod Matrix
The Multitrait-Multimethod Matrix (hereafter labeled MTMM) is an approach to
assessing the construct validity of a set of measures in a study. It was developed in
1959 by Campbell and Fiske (Campbell, D. and Fiske, D. (1959). Convergent and
discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin,
56(2), 81-105.) in part as an attempt to provide a practical methodology that researchers
could actually use (as opposed to the nomological network idea, which was theoretically
useful but did not include a methodology). Along with the MTMM, Campbell and Fiske introduced
two new types of validity -- convergent and discriminant -- as subcategories of
construct validity. Convergent validity is the degree to which concepts that should
be related theoretically are interrelated in reality. Discriminant validity is the degree
to which concepts that should not be related theoretically are, in fact, not interrelated
in reality. You can assess both convergent and discriminant validity using the
MTMM. In order to be able to claim that your measures have construct validity, you
have to demonstrate both convergence and discrimination.
The MTMM is simply a matrix or table of correlations arranged to facilitate the
interpretation of the assessment of construct validity. The MTMM assumes that you
measure each of several concepts (called traits by Campbell and Fiske) by each of
several methods (e.g., a paper-and-pencil test, a direct observation, a performance
measure). The MTMM is a very restrictive methodology -- ideally you should
measure each concept by each method.
To construct an MTMM, you need to arrange the correlation matrix by concepts
within methods. The figure shows an MTMM for three concepts (traits A, B and C),
each of which is measured with three different methods (1, 2 and 3). Note that you
lay the matrix out in blocks by method. Essentially, the MTMM is just a correlation
matrix between your measures, with one exception -- instead of 1's along the
diagonal (as in the typical correlation matrix) we substitute an estimate of the
reliability of each measure as the diagonal.
Before you can interpret an MTMM, you have to understand how to identify the
different parts of the matrix. First, you should note that the matrix consists of
nothing but correlations. It is a square, symmetric matrix, so we only need to look at
half of it (the figure shows the lower triangle). Second, these correlations can be
grouped into three kinds of shapes: diagonals, triangles, and blocks. The specific
shapes are:
## The Reliability Diagonal
(monotrait-monomethod)
## Estimates of the reliability of each measure in the matrix. You can
estimate reliabilities a number of different ways (e.g., test-retest, internal
consistency). There are as many correlations in the reliability diagonal as
there are measures -- in this example there are nine measures and nine
reliabilities. The first reliability in the example is the correlation of Trait
A, Method 1 with Trait A, Method 1 (hereafter, I'll abbreviate this
relationship A1-A1). Notice that this is essentially the correlation of the
measure with itself. In fact such a correlation would always be perfect
(i.e., r=1.0). Instead, we substitute an estimate of reliability. You could
also consider these values to be monotrait-monomethod correlations.
## The Validity Diagonals
(monotrait-heteromethod)
## Correlations between measures of the same trait measured using
different methods. Since the MTMM is organized into method blocks,
there is one validity diagonal in each method block. For example, look
at the A1-A2 correlation of .57. This is the correlation between two
measures of the same trait (A) measured with two different methods (1
and 2). Because the two measures are of the same trait or concept, we
would expect them to be strongly correlated. You could also consider
these values to be monotrait-heteromethod correlations.
## The Heterotrait-Monomethod Triangles
These are the correlations among measures that share the same method
of measurement. For instance, A1-B1 = .51 in the upper left heterotrait-
monomethod triangle. Note that what these correlations share is
method, not trait or concept. If these correlations are high, it is because
measuring different things with the same method results in correlated
measures. Or, in more straightforward terms, you've got a strong
"methods" factor.
## Heterotrait-Heteromethod Triangles
These are correlations that differ in both trait and method. For instance,
A1-B2 is .22 in the example. Generally, because these correlations share
neither trait nor method we expect them to be the lowest in the matrix.
## The Monomethod Blocks
These consist of all of the correlations that share the same method of
measurement. There are as many blocks as there are methods of
measurement.
## The Heteromethod Blocks
These consist of all correlations that do not share the same methods.
There are (K(K-1))/2 such blocks, where K = the number of methods.
In the example, there are 3 methods and so there are (3(3-1))/2 =
(3(2))/2 = 6/2 = 3 such blocks.
Now that you can identify the different parts of the MTMM, you can begin to
understand the rules for interpreting it. You should realize that MTMM
interpretation requires the researcher to use judgment. Even though some of the
principles may be violated in an MTMM, you may still wind up concluding that you
have fairly strong construct validity. In other words, you won't necessarily get perfect
adherence to these principles in applied research settings, even when you do have
evidence to support construct validity. To me, interpreting an MTMM is a lot like a
physician's reading of an x-ray. A practiced eye can often spot things that the
neophyte misses! A researcher who is experienced with MTMM can use it to identify
weaknesses in measurement as well as to assess construct validity.
To help make the principles more concrete, let's make the example a bit more
realistic. We'll imagine that we are going to conduct a study of sixth grade students
and that we want to measure three traits or concepts: Self Esteem (SE), Self
Disclosure (SD) and Locus of Control (LC). Furthermore, let's measure each of
these three different ways: a Paper-and-Pencil (P&P) measure, a Teacher rating, and a
Parent rating. The results are arrayed in the MTMM. As the principles are presented,
try to identify the appropriate coefficients in the MTMM and make a judgment about
how well each principle is met.
## Coefficients in the reliability diagonal should consistently be the highest in the
matrix.
That is, a trait should be more highly correlated with itself than with
anything else! This is uniformly true in our example.
## Coefficients in the validity diagonals should be significantly different from
zero and high enough to warrant further investigation.
## This is essentially evidence of convergent validity. All of the correlations
in our example meet this criterion.
A validity coefficient should be higher than values lying in its column and row
in the same heteromethod block.
## In other words, (SE P&P)-(SE Teacher) should be greater than (SE
P&P)-(SD Teacher), (SE P&P)-(LC Teacher), (SE Teacher)-(SD P&P)
and (SE Teacher)-(LC P&P). This is true in all cases in our example.
## A validity coefficient should be higher than all coefficients in the heterotrait-
monomethod triangles.
## This essentially emphasizes that trait factors should be stronger than
methods factors. Note that this is not true in all cases in our example.
For instance, the (LC P&P)-(LC Teacher) correlation of .46 is less than
(SE Teacher)-(SD Teacher), (SE Teacher)-(LC Teacher), and (SD
Teacher)-(LC Teacher) -- evidence that there might be a methods
factor, especially on the Teacher observation method.
## The same pattern of trait interrelationship should be seen in all triangles.
The example clearly meets this criterion. Notice that in all triangles the
SE-SD relationship is approximately twice as large as the relationships
that involve LC.
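Several of these principles lend themselves to programmatic checking. A sketch for the third
principle (each validity coefficient versus its heteromethod row and column), assuming the
traits-within-methods layout described above:

```python
import numpy as np

def check_validity_diagonals(M, n_traits, n_methods):
    """Check that each validity coefficient exceeds the other correlations in its
    row and column of the same heteromethod block. M is the full symmetric MTMM
    laid out with traits nested within method blocks."""
    M = np.asarray(M)
    all_pass = True
    for m1 in range(n_methods):
        for m2 in range(m1):
            block = M[m1 * n_traits:(m1 + 1) * n_traits,
                      m2 * n_traits:(m2 + 1) * n_traits]
            for t in range(n_traits):
                validity = block[t, t]           # same trait, different methods
                others = np.concatenate((np.delete(block[t, :], t),
                                         np.delete(block[:, t], t)))
                if not np.all(validity > others):
                    all_pass = False
                    print(f"Trait {t + 1}, methods {m2 + 1} vs {m1 + 1}: "
                          f"validity {validity} is not the highest")
    return all_pass
```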
The MTMM idea provided an operational methodology for assessing construct
validity. In the one matrix it was possible to examine both convergent and
discriminant validity simultaneously. By its inclusion of methods on an equal footing
with traits, Campbell and Fiske stressed the importance of looking for the effects of
how we measure in addition to what we measure. And, MTMM provided a rigorous
framework for assessing construct validity.
Despite these advantages, MTMM has received little use since its introduction in
1959. There are several reasons. First, in its purest form, MTMM requires that you
have a fully-crossed measurement design -- each of several traits is measured by each
of several methods. While Campbell and Fiske explicitly recognized that one could
have an incomplete design, they stressed the importance of multiple replication of
the same trait across method. In some applied research contexts, it just isn't possible
to measure all traits with all desired methods (would you use an "observation" of
weight?). In most applied social research, it just wasn't feasible to make methods an
explicit part of the research design. Second, the judgmental nature of the MTMM
may have worked against its wider adoption (although it should actually be perceived
as a strength). Many researchers wanted a test for construct validity that would result
in a single statistical coefficient that could be tested -- the equivalent of a reliability
coefficient. It was impossible with MTMM to quantify the degree of construct validity
in a study. Finally, the judgmental nature of MTMM meant that different researchers
could legitimately arrive at different conclusions.
As mentioned above, one of the most difficult aspects of MTMM from an implementation
point of view is that it required a design that included all combinations of both traits and
methods. But the ideas of convergent and discriminant validity do not require the methods
factor. To see this, we have to reconsider what Campbell and Fiske meant by convergent and
discriminant validity.
Convergent Validity

Convergent validity is the principle that measures of theoretically similar constructs should be
highly intercorrelated. We can extend this idea further by thinking of a measure that has
multiple items, for instance, a four-item scale designed to measure self-esteem. If
each of the items actually does reflect the construct of self-esteem, then we would
expect the items to be highly intercorrelated as shown in the figure. These strong
intercorrelations are evidence in support of convergent validity.
Discriminant Validity

Discriminant validity is the principle that measures of theoretically different constructs should
not correlate highly with each other. We can see this in the example, which shows two constructs --
self-esteem and locus of control -- each measured in two instruments. We would
expect that, because these are measures of different constructs, the cross-construct
correlations would be low, as shown in the figure. These low correlations are
evidence for discriminant validity. Finally, we can put this all together to see how we can address
both convergent and discriminant validity simultaneously. Here, we have two
constructs -- self-esteem and locus of control -- each measured with three
instruments. The red and green correlations are within-construct ones. They are a
reflection of convergent validity and should be strong. The blue correlations are
cross-construct and reflect discriminant validity. They should be uniformly lower
than the convergent coefficients.
Note that this matrix does not explicitly include a
methods factor as a true MTMM would. The matrix examines both convergent and
and interrelationships. We can see in this example that the MTMM idea really had
two major themes. The first was the idea of looking simultaneously at the pattern of
convergence and discrimination. This idea is similar in purpose to the notions
implicit in the nomological network -- we are looking at the pattern of
interrelationships based upon our theory of the nomological net. The second idea in
MTMM was the emphasis on methods as a potential confounding factor.
The Multitrait-Multimethod Matrix
While methods may confound the results, they won't necessarily do so in any given
study. And, while we need to examine our results for the potential for methods
factors, it may be that combining this desire to assess the confound with the need to
assess construct validity is more than one methodology can feasibly handle. Perhaps
if we split the two agendas, we will find it easier to examine
convergent and discriminant validity. But what do we do about methods
factors? One way to deal with them is through replication of research projects, rather
than trying to incorporate a methods test into a single research study. Thus, if we
find a particular outcome in a study using several measures, we might see if that same
outcome is obtained when we replicate the study using different measures and
methods of measurement for the same constructs. The methods issue is considered
more as an issue of generalizability (across measurement methods) rather than one of
construct validity.
When viewed this way, we have moved from the idea of an MTMM to that of the
multitrait matrix, which enables us to examine convergent and discriminant validity, and
hence construct validity. We will see that when we move away from the explicit
consideration of methods and when we begin to see convergence and discrimination
as differences of degree, we essentially have the foundation for the pattern matching
approach to assessing construct validity. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8491566181182861, "perplexity": 3085.564930853253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313428.28/warc/CC-MAIN-20190817143039-20190817165039-00485.warc.gz"} |
https://astronomy.stackexchange.com/questions/43242/how-to-calculate-galaxy-bolometric-luminosity | # How to calculate galaxy bolometric luminosity?
I am a bit confused by bolometric corrections. If I have an x-ray luminosity in the 2-10 keV band, how does one convert that to $$L_{bol}$$? From Netzer's book The Physics and Evolution of Active Galactic Nuclei I got these bolometric correction factors:
Optical: $$BC_{5100} = 53 - log(L_{5100})$$
and x-ray: $$log(L_{5100}) = 1.4\times log(L_X) - 16.8$$
where the bolometric correction for the x-ray luminosity ($$L_X$$) is obtained in two steps, using the equation for the optical BC again. The index $$5100$$ stands for the optical continuum measured at $$5100$$ angstrom. I can't figure out what I have to do with this $$BC_{5100}$$ once I've got it. Multiply by $$L_X$$? The book says "(...) BCs, that can be used to convert a single-band measurement of $$L$$ into an approximate $$L_{bol}$$."
I'm happy to use correction factors defined elsewhere instead of the ones I quoted. I just want to calculate an estimate for $$L_{bol}$$ for my galaxies.
Working in magnitudes, the bolometric correction is defined by

$$BC = M_{\rm bol} - M_{5100} = -2.5\log\left(\frac{L_{\rm bol}}{L_{5100}}\right),$$

so that

$$\log L_{\rm bol} = \log L_{5100} - 0.4\,BC.$$

Substituting $$BC_{5100} = 53 - \log L_{5100}$$ gives

$$\log L_{\rm bol} = \log L_{5100} + 0.4\log L_{5100} - 21.2 = 1.4\log L_{5100} - 21.2,$$

and then substituting $$\log L_{5100} = 1.4\log L_X - 16.8$$ gives

$$\log L_{\rm bol} = 1.4(1.4\log L_X - 16.8) - 21.2 = 1.96\log L_X - 44.72.$$