http://mathhelpforum.com/calculus/33296-particular-inegral.html | 1. ## Particular Integral
$m\frac{d^2x}{dt^2} + 5kx = -mg\sin\alpha + 11kl_0$
I am trying to find the general solution.
Setting $\omega^2=\frac{k}{m}$, I found $x_c$ to be:
$x(t) = A\cos\left( \sqrt{5}\,\omega t + \phi \right)$
I have no idea how to find $x_p$...
What trial solution would you suggest using?
2. Hello,
You can consider that x is a constant. Then its second derivative will be zero and you'll have an equation for $x_p$.
I hope for you it's true that $\omega^2=\frac{5k}{m}$, because this is something I can't check ^^
3. I am totally new at this... I still have no idea.
I divided the original equation through by $m$ to get:
$\frac{d^2x}{dt^2} + 5\omega^2 x = -g\sin\alpha + 11\omega^2 l_0$
am i getting any closer?
4. I really don't know for $x_c$... I don't remember the rules for the general solution! If you're not sure, differentiate again to check that $x_c$ satisfies $\frac{d^2x_c}{dt^2}+5\omega^2 x_c=0$.
However, for xp :
xp is a particular solution. Let X be a constant satisfying the equation.
The second derivative of X is 0.
We have $5kX = -mg\sin\alpha + 11kl_0$
-> $X=x_p=\frac{-mg\sin\alpha + 11kl_0}{k}$
5. I don't get it. Where did the 5 go?
I did this:
$m\frac{d^2x}{dt^2} + 5kx = -mg\sin\alpha + 11kl_0$
$\frac{d^2x}{dt^2} + 5\omega^2 x = -g\sin\alpha + 11\omega^2 l_0$
$\frac{d^2x}{dt^2} + 5\omega^2 x = \omega^2 \left(11l_0-\frac{mg}{k}\sin\alpha\right)$
$x_p=\frac{1}{5}\left(11l_0-\frac{mg}{k}\sin\alpha\right)$
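A quick sanity check with sympy (just a sketch; I declare the symbols positive so the oscillatory homogeneous solution comes out):

```python
import sympy as sp

t = sp.symbols('t')
m, k, g, alpha, l0 = sp.symbols('m k g alpha l_0', positive=True)
x = sp.Function('x')

# m x'' + 5 k x = -m g sin(alpha) + 11 k l_0
ode = sp.Eq(m*x(t).diff(t, 2) + 5*k*x(t), -m*g*sp.sin(alpha) + 11*k*l0)
sol = sp.dsolve(ode, x(t))
print(sp.expand(sol.rhs))
# Constant part: 11*l_0/5 - g*m*sin(alpha)/(5*k),
# i.e. x_p = (1/5)*(11*l_0 - (m*g/k)*sin(alpha))
```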
How's that? Anyone? Anyone?
6. I'm sorry, I forgot the 5...
7. So we pretty much have the same answer. Thanks for the help!
8. Originally Posted by billym
I don't get it. Where did the 5 go?
I did this:
$m\frac{d^2x}{dt^2} + 5kx = -mg\sin\alpha + 11kl_0$
$\frac{d^2x}{dt^2} + 5\omega^2 x = -g\sin\alpha + 11\omega^2 l_0$
$\frac{d^2x}{dt^2} + 5\omega^2 x = \omega^2 \left(11l_0-\frac{mg}{k}\sin\alpha\right)$
$x_p=\frac{1}{5}\left(11l_0-\frac{mg}{k}\sin\alpha\right)$
How's that? Anyone? Anyone?
Relax guy. Be vague.
Moo dropped a 5. It was a simple error, easily spotted. As you did. If you understand what Moo was doing to get the particular solution, then all's well. So go back, put the 5 in where it's meant to go. There's the answer.
Life's full of small mistakes, most easily spotted and corrected for. The world keeps turning. Don't have a cow, man.
9. Don't have a cow, man.
Moo
Thanks for explaining it more precisely than I did ^^
10. I really wasn't having a cow man! I seriously wasn't sure if she dropped the 5 on purpose or not. This site feels like I'm doing my homework with God, so I just assume everyone else is right.
11. Originally Posted by billym
I really wasn't having a cow man! I seriously wasn't sure if she dropped the 5 on purpose or not. This site feels like I'm doing my homework with God, so I just assume everyone else is right.
Not at all, we're all equal in front of maths
errare humanum est
Btw, what does "have a cow" mean? ^^'
12. He was likening my concern over your missing 5 to the ordeal of giving birth to a cow.
http://mathoverflow.net/questions/15370/tools-for-the-langlands-program/15388 | # Tools for the Langlands Program?
Hi,
I know this might be a bit vague, but I was wondering what are the hypothetical tools necessary to solve the Langlands conjectures (the original statements or the "geometric" analogue). What I mean by this is the following: for the Weil Conjectures it became clear that, in order to prove them, one needed to develop a marvelous cohomology theory that would explain Weil's observations. Of course, we all know that etale cohomology is that marvelous tool. By analogy, what "black box" tools are necessary for the Langlands program? Broadly speaking, what tools do we need for the Langlands program?
This question is too broad in my opinion. I'm sure there are good papers giving an overview of the Langlands program. I know that Mitya Boyarchenko and David Ben Zvi have some stuff about geometric Langlands that might help you get up to speed. – Harry Gindi Feb 15 '10 at 21:37
– Harry Gindi Feb 15 '10 at 21:39
No, I mean, it's a really really huge program. I think that any satisfying answer to this question either doesn't exist or could take tens if not hundreds of pages. – Harry Gindi Feb 15 '10 at 21:55
If you check out either of the links I gave you, you'll see just how much stuff is actually being used. – Harry Gindi Feb 15 '10 at 21:56
Here is my vague, limited and non-geometric understanding. There is not a black box theory whose existence would prove the conjectures of Langlands as there was with the Weil conjectures. However, one general strategy is to use the Arthur-Selberg trace formula on two different reductive groups, match up the geometric sides of the formula (as much as possible), and then use the spectral sides to relate the automorphic forms of the groups. There are several technical difficulties in getting this to work in general, with a major problem having been the Fundamental Lemma, finally resolved by Ngo. – Zavosh Feb 15 '10 at 22:53
There are all sorts of problems with the Langlands conjectures that we (as far as I know) have no idea at all how to approach. As a very simple example of an issue for $GL(2)$ over $\mathbf{Q}$ that we cannot do, consider this: there should be a canonical bijection between continuous even (i.e. det(complex conj)=+1) irreducible 2-dimensional representations $Gal(\overline{\mathbf{Q}}/\mathbf{Q})\to GL(2,\mathbf{C})$ and normalised algebraic cuspidal Maass new eigenforms on the upper half plane. This is a sort of non-holomorphic analogue of the Deligne-Serre theorem which relates the odd irreducible Galois representations to holomorphic weight 1 newforms. One way of nailing this bijection is that given a Maass newform, then for all primes $p$ not dividing the level, the eigenvalue of $T_p$ (suitably normalised) should be the trace of the representation evaluated at the Frobenius element in the Galois group.
You want a black box which will solve all of Langlands---then you need a black box which will solve this. Unfortunately it seems to me that firstly you'll need several good new ideas to resolve even this simple case, and secondly there is more than one strategy and it's not clear what will work first. As examples of the problems one faces: given the Galois representation, that's just a lump of algebra---a finite amount of data. How is one going to construct a bunch of analysis from it?? One way might be via the theory of base change, which works a treat for cyclic extensions, and just enough has been developed in order to resolve the problem for Galois representations with solvable image (one uses a lot more than the statement that the group is solvable---one uses that it is also "small"---this is not just a formal consequence of cyclic base change). This is the Langlands-Tunnell theorem, which gives the Maass form from the Galois representation if it has solvable image. In the non-solvable case one can dream of non-solvable base change, but non-solvable base change is really nothing but a dream at this point. So there's one big black box but that will only resolve one direction of one small fragment of the Langlands conjectures.
Now what about the other way? Well here we're even more in the dark. Given an algebraic Maass form, we can't even prove that its Hecke eigenvalues are algebraic numbers, let alone the sum of two roots of unity. In the holomorphic modular form case we can get bases of the spaces of forms using e.g. coherent cohomology of the modular curve considered as an algebraic curve over $\mathbf{Q}$, or (in weights 2 or more) singular cohomology of a (typically non-trivial) local system on the curve. Both these machines produce $\mathbf{Q}$-vector spaces with Hecke actions, and hence char polys are in $\mathbf{Q}[x]$ and so eigenvalues are algebraic. But with algebraic Maass forms we have no such luxury. They are not cohomological, so we can't expect to see them in singular cohomology of a local system, and they are not holomorphic, so we can't expect to see them in coherent cohomology either. So we, vaguely speaking, need a black box which, given certain finite-dimensional complex vector spaces with Hecke actions, produces finite-dimensional $\mathbf{Q}$-vector spaces out of thin air, which when tensored up to the complexes give us back our groups. People have tried using base change to do this, or other known instances of functoriality, but everything so far has failed and it's not clear to me that one even has a conjectural approach for doing this direction. And I'm only talking about proving that the eigenvalues are algebraic---not even coming close to attaching the Galois representation!
So one vague black box "non-abelian base change", and one hard problem that as far as I know no-one has ideas about, and, if you put these together, you would solve one teeny tiny insy winsy little part of the Langlands programme. Makes the Weil conjectures look like a walk in the park!
"Makes the Weil conjectures look like a walk in the park!" - This was exactly my point in my comment. That's why it's called the Langlands program rather than the Langlands conjecture(s). – Harry Gindi Feb 16 '10 at 0:10
Hey! We're making progress. It used to be called the Langlands philosopy. [Oops, this was meant to be a comment on fpqc's comment.] – JS Milne Feb 16 '10 at 0:44
+1 Just because why not. Wasn't it called the "philosophy of cusp forms" even before that? – Harry Gindi Feb 16 '10 at 0:52
I thought that the reason it's called the Langlands Programme rather than the Langlands conjectures was that actually many of the statements are quite vague, or come in several forms, so it's difficult to say really what is conjectured and what is just a good motivating idea. For example transfer of automorphic reps via a morphism of L-groups should obey local Langlands everywhere, but local Langlands is a bit vague: "there should be a canonical bijection..." and there are issues of strong mult 1 and so on. The true force is in the most powerful statements but these are typically ill-defined. – Kevin Buzzard Feb 16 '10 at 9:03
[grr I want to make longer comments!]. For example the existence of the global Langlands group is a conjecture that, it seems to me, is almost unfalsifiable. Langlands makes some conjecture in Corvallis of the form "this set (iso classes of reps of GL_n(adeles) for all n at once) should have the structure of a Tannakian category in some natural way" for example. Is that really a conjecture or just a really good idea? – Kevin Buzzard Feb 16 '10 at 9:05
This answer deals with the classical Langlands program (if you like, the Langlands program for number fields).
There are (at least) two aspects to this program:
(a) functoriality: this is Langlands' original conjecture, explained in the letter to Weil, and further developed in "Problems in the theory of automorphic forms" and later writing. It is a conjecture purely about automorphic forms. Langlands has outlined an approach to proving it in general in his papers on the topic of "Beyond endoscopy" (available online at his collected works).
A proof of functoriality would imply, among other things, the non-solvable base-change discussed in Kevin's answer.
It seems that for the "beyond endoscopy" program to work as Langlands envisages it, one would need unknown (and seemingly out of reach) results in the analytic number theory of $L$-functions.
(b) reciprocity: this is the conjectured relationship between automorphic forms and Galois representations/motives. It has two steps: attaching Galois representations, or even motives, to (certain) automorphic forms, and, conversely, showing that all Galois representations of motives arise in this way. (This converse direction typically incorporates the Fontaine--Mazur conjecture as well, which posits a purely Galois-theoretic criterion for when a Galois representation should arise from a motive.)
If one is given the direction automorphic to Galois, then there are some techniques for deducing the converse direction, namely the Taylor--Wiles method. However this method is not a machine that automatically applies whenever one has the automorphic to Galois direction available; in particular, it doesn't seem to apply in any straightforward way to Galois representations/motives for which some $h^{p,q}$ is greater than 1 (in more Galois-theoretic terms, which have irregular Hodge--Tate weights). Thus in particular, even if one could attach Galois representations to (certain) Maass forms, one would still have the problem of proving that every even 2-dimensional Artin representation of $G_{\mathbb Q}$ arose in this way.
As to constructing Galois representations attached to automorphic forms, here the idea is to use Shimura varieties, and one can hope that, with the fundamental lemma now proved, one will be able to get a pretty comprehensive description of the Galois representations that appear in the cohomology of Shimura varieties. (Here one will also be able to take advantage of recent progress in the understanding of integral models of Shimura varieties, due to people like Harris and Taylor, Mantovan, Shin, Morel, and Kisin, in various different contexts.)
The overarching problem here is that, not only do not all automorphic forms contribute to cohomology (e.g. Maass forms, as discussed in Kevin's answer), but also, not all automorphic forms appear in any Shimura variety context at all. Since Shimura varieties are currently the only game in town for passing from automorphic forms to Galois representations, people are thinking a lot about how to move from any given context to a Shimura variety context, by applying functoriality (e.g. Taylor's construction of Galois reps. attached to certain cuspforms on $GL_2$ of a quadratic imaginary field), or trying to develop new ideas such as $p$-adic functoriality. While there are certainly ideas here, and one can hope for some progress, the questions seem to be hard, and there is no one black box that will solve everything.
In particular, one could imagine having functoriality as a black box, and asking if one can then derive reciprocity. (Think of the way that Langlands--Tunnell played a crucial role in the proof of modularity of elliptic curves.) Langlands has asked this on various occasions. The answer doesn't seem to be any kind of easy yes.
I happened to come across this paper yesterday, but haven't been able to read it because of the prohibitive price ("You may access this article for 1 day for US $12.00"). Ash, Avner; Gross, Robert. Generalized non-abelian reciprocity laws: a context for Wiles' proof. Bull. London Math. Soc. 32 (2000), no. 4, 385-397. – Chandan Singh Dalawat Feb 16 '10 at 4:02
If you google for the title, the second link gives you the pdf file. The authors expanded on this in their book "Fearless Symmetry", btw. – Franz Lemmermeyer Feb 16 '10 at 8:33
For me it was the first search result. Many thanks. – Chandan Singh Dalawat Feb 16 '10 at 14:11
I don't know a whole lot about the Langlands program, but if there is one tool that seems to come up a lot in geometric Langlands, it's perverse sheaves. You see a lot of singular algebraic varieties in geometric Langlands, and perverse sheaves are meant as a singular generalization of a vector bundle with a flat connection. Ordinary sheaves are already a singular generalization of vector bundles, but not the relevant one. Perverse sheaves (which are made from sheaves but are not sheaves themselves) are a more apropos generalization that incorporates, and in a sense just is, intersection (co)homology.
I can also say that I wasn't going to learn about perverse sheaves until I had to. However, I have now seen several important papers, in the related categorification program, that read this way: "Perverse sheaves + necessary restrictions = a good solution". So now I might be slowly getting used to them.
I can also see that even the formalism of perverse sheaves or intersection homology is sort of inevitable. In some of the simpler constructions, the varieties (over $\mathbb{C}$, say) are non-singular and certain answers arise as ordinary cohomology products or intersection products, for instance the Schubert calculus in a Grassmannian manifold. What choice do you have if the Grassmannian is replaced by a singular variety $X$? For some of these categorification/Langlands questions, you can either propose wrong answers, or ad hoc answers, or you can automatically get the right answer by using intersection homology on $X$ (with middle perversity, as they say).
Intersection cohomology (either l-adic or Betti or Hodge) also plays a central role in the cohomological study of non-compact Shimura varieties and the application of trace formula methods, see e.g. Zucker's conjecture or Sophie Morel's PhD thesis. But I don't really see why this is a feature of the Langlands program, rather than a by-product of the fact that many interestingly singular varieties pop up in compactifications of moduli problems. – Simon Pepin Lehalleur Aug 8 '10 at 19:49
https://socratic.org/questions/the-larger-of-two-numbers-is-23-less-than-twice-the-smaller-if-the-sum-of-the-tw | Algebra
# The larger of two numbers is 23 less than twice the smaller. If the sum of the two numbers is 70, how do you find the two numbers?
39, 31
#### Explanation:
Let $L$ & $S$ be the larger & smaller numbers respectively then
First condition:
$L = 2 S - 23$
$L - 2S = -23 \quad \ldots (1)$
Second condition:
$L + S = 70 \quad \ldots (2)$
Subtracting (1) from (2), we get
$L + S - \left(L - 2 S\right) = 70 - \left(- 23\right)$
$3 S = 93$
$S = 31$
Setting $S = 31$ in (1), we get
$L = 2 \left(31\right) - 23 = 39$
Hence, the larger number is $39$ & the smaller number is $31$
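If you want to check this mechanically, a minimal sympy sketch confirms it:

```python
import sympy as sp

L, S = sp.symbols('L S')
# L is 23 less than twice S; the two numbers sum to 70
solution = sp.solve([sp.Eq(L, 2*S - 23), sp.Eq(L + S, 70)], [L, S])
print(solution)  # {L: 39, S: 31}
```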
https://krntneja.github.io/posts/2018/attention-based-models-2 | # Tutorial on Attention-based Models (Part 2)
Published:
In part one of this series, I introduced the fundamentals of sequence-to-sequence models and attention-based models. I briefly mentioned two sequence-to-sequence models that don't use attention and then introduced soft-alignment based models. In this post, I'm going to discuss various monotonic attention mechanisms.
### 3.2. Monotonic Alignments
Monotonic alignments are motivated by the limitations of soft alignments, i.e. quadratic-time complexity and no option for online decoding. The mechanisms discussed here still require quadratic-time training but achieve linear-time decoding, and they support online decoding as well.
3.2.1. Hard Monotonic Mechanism [paper]
Hard monotonic alignments attend to exactly one memory vector $h_j$ at output time step $i$, i.e. $c_i = h_j$, unlike soft alignments, which use an expectation over the complete memory to calculate the context vector.
If the model attended to input time step $t_{i-1}$ at output time step $i-1$, we calculate the energy of $h_j$ for $j \in \{t_{i-1}, t_{i-1}+1, \ldots, T\}$, i.e. starting from the memory vector previously used. We pass each energy through the logistic sigmoid function to produce a 'selection probability' $p_{i,j}$ and sample $z_{i,j}$ from $\mathrm{Bernoulli}(p_{i,j})$.
$$e_{i,j} = \mathrm{MonotonicEnergy}(s_{i-1},h_j)$$
$$p_{i,j} = \sigma(e_{i,j})$$
$$z_{i,j} \sim \mathrm{Bernoulli}(p_{i,j})$$
As soon as we see $z_{i, t_i} = 1$, we stop and use $h_{t_i}$ as our context vector and repeat the above process starting from input time step $t_i$ at output time step $i$. If $z_{i, j} = 0~\forall~j\in\{t_{i-1}, t_{i-1}+1, \ldots, T\}$, we set $c_i = \mathbf{0}$.
Here is an example of an output time step of the hard monotonic mechanism. Here $t_{i-1} = 3$ is the index of the memory vector selected at the previous step. The model, therefore, starts from $h_3$ and moves forward until it finds $z_{i,7}=1$. Hence, $h_7$ is selected as $c_i$ and we say that $t_i=7$.
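Here is a minimal numpy sketch of one decoding step as just described (an illustration, not the authors' code; `energy_fn` stands in for whatever MonotonicEnergy implementation is used):

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_monotonic_step(s_prev, memory, t_prev, energy_fn):
    """One output step of hard monotonic attention at inference time.

    s_prev: previous decoder state; memory: (T, d) array of encoder states;
    t_prev: memory index attended at the previous output step.
    """
    T = memory.shape[0]
    for j in range(t_prev, T):
        p_ij = 1.0 / (1.0 + np.exp(-energy_fn(s_prev, memory[j])))  # sigmoid
        z_ij = rng.binomial(1, p_ij)        # z_{i,j} ~ Bernoulli(p_{i,j})
        if z_ij == 1:
            return memory[j], j             # c_i = h_j and t_i = j
    return np.zeros_like(memory[0]), T      # no z_{i,j} = 1: c_i = 0
```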
Note that the above model only needs $h_k$, $k\in\{1, 2, \ldots, j\}$, to compute $h_j$. If we use a unidirectional RNN as the encoder, we can perform online decoding, where the time complexity will be $\mathcal{O}(\max\{T, U\})$. Also note that, because of the sampling, we cannot train this model using back-propagation. We therefore use the expectation of $h_j$ during training (inspired by soft alignments) and try to induce discreteness into $p_{i,j}$ so that we can later decode monotonically. The quantity $\alpha_{i,j}$ is the probability that input time step $j$ is attended at output time step $i$. We go through a simple, worded derivation below for the calculation of $\alpha_{i,j}$. (You can skip it on a first reading.)
$$\alpha_{i, j} = \mathbb{P}_i(h_j\text{ used}) = \mathbb{P}_i(h_j\text{ used}\mid h_j\text{ checked})\,\mathbb{P}_i(h_j\text{ checked})$$
$$\mathbb{P}_i(h_j\text{ used}\mid h_j\text{ checked}) = p_{i,j}$$
$$\mathbb{P}_i(h_j\text{ checked}) = \mathbb{P}_i(h_{j-1}\text{ checked}, h_{j-1}\text{ not used}) + \mathbb{P}_{i-1}(h_j\text{ used})$$
Two possible cases are depicted when $h_j$ is a candidate for context vector. First case on left: $h_j$ is candidate because $h_{j-1}$ was rejected i.e. $z_{i,j-1}=0$. Second case on right: model starts from $h_j$ itself as it was the last used context vector i.e. $c_{i-1}=h_j$ or $t_{i-1} = j$.
The last relation can be reasoned out by noting that $h_j$ will be a candidate for the context vector either when $h_{j-1}$ is rejected or when $h_j$ itself was selected as the context vector at the previous output time step $i-1$. Using the above relations for memory vector $h_{j-1}$, we get the relations
$$\mathbb{P}_i(h_{j-1}\text{ checked}) = \frac{\alpha_{i,j-1}}{p_{i,j-1}}, \qquad \mathbb{P}_i(h_{j-1}\text{ not used}\mid h_{j-1}\text{ checked}) = 1-p_{i,j-1}$$
Finally putting all relations together, we get
$$\alpha_{i,j} = p_{i,j} \left((1-p_{i,j-1})\frac{\alpha_{i,j-1}}{p_{i,j-1}} + \alpha_{i-1,j}\right)$$
which can also be written as follows, allowing us to compute it in parallel in terms of cumulative products and cumulative sums.
$$q_{i,j} = \frac{\alpha_{i,j}}{p_{i,j}} = (1-p_{i,j-1})q_{i,j-1} + \alpha_{i-1,j}$$
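In numpy, this parallel form might look as follows (a sketch; the function name is my own, and the `clip` guards against numerical underflow of the cumulative product):

```python
import numpy as np

def monotonic_alpha(p_i, alpha_prev, eps=1e-10):
    """alpha_{i,:} from p_{i,:} and alpha_{i-1,:} via the q recursion.

    Implements q_{i,j} = (1 - p_{i,j-1}) q_{i,j-1} + alpha_{i-1,j}
    in parallel, then alpha_{i,j} = p_{i,j} q_{i,j}.
    """
    # cp_j = prod_{l < j} (1 - p_{i,l}), with an implicit leading 1
    cp = np.cumprod(np.concatenate(([1.0], 1.0 - p_i[:-1])))
    q = cp * np.cumsum(alpha_prev / np.clip(cp, eps, None))
    return p_i * q
```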
As a trick to promote discreteness, zero-mean, unit-variance Gaussian noise is added to the input of the logistic sigmoid. This forces the model to learn to produce $p_{i,j}$ close to zero or one, effectively making it binary.
The energy function used for hard monotonic alignments is as follows:
$$\text{MonotonicEnergy}(s_{i-1},h_j) = g\frac{v^T}{\lVert v \rVert}\tanh(W_s s_{i-1}+W_h h_j+b)+r$$
It is very similar to the Bahdanau energy function, but a little different because the sigmoid applied to the monotonic energy is not shift-invariant like the softmax function. Therefore, to give more control over the values of the energy, the vector $v$ in the Bahdanau energy is replaced by the normalized vector $\frac{v}{\lVert v \rVert}$, which is then scaled by a scalar $g$ and offset by a scalar $r$. As you might guess, $g$ and $r$ are also learned parameters.
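As a sketch, the energy function could be implemented like this (all weight names are illustrative):

```python
import numpy as np

def monotonic_energy(s, h, W_s, W_h, b, v, g, r):
    """MonotonicEnergy(s_{i-1}, h_j): Bahdanau-style energy with a
    normalized v, an overall scale g and an offset r (all learned)."""
    v_hat = v / np.linalg.norm(v)
    return g * (v_hat @ np.tanh(W_s @ s + W_h @ h + b)) + r
```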
3.2.2. Monotonic Chunkwise Mechanism [paper]
Hard monotonic alignments as just described are just too hard in their conditions! Using only one vector $h_{t_i}$ as the context vector $c_i$ is too strong a constraint, and this is reflected in poor performance on some tasks. A novel solution to this problem is the Monotonic Chunkwise Mechanism.
We take a middle path between soft alignments and hard monotonic alignments by allowing the model to use soft attention over fixed-size chunks (say, of size $w$) of memory ending at input time step $t_i$ for each output time step $i$. The model therefore uses a context vector derived from the memory elements $\{h_{v}, h_{v+1}, \ldots, h_{t_i}\}$, where $v = t_i-w+1$. The memory index $t_i$ is derived in the same way as in the hard monotonic mechanism. The energy of each memory element is given by the following equation.
$$u_{i,k} = \text{ChunkEnergy}(s_{i-1}, h_k) = v^T \tanh(W_{s}s_{i-1} + W_{h}h_k + b)$$
A diagram showing the flow of the monotonic chunkwise mechanism. Notice that a soft alignment over a chunk of size $w=4$ is applied in addition to monotonic attention. First, using the mechanism explained in an earlier figure, $h_7$ is selected. Then a soft alignment is used over a chunk ending at $h_7$.
The context vector is given by a weighted sum of $w$ memory elements ending at $t_i$. This is exactly applying soft-alignment over small chunks!
$$c_i = \sum_{k=v}^{t_i}\frac{\exp(u_{i,k})}{\sum_{l=v}^{t_i}\exp(u_{i,l})}h_k$$
Similar to the training in the hard monotonic mechanism, we need to take the expected value of the context vector under the induced probability distribution. We use the $\alpha_{i,j}$ derived for the hard monotonic mechanism. Given below is (another!) worded derivation of $\beta_{i,j}$, which is the probability of using $h_j$ in the context vector for output time step $i$.
$$\beta_{i,j}=\mathbb{P}_i(h_j\text{ used}) = \sum_{k=j}^{j+w-1}\mathbb{P}_i(h_j\text{ used}\mid t_i = k)\,\mathbb{P}(t_i = k)$$
$$\mathbb{P}_i(h_j\text{ used}\mid t_i = k) = \frac{\exp(u_{i,j})}{\sum_{l=k-w+1}^{k}\exp(u_{i,l})}$$
$$\mathbb{P}(t_i = k) = \alpha_{i,k}$$
$$\beta_{i,j}=\sum_{k=j}^{j+w-1} \frac{\exp(u_{i,j})}{\sum_{l=k-w+1}^{k}\exp(u_{i,l})}\,\alpha_{i,k}$$
The equation derived for $\beta_{i,j}$ can be parallelized using moving sums, so the computation is very efficient. Note that the number of parameters (and therefore the amount of computation) has increased, since the monotonic chunkwise mechanism uses both $\text{MonotonicEnergy}(s_{i-1}, h_j)$ and $\text{ChunkEnergy}(s_{i-1}, h_j)$. This increase is very marginal, about $1\%$, but the performance of the model increases significantly, reaching almost par with soft alignments on some tasks.
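For concreteness, here is a plain-loop reference version of the $\beta_{i,j}$ computation (a sketch; a real implementation would vectorize the inner sums with moving sums as in the paper):

```python
import numpy as np

def chunkwise_beta(u_i, alpha_i, w, eps=1e-10):
    """beta_{i,j} = sum_{k=j}^{j+w-1} alpha_{i,k} * exp(u_{i,j}) / Z_{i,k},
    where Z_{i,k} sums exp(u) over the chunk of size w ending at k."""
    T = len(u_i)
    e = np.exp(u_i - u_i.max())   # stabilized; the ratios are unchanged
    Z = np.array([e[max(0, k - w + 1): k + 1].sum() for k in range(T)])
    beta = np.zeros(T)
    for j in range(T):
        for k in range(j, min(j + w, T)):   # chunks whose window covers j
            beta[j] += alpha_i[k] * e[j] / max(Z[k], eps)
    return beta
```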
3.2.3. Limitations of Monotonic Alignments
There are two major limitations of the hard monotonic mechanism. The first, which we discussed as motivation for the chunkwise mechanism, is that there is not enough context in the context vector, since we force the model to capture complex dependencies using only a single memory vector. This concern is largely resolved by using chunks of memory summed under a soft distribution.
The second limitation is the assumption of strict monotonicity in input-output alignments. For example, in a translation task, we can expect degraded performance on language pairs with different sentence structure, i.e. a different order of subject, verb and object, though the assumption is nearly valid for structurally similar languages.
We note here that soft alignments, though computationally expensive and unsuited for online decoding, are robust to input-output alignment relations and use a much wider context for producing outputs as compared to monotonic alignments.
## References
Online and Linear-Time Attention by Enforcing Monotonic Alignments
Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas Eck
Proceedings of the 34th International Conference on Machine Learning, 2017
Monotonic Chunkwise Attention
Chung-Cheng Chiu, Colin Raffel
International Conference on Learning Representations, 2018
Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Sequence to Sequence Learning with Neural Networks
Ilya Sutskever, Oriol Vinyals, Quoc V. Le
Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS 2014)
Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks
Alex Graves, Santiago Fernández, Faustino Gomez, Jürgen Schmidhuber
Proceedings of the 23rd International Machine Learning Conference, 2006
Neural Machine Translation by Jointly Learning to Align and Translate
Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
International Conference on Learning Representations, 2015
Effective Approaches to Attention-based Neural Machine Translation
Minh-Thang Luong, Hieu Pham, Christopher D. Manning
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
Structured Attention Networks
Yoon Kim, Carl Denton, Luong Hoang, Alexander M. Rush
5th International Conference on Learning Representations, 2017
Listen, Attend and Spell
William Chan, Navdeep Jaitly, Quoc V. Le, Oriol Vinyals
2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Attention-Based Models for Speech Recognition
Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, Yoshua Bengio
Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS 2015)
Attention Is All You Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
31st Conference on Neural Information Processing Systems (NIPS 2017)
State-of-the-art Speech Recognition With Sequence-to-Sequence Models
Chung-Cheng Chiu, Tara N. Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J. Weiss, Kanishka Rao, Ekaterina Gonina, Navdeep Jaitly, Bo Li, Jan Chorowski, Michiel Bacchiani
arXiv:1712.01769
https://www.physicsforums.com/threads/solubility-problem.263495/ | # Solubility Problem
1. Oct 11, 2008
### cncbmb
1. The Problem
The Following Steps Occur in Order:
a. Aqueous silver nitrate is added to a sodium bromide solution to form a white precipitate.
b. Aqueous ammonia is added to the above. The contents of the container change color slightly and there is still a precipitate.
c. After step b, sodium thiosulfate is added and all of the precipitate disappears.
Part 1: Explain why the precipitate disappears in step c.
Part 2: Find the reaction that occurs in part c.
2. Relevant equations
none
3. Attempt to Solve the Problem
After step a, we have silver bromide, which is the initial precipitate.
After step b, I thought that we had $Ag(NH_3)_2^{+}$ and some bromide and nitrate anions, so I predicted that there wouldn't be a precipitate. My prediction was wrong and I realized that I made an error in tracing the reactions.
I. Why is there still a precipitate at the end of part b?
II. Why does the addition of sodium thiosulfate make the precipitates disappear?
Last edited: Oct 12, 2008
2. Oct 12, 2008
### Staff: Mentor
It all depends on the equilibrium between precipitate and the complexing agent. Ammonia complex is not stable enough to dissolve AgBr (although it is stable enough to dissolve AgCl).
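To put rough numbers on this (a sketch in Python; the equilibrium constants below are approximate textbook values at 25 °C, so treat them as illustrative):

```python
# Overall constant for AgX(s) + ligand -> complex + X- is K = Ksp * Kf.
Ksp = {"AgCl": 1.8e-10, "AgBr": 5.4e-13}            # solubility products
Kf = {"Ag(NH3)2+": 1.1e7, "Ag(S2O3)2^3-": 2.9e13}   # formation constants

for salt in Ksp:
    for complex_ion in Kf:
        K = Ksp[salt] * Kf[complex_ion]
        print(f"{salt} -> {complex_ion}: K = {K:.2e}")
```

So AgBr + ammonia has an overall K of order $10^{-6}$ (the precipitate survives step b), while AgBr + thiosulfate has K of order $10$ (the precipitate dissolves in step c).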
https://www.physicsforums.com/threads/a-question-about-operators.147267/ | 1. Dec 8, 2006
Consider the Kelin-gordon equation (m=0) with a potential, so:
$$(-\frac{\partial ^{2}}{\partial t^{2}}+V(x) )\Phi=0$$
my question is, if you consider the wave function above as an operator... is the K-G Green function of the form:
$$\langle 0|T(\Phi(x)\Phi(x'))|0\rangle$$ T = time-ordered
I think that in both cases we use the same wave function, but in one case it is a scalar (or a spinor for electrons) and in the other it is an operator... :shy: :shy:
2. Dec 9, 2006
### dextercioby
1.I don't know who Kelin was. Maybe you could supply some reference.
2. Your equation misses a Laplacian.
3. You depicted the Feynman Green function, which is a Green function for the operator written with a Laplacian.
All of course, if you mean "Klein-Gordon"
Daniel.
3. Dec 9, 2006
I apologize, "dextercioby"... I missed the keyboard... yes, I was referring to the Klein-Gordon equation with rest mass m=0, so:
$$(-\frac{\partial ^{2}}{\partial t^{2}}+\nabla^2 +V(x))\Phi=0$$
then if you define the Green function by $$G(x,x')=\langle 0|T(\Phi(x)\Phi(x'))|0\rangle$$
then my question was whether the "Phi" wave function appearing in both G and the K-G equation is the same, but in one case an operator and in the other a scalar, with T = time-ordered product.
- By the way, I looked at the paper by Schwinger... taking the Dirac equation with electromagnetism:
$$(i\gamma^{\mu}\partial_{\mu}-e\gamma^{\mu}A_{\mu}+m)\Psi =0$$
he got the Green function (I don't know how he did it... :grumpy: ), and he got the functional equation:
$$\left(\partial_{\mu}-eA_{\mu}+m+\frac{\delta}{\delta J_{\mu}}\right)G(x,x')=\delta(x-x')$$
4. Dec 9, 2006
### dextercioby
In the field eqn, the $\varphi (x)$ is not a wavefunction, it is a classical field.
In the VEV of the time-ordered product, it is an operator acting on a Fock space. It still keeps the scalar behavior wrt restricted Poincaré transformations.
As for the second part of your post, please supply the reference to Schwinger's paper.
Daniel.
5. Dec 9, 2006
A brief summary can be found at:
http://www.pnas.org/cgi/content/full/102/22/7783
with the Dirac equation + magnetic field + scalar potential V(x) and the functional approach to the Green function involving functional derivatives.
https://www.math.uni-bielefeld.de/~beyn/AG_Numerik/html/de/preprints/sfb_11_31.html | # Preprint des Projektes: SFB 701: Spektrale Strukturen und Topologische Methoden in der Mathematik - Projekt B3
## Numerische Analyse äquivarianter Evolutionsgleichungen
11-031 Raphael Kruse.
Optimal error estimates of Galerkin finite element methods for stochastic partial differential equations with multiplicative noise
We consider Galerkin finite element methods for semilinear stochastic partial differential equations (SPDEs) with multiplicative noise and Lipschitz continuous nonlinearities. We analyze the strong error of convergence for spatially semidiscrete approximations as well as a spatio-temporal discretization which is based on a linear implicit Euler-Maruyama method. In both cases we obtain optimal error estimates. The proofs are based on sharp integral versions of well-known error estimates for the corresponding deterministic linear homogeneous equation together with optimal regularity results for the mild solution of the SPDE. The results hold for different Galerkin methods such as the standard finite element method or spectral Galerkin approximations.
https://www.physicsforums.com/threads/solution-of-equation-for-decaying-real-scalar-field.812578/ | # Solution of equation for decaying real scalar field
1. May 6, 2015
### karlzr
Suppose there is a real scalar field $\phi$ with some decay width $\Gamma$ to some fermion. The quantum equation of motion after one-loop correction takes the form
$\ddot{\phi}+(m^2+im\Gamma)\phi=0$
where $m$ is the renormalized mass.
The solution can be obtained as $\phi=\phi_0 e^{imt}e^{-\Gamma t/2}$. So how do we use this complex solution, given that the solution we need for applications must be real? Usually we can take the real part when we have a complex solution. But in this case the real part of this solution does not solve the full quantum equation of motion, due to the imaginary component.
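As a quick numerical aside (a sketch), one can check how good the quoted solution is by plugging it back into the equation; for $\Gamma \ll m$ the residual should be of order $\Gamma^2/4$:

```python
import numpy as np

m, Gamma = 1.0, 0.01          # illustrative values with Gamma << m
t = np.linspace(0.0, 50.0, 2001)
phi = np.exp(1j * m * t - 0.5 * Gamma * t)       # phi_0 = 1
phi_ddot = (1j * m - 0.5 * Gamma) ** 2 * phi     # exact second derivative
residual = phi_ddot + (m**2 + 1j * m * Gamma) * phi
print(np.abs(residual).max())   # ~ Gamma^2 / 4 = 2.5e-5
```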
2. May 8, 2015
### Brage
Surely the equation $\ddot{\phi}+(m^2+im\Gamma)\phi=0$ has the solution
$$\phi=Ae^{i\omega t}+Be^{-i\omega t}, \qquad \omega=\sqrt{m^2+im\Gamma},$$ where A and B are complex constants to be determined from initial and boundary conditions. Unless I've missed something fundamental here?
3. May 8, 2015
### karlzr
But $\phi$ starts as a real field, how can it have a complex solution?
In quantum mechanics, an imaginary part in Hamiltonian of some system indicates decaying amplitude. But isn't the total Hamiltonian of the whole system always real in order to obey unitarity? I don't know how to draw an analogy in this case.
4. May 8, 2015
### bhobba
Some solutions to equations are not necessarily physically sensible, e.g. the requirement for the solution to be real will put constraints on A and B if you expand things out.
Thanks
Bill
Last edited: May 8, 2015
5. May 8, 2015
### karlzr
When $\Gamma << m$, the full solution is $\phi(t)= A e^{imt-\Gamma t/2}+B e^{-imt+\Gamma t/2}$. It doesn't seem to be possible to make it real by constraining the two constant coefficients $A$ and $B$.
Actually, this question is from Phys.Lett. B117 (1982) 29
6. May 8, 2015
### bhobba
Well that simply means it has no real solution. So?
Thanks
Bill
7. May 8, 2015
### karlzr
I need to find the evolution of its energy density $\rho=\dot{\phi}^2/2+m^2\phi^2/2$. With a complex solution for $\phi$, I get a complex energy density, which doesn't make sense.
8. May 8, 2015
### bhobba
Maybe the answer is to apply Noethers theorem to get the stress energy tensor:
http://www.itp.phys.ethz.ch/research/qftstrings/archive/12HSQFT1/Chapter04.pdf [Broken]
Notice the Lagrangian in 4.1 while containing a complex field is real.
Thanks
Bill
Last edited by a moderator: May 7, 2017
9. May 8, 2015
### Brage
Well the real part of $\phi(t)= A e^{imt-\Gamma t/2}+B e^{-imt+\Gamma t/2}$ would be $$\mathrm{Re}(\phi(t))=[\mathrm{Re}(A)\cos(mt)-\mathrm{Im}(A)\sin(mt)]e^{-\Gamma t/2}+[\mathrm{Re}(B)\cos(mt)+\mathrm{Im}(B)\sin(mt)]e^{\Gamma t/2}$$ just by use of Euler's formula and taking the resulting real part.
10. May 8, 2015
### karlzr
But the real part doesn't solve the equation.
11. May 8, 2015
### bhobba
It has no real solution.
If you want to find the Hamiltonian use the methods in the link I gave and apply Noether. I am pretty sure a simple modification of the Lagrangian in 4.1 of my link will give your equation.
The problem as you stated it at the start is inconsistent - there is no real scalar field that is the solution to that equation. I am pretty sure its Lagrangian contains a complex field, so it's obvious why that's so. However when you work out the Hamiltonian it will give real values of energy - as it must.
Thanks
Bill
12. May 8, 2015
### karlzr
My equation is indeed from the quantum action. A fermion loop ($m_f<m_\phi/2$) will contribute a complex term to the effective potential, which means the free scalar particle is not an eigenstate of the Hamiltonian. It will decay. I believe the Hamiltonian is also complex, just like in quantum mechanics.
When I say the scalar field is real, I mean there is only one degree of freedom. I don't know whether it is inconsistent for such a real scalar field to have a complex solution. But my problem is to find the evolution of energy density from my solution to the equation and I expect it to be real. Actually I expect the solution to oscillate as in free field theory and also the energy density to decrease since the decay channel is open in our case.
13. May 8, 2015
### bhobba
A complex field is simply a tricky way of elegantly writing the equations of two real fields. You can't have a single real field that is complex. It makes no sense.
What I think you should do is post the full details of what you are trying to do. We have some quite high-powered theorists that post here (I am not one - my knowledge of QFT is not as good as I would like) and hopefully they can sort it out.
Thanks
Bill
14. May 9, 2015
### karlzr
Thanks.
This question is from Phys.Lett. B117 (1982) 29
This is a 5-page paper and my question is from page two. The scalar is the inflaton, which reheats the universe at the end of inflation by perturbative decay. We can assume the potential energy of the inflaton has only a quadratic term. That's pretty much all the background of my question.
15. May 9, 2015
### bhobba
16. May 9, 2015
### fzero
If we look back at the equation of motion, we've replaced the mass by a complex number. As you've found, the solutions have a complex energy and are no longer real. Sometimes in the literature, these solutions are called Gamow states. The rationalization is that these states represent a resonance that decays after a short time. Usually, the solution of an equation of motion represents an equilibrium configuration (stationary state), but we've cooked up our system in such a way that it does not have equilibrium solutions but quasistationary ones.
As you can see from the paper in question, there is a physical interpretation of the solutions that has a predictive value for certain problems so the utility of the method is evident. But you will not be able to demand that you find a real solution to an equation that has been analytically continued to complex parameters.
17. May 10, 2015
### karlzr
Thanks. Since we are on this paper, I have a related question.
I am a little confused about how energy goes from the inflaton to fermions. So during reheating, this paper says we can find the amplitude of fermion production from $<f_i,\bar{f}_i|\phi \bar{\psi}_i\psi_i|0>$. Here $\phi$ is a purely classical field which should be replaced by the complex solution of its equation, right? It seems this amplitude depends only on the value of $\phi$ and has nothing to do with the energy density, which doesn't make sense to me. Also, in my opinion, a classical field $\phi$ in this case serves only to give an oscillating mass to the fermion (preheating, non-perturbative decay), and I don't know why we can get information about its perturbative decay. Is it the inflaton field or an inflaton particle that decays to fermions?
18. May 10, 2015
### fzero
As you say, we have a classical value of $\phi$ so we are talking about the field and not a particle state. The physical situation is that we start with the scalar somewhere away from the minimum of its potential, as it would be at high temperature. This is modeled by putting in a source term that is suddenly turned off at $t=0$. The field will naturally settle into a minimum of the potential, but the energy that was stored in the field has to go somewhere. The paper considers the case where all of this energy goes to fermions via the Yukawa-type coupling.
Given an amplitude $\mathcal{A}_{fi}$ from some initial state $i$ to a final state $f$, Fermi's golden rule:
$$\Gamma_{i\rightarrow f} \sim | \mathcal{A}_{fi} |^2$$
gives the probability per unit time to create the final state $f$. I think you can convert this into an energy released per unit time by considering the expression for the Hamiltonian evaluated on the classical solution for $\psi$. When you integrate that over time, you will find the energy density of the fermions.
Last edited: May 10, 2015
https://groupprops.subwiki.org/wiki/Divisible_nilpotent_group | Divisible nilpotent group
This page describes a group property obtained as a conjunction (AND) of two (or more) more fundamental group properties: divisible group and nilpotent group
Definition
A group $G$ is termed a divisible nilpotent group if it satisfies the following equivalent conditions:
1. $G$ is a divisible group.
2. The abelianization of $G$ is a divisible abelian group.
3. For every positive integer $i$, the quotient group $\gamma_i(G)/\gamma_{i+1}(G)$ of successive members of the lower central series is a divisible abelian group.
4. For any two positive integers $i < j$, if $\gamma_i(G),\gamma_j(G)$ denote respectively the $i^{th}$ and $j^{th}$ members of the lower central series of $G$, then the quotient group $\gamma_i(G)/\gamma_j(G)$ is a divisible group.
http://physics.stackexchange.com/questions/60072/is-the-diffusion-coefficient-for-a-macromolecule-sensitive-to-mass | Is the diffusion coefficient for a macromolecule sensitive to mass?
Suppose I have two neutrally-buoyant macromolecules diffusing in water. They have the same radius of gyration (i.e. same root-mean-square distance from their center of mass), but one of them is compact (its mass is roughly the cube of its size) and the other is extended (its mass is roughly the square of its size).
Since these molecules are the same size, do they have roughly the same diffusion coefficient?
Alternatively, their root-mean-square velocities should be different since they have different mass. Does this lead to substantially different diffusion coefficients?
The mass plays a role in the relaxation time needed to go from the ballistic regime of the Langevin equation to the overdamped regime where only diffusion matters.
The bigger the mass, the higher the inertia and therefore the longer the time it takes to reach the overdamped regime.
Once the overdamped regime is reached or, to phrase it differently, if your time window allows you to only see the overdamped regime in both cases then you will see no difference between the two.
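To put a number on that crossover time (a back-of-the-envelope sketch assuming Stokes drag in water and an illustrative 5 nm radius):

```python
import numpy as np

# Momentum relaxation time tau = m / gamma, with Stokes drag gamma = 6*pi*eta*R.
eta = 1.0e-3            # Pa*s, viscosity of water
R = 5.0e-9              # m, hydrodynamic radius (illustrative)
rho = 1.0e3             # kg/m^3, neutrally buoyant => density of water

gamma = 6.0 * np.pi * eta * R
m_compact = (4.0 / 3.0) * np.pi * R**3 * rho   # compact case: mass ~ size^3
tau = m_compact / gamma
print(f"tau ~ {tau:.1e} s")   # ~ 6e-12 s
```

The extended molecule has even less mass at the same size, so it relaxes faster still. Either way the crossover is picoseconds, far below any realistic observation window, so both molecules look purely diffusive.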
This is what I thought, thanks. – Mark Eichenlaub Apr 5 '13 at 18:07
http://math.stackexchange.com/questions/399368/how-to-prove-the-inequality-between-mathematical-expectations/414701 | # How to prove the inequality between mathematical expectations?
Let $X$ and $Y$ be independent random variables having the same distribution and the finite mathematical expectation. How to prove the inequality $$E(|X-Y|) \le E(|X+Y|)?$$
Sorry, I missed the moduli. It has been fixed. – user64494 May 22 '13 at 16:38
What do you mean by "same distribution"? are they IID? – Calvin Lin May 24 '13 at 20:04
@ Calvin Lin : It means that their distribution functions are identical. Could you explain what you mean by IID? – user64494 Jun 3 '13 at 4:19
@user64494 IID is shorthand for “independent, identically distributed” – Ewan Delanoy Jun 6 '13 at 7:55
@user64494: could you tell us where you have found this inequality? – Siméon Oct 2 '13 at 11:01
After a little inspection, we see that $$E(|X+Y|-|X-Y|) = 2E[Z(1_{XY\geq 0}-1_{XY<0})]$$ where $Z = \min(|X|,|Y|)$.
Remember that for any non-negative random variable $T$, $$E(T) = \int_0^\infty P(T>t)\,dt.$$
We apply this with $T=Z\,1_{X \geq 0, Y\geq 0}$, $T=Z\,1_{X < 0, Y< 0}$ and $T=Z\,1_{X \geq 0, Y< 0}$. Since $\{Z > t\} = \{|X| > t\}\cap\{|Y| > t\}$, we obtain
$$E(Z \,1_{X \geq 0,Y\geq 0}) = \int_0^\infty P(X > t)P(Y > t)\,dt = \int_0^\infty P(X > t)^2\,dt$$
$$E(Z\, 1_{X < 0, Y < 0}) = \int_0^\infty P(X < -t)P(Y < - t)\,dt = \int_0^\infty P(X < -t)^2\,dt$$
$$E(Z\,1_{X \geq 0, Y< 0}) = E(Z\,1_{\{X < 0, Y \geq 0\}}) = \int_0^\infty P(X > t)P(X < -t)\,dt$$
So finally, $$E(|X+Y|-|X-Y|) = 2\int_0^\infty (P(X>t)-P(X<-t))^2\,dt \geq 0$$
Remark 1. The inequality is an equality if and only if the distribution of $X$ is symmetric, that is $P(X > t) = P(X < -t)$ for any $t \geq 0$.
Remark 2. When $|X|=1$ a.s. the inequality is nothing but the semi-trivial fact that if $X$ and $Y$ are independent with same distribution, then $P(XY \geq 0) \geq \dfrac{1}{2}$.
Remark 3. It is worthwhile to mention a nice corollary: $E(|X+Y|) \geq E(|X|)$. The function $x \mapsto |x|$ is convex, hence $|X| \leq \frac{1}{2}(|X+Y|+|X-Y|)$. Taking expectations we find $$\Bbb E(|X+Y|-|X|) \geq \frac{1}{2}\Bbb E(|X+Y|-|X-Y|) \geq 0.$$ Furthermore, there is an equality if and only if $X=0$ a.s.
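These identities are easy to sanity-check numerically. A small Monte Carlo sketch (my own illustration in Python/NumPy, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6

def compare(sampler, label):
    x, y = sampler(n), sampler(n)   # X, Y independent with the same distribution
    print(label, np.mean(np.abs(x - y)), "<=", np.mean(np.abs(x + y)))

compare(rng.standard_normal, "N(0,1), symmetric:")
compare(lambda k: rng.exponential(1.0, k), "Exp(1), asymmetric:")
```

For the symmetric $N(0,1)$ case both means should be close to $2/\sqrt{\pi} \approx 1.128$, as predicted by Remark 1; for the asymmetric $\mathrm{Exp}(1)$ case the means should be close to $1$ and $2$ respectively, so the inequality is strict.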
-
Nice! I'm perfectly satisfied with this answer, I’ll wait before awarding the bounty to it, just in case someone comes up with an even better proof. – Ewan Delanoy Jun 8 '13 at 15:01
I am ok with that. Notice that I added some remarks in my answer. – Siméon Jun 9 '13 at 11:06
It's really nice! +1. – 23rd Jun 9 '13 at 11:19
Edit: Question has changed. Will give answer when time permits.
By the linearity of expectation, the inequality $E(X-Y)\le E(X+Y)$ is equivalent to $-E(Y)\le E(Y)$, which in general is false. It is true precisely if $E(Y)\ge 0$.
Independence is not needed for the argument. Neither is the hypothesis that the random variables have the same distribution.
-
Sorry, I missed the moduli. It has been fixed. – user64494 May 22 '13 at 16:03
Below is a set of remarks that’s too long to be put in a comment.
Conjecture. The inequality becomes an equality iff $-X$ has the same distribution as $X$.
Remark 1. The “if” part of the conjecture is easy : if $X$ and $-X$ have the same distribution, then by the independence hypothesis $(X,Y)$ and $(X,-Y)$ have the same joint distribution, therefore $|X+Y|$ and $|X-Y|$ share the same distribution, so they will certainly share the same expectation.
Remark 2. Let $\phi_n(t)=t$ if $|t| \leq n$ and $0$ otherwise. If the inequality holds for any $(\phi_n(X),\phi_n(Y))$ for any $n$, then it will hold for $(X,Y)$ also, by a dominated convergence argument. So we may assume without loss of generality that the support of $X$ is bounded.
-
Thank you. It is helpful. It would be interesting to answer the question in the partial case of absolutely continuous distributions. – user64494 May 31 '13 at 6:52
Let's consider the question of when $E[f(X,Y)] \geq 0$ in the generality of real-valued functions of arbitrary i.i.d. random variables on probability spaces. With no loss of generality take $f$ to be symmetric in $X$ and $Y$, because $E[f]$ is the same as $E$ of the symmetrization of $f$.
There is a simple, and greatly clarifying, reduction to the case of random variables with at most two values. The general case is a mixture of such distributions, by representing the selection of $(X,Y)$ as first choosing an unordered pair according to the induced distribution on those, and then the ordered pair conditional on the unordered one (the conditional distribution is the $1$ or $2$-valued distribution, and the weights in the mixture are the probability distribution on the de-ordered pair). One then sees, after some more or less mechanical analysis of the 2-valued case, that the key property is:
$f(x,y)=|x+y| - |x-y|$, the symmetric function for which we want to prove $E[f(X,Y)] \geq 0$, is diagonally dominant. That is, $f(x,x)$ and $f(y,y)$ both are larger than or equal to $|f(x,y)|$. By symmetry we really need only to check one of those conditions, $\forall x,y \hskip4pt f(x,x) \geq |f(x,y)|$.
A function satisfying these conditions, now on a general probability space, has non-negative expectation in the 2-valued case, because for $p+q=1$ (the probability distribution), $$E[f] = p^2 f(a,a) + q^2 f(b,b) + 2pq f(a,b) \geq (p-q)^2|f(a,b)| \geq 0$$
The equality cases when expectation is zero are when $p=q$ and $f(a,b) = -f(a,a) = -f(b,b)$. For 1-valued random variables, equality holds at values where $f(p,p)=0$. Due to diagonal dominance these are null points, with $f(p,x)=0$ for all $x$.
This allows a generalization and proof of Ewan Delanoy's observation, in the general situation: if the support of the random variable has an involution $\sigma$ such that $\sigma(p)=p$ for null points and for non-null points $b=\sigma(a)$ is the unique solution of $f(a,a)=f(b,b)=-f(a,b)$, then the expectation is zero (when finite) if and only if the distribution is $\sigma$-invariant. That is because the expectation zero case must be a mixture of the $1$ and $2$-atom distributions with zero expectation, and all of those assign probability in a $\sigma$-invariant way to the atoms.
Returning to the original problem, for $f(x,y)=|x+y| - |x-y|$ with the absolute value interpreted as any norm on any vector space, diagonal dominance follows from the triangle inequality, $0$ is a unique null point, and the involution pairing every non-null $x$ with the unique solution of $f(x,y)=-f(x,x)=-f(y,y)$ is $x \to -x$. This recovers the characterization that the distribution is symmetric in the critical case, for any $f$ derived from a norm.
Note (*). In passing between ordered and unordered pairs, there might be some issue of "measurable choice" on general measure spaces, or not, and it is an interesting matter what exactly is true about that and whether any condition is needed on the measure space. In the original problem one has a selection function $(\min(X,Y),\max(X,Y))$, if needed to avoid any complications, and the same would be true in any concrete case by using order statistics on coordinates.
-
Note that mixtures of diagonally dominant 2x2 matrices are diagonal dominant in the linear algebra sense, so the terminology is consistent, and one can quote the theorem on positive-semidefinite nature of such matrices as another argument. – zyx Jun 9 '13 at 16:49
Beautiful and unexpected. All that you say here, I had already guessed more or less intuitively, but I could not find a formal proof. So this proof is like my dream come true. – Ewan Delanoy Jun 9 '13 at 16:53
I am not sure I understand fully your reduction to the two-valued case. How do you choose the unordered pair and how do you manage the case $X=Y$? – Siméon Oct 2 '13 at 6:56
I apologize for my poor English, but I cannot understand what you said in the second paragraph. Did you mean that you had proved the following statement? If $X$ and $Y$ are i.i.d. and if $f(x,y)$ satisfies that $f(x,y)=f(y,x)$ and $f(x,x)\ge |f(x,y)|$ for any $(x,y)$, then $E[f(X,Y)]\ge 0$. However, the statement is clearly false in general. @Ju'x: What is you opinion about my comment? – 23rd Oct 4 '13 at 15:25
@Landscape, it is a few months since I wrote the above, so I do not remember exactly what I meant, but I think that is for the $2$-element case and the statement of diagonal dominance needed in the general case is for mixtures (weighted averages) of $\leq 2$-element cases, $f(x,x) \geq \int_y |f(x,y)|$. If you write it in this "global" form one can write the proof without any reduction to the $2$-element case. However, if the reduction is correct then we do not need to think about globalizing the condition and can reason about $2$-element situations. – zyx Oct 4 '13 at 15:58
Let $F(x) = P(X < x)$. I assume that $F$ is differentiable, so there is no atom and $F'$ is the density of $X$ (and $Y$).
$E(|X+Y|) - E(|X-Y|) = E(|X+Y|-|X-Y|) \\ = 2E(X \; 1_{Y \ge |X|} + Y \; 1_{X \ge |Y|} - X \; 1_{-Y\ge |X|} - Y \; 1_{-X \ge |Y|}) \\ = 4E(X (1_{Y \ge |X|} - 1_{-Y \ge |X|})) \\ = 4E(X(1-F(-X)-F(X))) \\ = 4 \int_\Bbb R x(1-F(-x)-F(x))F'(x)dx \\ = 4 \int_\Bbb R (-x)(1-F(x)-F(-x))F'(-x)dx \\ = 2 \int_\Bbb R x(1-F(x)-F(-x))(F'(x)-F'(-x))dx \\ = \int_\Bbb R (1-F(x)-F(-x))^2dx - [x(1-F(x)-F(-x))^2]_\Bbb R \\ = \int_\Bbb R (1-F(x)-F(-x))^2dx \ge 0$
I am not entirely sure about the last step. $G(x) = 1-F(x)-F(-x)$ does converge to $0$ at both ends, and $G$ has finite variation. But still I am not convinced we can't carefully pick $F$ such that the bracket doesn't vanish.
However this is valid if $X$ has compact support or if $G(x)$ vanishes quickly enough (like the normal distribution, for example). In this case it also proves Ewan's conjecture: the difference is $0$ if and only if the distribution is symmetric with respect to $0$.
-
E[X] is a linear operator.
This means E[X + Y] = E[X] + E[Y]
Also, E[X - Y] = E[X] - E[Y]
The statement will be true when $E[Y] \ge 0$
-
http://mathoverflow.net/questions/123624/nearby-homomorphisms-from-compact-lie-groups-are-conjugate

# Nearby homomorphisms from compact Lie groups are conjugate
I'm looking for a proof (that I can understand) of the following fact: If $K$ and $G$ are Lie groups, and $K$ is compact, then nearby homomorphisms $K\to G$ are conjugate.
That is, if $\mathrm{Hom}(K,G)$ is the set of Lie group homomorphisms, endowed with a suitable topology (I'd like to say compact-open), then the orbits of the conjugation-by-$G$ action on it are open. (Note that there are obvious conterexamples if $K$ is not compact.)
This is referred to in The space of Lie group homomorphisms. A reference is given there to Conner-Floyd, Differentiable Periodic Maps, Ch. VIII, Lemma 38.1.
Following the reference, we see that Conner and Floyd derive this as an easy consequence of a theorem from Montgomery-Zippin, Topological Transformation Groups, p. 216. That is, by thinking about the graph $K\to K\times G$ of a homomorphism, the statement can be deduced from the following:
• If $K\subseteq G$ is a compact subgroup, then there exists a neighborhood $U$ of $K$ in $G$ such that for any subgroup $H\subset U$, there exists $g\in G$ such that $gHg^{-1}\subseteq K$. (I.e., all subgroups "close" to a compact subgroup are conjugate to a subgroup of it.)
Montgomery-Zippin's proof is an exercise involving geodesics in symmetric spaces, which is opaque to me and will probably always remain so. (They have statements such as: "there exists a neighborhood $U$ of $x$ such that for any geodesic in $U$, for any points $a,b,c$ in that order along the geodesic, $d(x,b)< \mathrm{max}(d(x,a),d(x,c))$" (quoting from memory, don't take it literally). I'm just a simple algebraic topologist, and this sort of thing goes right over my head.)
Can anyone describe a more modern proof, or give a reference? I'm imagining such a proof will be an exercise involving the exponential map. (In fact, it seems easy to prove that any subgroup "close" to the identity is trivial in just this way.)
-
You could derive this from the fixed point property for affine actions of compact groups, a la the proof of Property T. The point is that lack of local rigidity for representations of a compact group means that $H^1$ of $K$ with coefficients in Lie algebra is nonzero, which is impossible. Note also that geodesic property you are referring to is just the fact that the distance function on a totally normal neighborhood in Riemannian mld is convex. – Misha Mar 5 '13 at 16:11
Check out the short survey staff.science.uu.nl/~Schat001/survey_Lie_algebras.pdf I think it might go in the direction you want. – Claudio Gorodski Mar 5 '13 at 16:11
Claudio: it goes in a nice direction, but I don't see that it gets there. The claim is true for $\mathrm{Hom}(U(1),U(1))$, but false for $\mathrm{Hom}(\mathbb{R}, U(1))$, so I don't see how I can prove it purely from Lie algebra considerations. As Misha suggests, I probably need to know something about $H^1(K,\mathfrak{g})$, not just $H^1(\mathfrak{k},\mathfrak{g})$. What I'm missing is probably really easy. – Charles Rezk Mar 5 '13 at 19:47
That $H^1(K,\mathfrak{g})=0$ is easy: indeed a continuous 1-cocycle induces an affine continuous action of $K$ on the finite-dimensional vector space $\mathfrak{g}$, which has a fixed point iff the 1-cocycle is a 1-coboundary. Now for $K$ compact there is a fixed point (integrate along an orbit, or fix a $K$-invariant Euclidean metric and take the circumcenter of an orbit). – YCor Mar 5 '13 at 21:47
It looks like you need the vanishing of a kind of nonabelian continuous cohomology: If $K$ acts continuously on $G$ then any continuous 1-cocycle $K\to G$ (crossed homomorphism, $f(xy)=f(x)^yf(y)$) that is close enough to the trivial one is a coboundary, i.e. is determined by an element $a\in G$ ($f(x)=a^xa^{-1}$). – Tom Goodwillie Mar 6 '13 at 2:52
Here is a sketch of the proof expanding on my comment.
Suppose that there exists a sequence of continuous homomorphisms $\rho_i: K\to G$ which converges (uniformly) to a representation $\rho: K\to G$. Repeating the arguments from A.Weil "Remarks on cohomology of groups", Annals of Math. (1964), you obtain a cocycle $\zeta\in Z^1(K, {\mathfrak g}_{Ad\circ \rho})$. (Think of rescaling the group $G$, so that in the limit it becomes its Lie algebra ${\mathfrak g}$ and the homomorphism condition for $\rho_i$'s becomes a cocycle condition for $\zeta$.)
Note that Weil's arguments deal with finitely-generated groups, while your group $K$ is compact and not finitely generated, so you have to work a bit more and use uniformity of convergence to guarantee that the cocycle $\zeta$ is continuous. Now, if you have a continuous $V$-valued cocycle $\zeta$ of a topological group $K$ (where $V$ is a topological vector space), it gives you a continuous affine action of $K$ on $V$ by the formula $g\cdot v= L(g)v + \zeta(g)$, where $L(g)$ is the linear action of $g$ on $V$. In your case, $V$ is the Lie algebra ${\mathfrak g}$ of $G$ and the linear action of $K$ on $V$ is via composition of $Ad$ and $\rho$. Since $K$ is compact, the action on the finite-dimensional space $V$ in question will be isometric for some choice of an inner product, thus, by taking the center of a $K$-orbit on $V$ (with respect to the invariant Euclidean metric), we conclude that the affine action of $K$ has a fixed point. In other words, the cocycle $\zeta$ is a coboundary.
Now, the space of continuous coboundaries $B^1_{c}(K, V)$ is tangent to the orbit of $\rho$ under the group $G$ acting on representations $K\to G$ via conjugation (see Weil's paper). If, again, $K$ were finitely-generated, the space of cocycles would be finite-dimensional and you could take a complementary subspace ${\mathcal H}^1(K, V)\subset Z^1(K, V)$ to $B^1(K, V)$ and postcompose the representations $\rho_i$ with the action of $G$ by conjugation, so that the sequence $\rho_i$ converges to $\rho$ in a conical neighborhood of ${\mathcal H}^1(K, V)$. This would ensure that the cocycle $\zeta$ cannot be a coboundary, thereby giving you a contradiction. In your setting, you may have to do some analysis to make sure that this argument works (again, using uniformity of convergence).
One possible simplification would be to take a finitely-generated dense subgroup $F$ in $K$ (it always exists: The proof goes back to the Hausdorff-Banach-Tarski paradox) for the last part of the argument and argue with the restriction of the cocycle $\zeta$ and representations $\rho_i$ to $F$, thereby reducing the problem to a finite-dimensional one.
-
This is very helpful! Though I'm a little confused by the penultimate paragraph, since it seems that by this point we've shown that $Z^1=B^1$. Looking at Weil's paper, it seems that the deformation theory already tells us that $\mathrm{Hom}(K,G)$ is a manifold (assuming $K$ finitely generated), and so we are done once we have $H^1=0$. – Charles Rezk Mar 7 '13 at 14:22
Charles: Yes, you are absolutely right. I lost track of the fact that in your setting $Hom(K, G)$ is a real-analytic variety (as a homomorphism for connected $K$ is determined by the homomorphism of Lie algebras), so things are easier than I thought. Thus, everything reduces to the fact that $H^1_{cont}(K, {\mathfrak g})=0$, as in Weil's paper. There is one issue you need to check though: Uniform convergence of representations implies $C^1$-convergence (in order to use Lie algebras). But, if you use the topology of $C^1$-convergence, this will not be a problem. – Misha Mar 7 '13 at 23:01
Here is a proof sketch using cohomological ideas. The argument is in four main steps:
I. General theory of families of Lie algebra homomorphisms.
II. The case of a semisimple $G$.
III. The case of a torus (here is a major gap in my argument)
IV. combining both cases.
A preliminary observation: if $H$ is the full linear group of a complex vector space, then the result is well-known, because up to conjugacy, a homomorphism $G \to H$ is given by its character; and the set of characters is a discrete subspace of the space of all smooth maps $G \to \mathbb{C}$.
I. For arbitrary Lie algebras, $Hom_{Lie\text{-}alg}(\mathfrak{g},\mathfrak{h})$ is a real algebraic variety and thus it is locally path-connected. Therefore, nearby homomorphisms can be connected by smooth families of homomorphisms (I am not entirely sure whether this is true, but it seems so).
Now consider a smooth family $f_t$, $t \in \mathbb{R}$, of Lie algebra homomorphisms. We study the problem of finding $h: \mathbb{R} \to H$ such that $f_t (X) = Ad (h(t)) f_0 (X)$ holds for all $t$ and $X \in \mathfrak{g}$. If $f_t$ is the derivative of a smooth family of group homomorphisms $\phi_t$, then $h(t)$ conjugates $\phi_0$ to $\phi_t$ and thus solves the original problem.
Let $F_t$ be the derivative of $f_t$ with respect to $t$. Differentiating the equation $[f_t X,f_t Y]=f_t [X,Y]$ shows that $F_t\in Hom (\mathfrak{g},\mathfrak{h})$ satisfies $F_t ([X,Y])= [F_t (X);Y]-[F_t (Y);X]$. This means that $F_t$ is a $1$-cocycle in the Chevalley-Eilenberg complex for $H^{\ast}(\mathfrak{g};f_t)$. By the cohomology I mean cohomology of $\mathfrak{g}$ with coefficients in $\mathfrak{h}$, viewed as a $\mathfrak{g}$-module via $f_t$.
We can consider the collection of all Chevalley-Eilenberg complexes $C^{\ast} (\mathfrak{g},f_t)$ as a complex of vector bundles on the real line; denote the vector bundles by $C^{\ast}(\mathfrak{g},f)$. The derivatives $F_t$ are a smooth family of $1$-cocycles and $[F_t]$ is a family of cohomology classes, smooth in a certain sense. I say that $[F_t]$ is uniformly trivial if there is a smooth family $H_t$ of $0$-cochains such that $[f_t (X);H_t]=F_t (X)$ for all $t$ and all $X \in \mathfrak{g}$ (this means that $d H_t =F_t$, but in a ''uniform way'').
Suppose that the cohomology class $[F_t]$ is uniformly trivial. Then
$$f_t (X) = f_0(X) + \int_{0}^{t} F_s (X) ds = f_0(X) - \int_{0}^{t}[H_s;f_s (X)] ds;$$
in other words $f_t (X)$ solves the ODE $\frac{d}{dt} f_t (X) = - [H_t;f_t(X)]$ with initial value $f_0$. Another solution of the same ODE is $Ad (h(t)) f_0(X)$, where $h(t) \in H$ solves $\frac{d}{dt} h(t)= H_t$. So $f_t$ is conjugate to $f_0$. Vice versa, if $Ad (h(t)) f_0(X)$, then $[F_t]$ is uniformly trivial.
If $f_t$ is the derivative of a group homomorphism $G \to H$ and $G$ is compact, then pointwise triviality ($[F_t]=0$ for each $t$) implies uniform triviality. This is by the preliminary observation, which implies that $d_0:C^0 (\mathfrak{g},f) \to C^1 (\mathfrak{g},f)$ has constant rank and so its image is a vector bundle (pass to the complexification of $\mathfrak{h}$, which is unproblematic as we are only interested in the dimension of the invariant subspace). Thus we can pick a smooth $r: im (d_0) \to C^0 (\mathfrak{g},f)$ with $d_0 r = id$. Choosing $H_t:= r (F_t)$ solves the problem. Thus we arrive at
THEOREM: ''If $f_t: \mathfrak{g} \to \mathfrak{h}$ is a family of homomorphisms of Lie algebras and $H$ a Lie group with Lie algebra $\mathfrak{h}$, then there is a smooth map $h: \mathbb{R} \to H$ with $f_t = Ad (h(t))f_0$ if and only if the obstruction cocycle $[F_t]$ is uniformly trivial.''
ADDENDUM: ''If $G$ is a compact Lie group with Lie algebra $\mathfrak{g}$ and if $f_t$ is the derivative of a smooth family of homomorphisms $G \to H$, then pointwise triviality of $[F_t]$ implies uniform triviality.''
II.
Assume $G$ is semisimple. For each representation $V$ of $\mathfrak{g}$, we have an isomorphism $H^{\ast} (\mathfrak{g};V) \cong H^{\ast} (\mathfrak{g};V^{\mathfrak{g}})$, because of the compactness of $G$. But $H^1 (\mathfrak{g})=0$ since $G$ is semisimple, and so the cohomology class $[F_t]$ is zero, and by the addendum, it is uniformly trivial. Thus by the theorem, nearby homomorphisms are conjugate if $G$ is semisimple.
III.
Assume $G=T$ is a torus (sketch). Let $V$ be the universal cover (equal to $\mathfrak{t}$) and $\Gamma \subset V$ be the kernel; this is a lattice. Smooth families $f_t:\mathfrak{t} \to \mathfrak{h}$ are in bijection with smooth families $\psi_t: V \to H$ and induce families of group homomorphisms $g_t: \Gamma \to H$. As Misha indicates, there is a parallel obstruction theory for such families; with an obstruction in $H^{1}_{group}(\Gamma;\mathfrak{h})$. Consult Weil's paper quoted in Mishas answer.
There is the Van Est isomorphism $H_{Lie}^{\ast} (\mathfrak{t},\mathfrak{h}) \cong H^{\ast}_{smooth} (V,\mathfrak{h})$ to smooth group cohomology and furthermore a restriction $H^{\ast}_{smooth} (V,\mathfrak{h}) \to H^{\ast}_{group}(\Gamma; \mathfrak{h})$; this latter map is an isomorphism. This isomorphism should map the corresponding obstructions onto each other (this is the part of the argument where I do not know the details).
So a family of group homomorphisms $V \to H$ is constant up to conjugacy iff the restriction to the lattice $\Gamma$ is constant up to conjugacy. If the family $V \to H$ is the universal cover of a family $T \to H$, then the restriction to $\Gamma$ is constant; thus $T \to H$ is constant up to conjugacy.
IV.
Consider an arbitrary compact $G$. Without loss of generality, we can pass to a finite cover and thus assume $G=T \times K$, $T$ a torus and $K$ semisimple. Consider a family of group homomorphisms $\phi_t:G \to H$, with Lie algebra maps $f_t$ and obstruction cocycle $F_t$ as above. By the solution of the problem for $T$, the restriction $F_t|_{\mathfrak{t}}$ is uniformly trivial. But by the Künneth formula, the restriction $H^1 (\mathfrak{g} ) \cong H^1 (\mathfrak{k})\oplus H^1 (\mathfrak{t})\to H^1 (\mathfrak{t})$ is an isomorphism. Therefore, $[F_t]$ is trivial and thus uniformly trivial, again by the addendum.
Afterthought: It is probably better to study the whole question in the context of smooth cohomology. A family $\phi_t:G \to H$ should give an obstruction class in $H^{1}_{smooth} (G; \mathfrak{h})$. If $G$ is compact, this space is trivial by invariant integration.
-
How's this for another sketch? I don't know if it's "modern". It's a homemade attempt by another simple algebraic topologist.
Let $f:K\to G$ and $f_1:K\to G$ be continuous homomorphisms, where $K$ is a compact group and $G$ is a Lie group. There is a continuous action of $K$ on $G$ given by $f$, which we denote by writing $g^x=f(x)^{-1}gf(x)$. Define $h:K\to G$ by $f_1(x)=f(x)h(x)$. The fact that $f_1$ is another homomorphism means that $$h(xy)=h(x)^yh(y).$$ Call $h$ a crossed homomorphism if it satisfies this equation. Assume that $f_1$ is close to $f$. This means that $h$ is close to the constant function taking all of $K$ to the identity in $G$. We want $a\in G$ such that $f_1(x)=af(x)a^{-1}$. This is equivalent to $$h(x)=a^xa^{-1}.$$ So forget about $f$; the problem is to show that if $K$ acts continuously on $G$ by homomorphisms then any small crossed homomorphism $h:K\to G$ can be expressed in this way for some $a$.
Actually $G$ acts on the set of crossed homomorphisms $h$ as follows: given $a\in G$ and $h$, let $h'(x)=(a^x)^{-1}h(x)a$. So the problem is to show that if $h$ is small then some $a$ takes it to the trivial map.
First look at the case where $G$ is abelian. Switching to additive notation, we have $G$ acting linearly on a finite-dimensional vector space and the problem is to show that if $h$ is such that $$h(xy)=h(x)^y+h(y)$$ then there exists $a$ such that $$h(x)=a^x-a.$$ Let $-a$ be the average $Av_x h(x)$. Then $$-a=Av_x h(xy)=Av_x (h(x)^y + h(y))=-a^y+h(y).$$ Thus $h(x)=a^x-a$.
Now for the general case: use linear coordinates in $G$ near the identity, writing the group multiplication as $x\ast y=x+y+Q(x,y)$ where $|Q(x,y)|\le |x||y|$ if $|x|$ and $|y|$ are small enough. So we have $K$ acting on a neighborhood of $0$ so as to preserve the multiplication $\ast$, and we have a small $h$ such that $$h(xy)=h(x)^y\ast h(y)$$ and we want to be able to modify $h$ into $$h'(x)=(a^x)^{-1}\ast h(x)\ast a$$ so as to make it $0$. Do it in steps. First, again let $a$ be the average of $h$. Now, modulo second-order terms, averaging $h(xy)=h(x)^y\ast h(y)$ over $x$ again we have $a=a^y+h(y)$. That is, if we modify $h$ by this $a$ then the $h'$ that we get is zero modulo second-order terms. If $h$ is small enough, $|h(x)|\le \epsilon$, then $|h'(x)|\le \epsilon ^2$.
Repeating this infinitely often and taking a limit, surely this gives $0$ as the result of acting on the original $h$ by some $a\in G$, the infinite product of smaller and smaller elements.
-
I'm worried about averaging $h(x)=h(x)^y*h(y)$ over $x$. In the local coordinates, $x\mapsto x^y$ isn't linear, and a non-linear transformation might not play well with the integral. – Charles Rezk Mar 8 '13 at 15:24
That is the worrisome part, isn't it? But I believe that the error should just give more second-order terms. – Tom Goodwillie Mar 8 '13 at 16:47
http://physics.stackexchange.com/questions/4395/asymptotic-curvature-of-the-universe-and-correlation-with-local-curvature

# Asymptotic curvature of the universe and correlation with local curvature
There is not-so-rough evidence that at very large scales the universe is flat. However, we see everywhere that there are local lumps of matter with positive curvature. So I have several questions regarding this:
1) Does the fact that a manifold has a) asymptotic (space) curvature zero and b) local inhomogeneities with positive (space) curvature imply that there will be regions with negative (space) curvature?
2) Does a region of negative (space) curvature imply dark energy in that region?
3) Assuming the answers to both 1) and 2) are true: does this represent an independent confirmation of dark energy? Or is there somehow a geometric relationship relating asymptotic flatness to accelerated expansion (the traditional reason to introduce dark energy in the first place)?
EDITED: to reflect distinction between space and space-time curvatures.
-
You have to be careful to distinguish between curvature of space and curvature of spacetime. When we say that the Universe is flat on large scales, we're talking about space -- that is, about a slice through spacetime at constant cosmic time. With respect to spatial curvature, statement 1 is correct: we do have zero curvature on average, and positive curvature in some regions, which implies negative curvature in other regions.
But statement 2 doesn't follow from statement 1, because in this case we want to talk about spacetime curvature. To be specific, ordinary matter produces positive spacetime curvature (i.e. a positive Ricci scalar), and dark energy produces negative spacetime curvature. But spatial curvature and spacetime curvature are different things.
-
great answer. thanks i didn't realise that the paper was relevant only to space curvature – lurscher Feb 1 '11 at 21:54
Ted, you said: "positive curvature in some regions, which implies negative curvature in other regions".. Are you implying that the density of dark energy is larger in certain regions? – dbrane Feb 1 '11 at 22:16
@dbrane -- No. Once again: the statements about dark energy are about spacetime curvature, while the statements about positive and negative curvature are about spatial curvature. And anyway, the fact that curvature varies from place to place doesn't imply that the density of dark energy varies from place to place -- at most, it implies that the total density varies from place to place. DE could be uniform, with the variations caused by the other stuff. – Ted Bunn Feb 1 '11 at 22:18
@dbrane, the first question doesn't mention dark energy at all, 1st question is entirely about geometry – lurscher Feb 1 '11 at 22:31
The universe is flat spatially, but the space is being stretched with time on the Hubble frame. This means that the way space is embedded in spacetime is such that there is curvature, such as a “time-time” Ricci curvature $R^{tt}$. This solution to the Einstein field equations is such that the pressure is equal to the negative of the energy density of the vacuum. So dark energy, which is associated with this pressure, is due to a positive energy density. The Hamiltonian for this is $H~=~\Lambda x^2/6$, which is similar to the spring potential. However, the force acts in the same direction as the displacement.
-
thanks. Please, can you rephrase your question as answers to each specific subquestion? – lurscher Feb 1 '11 at 21:42
My main comment is on how dark energy is not due to negative energy. That is clearly not the case. – Lawrence B. Crowell Feb 1 '11 at 21:54
and why negative energy is relevant to the question? are you saying that negative energy is required on regions with negative (space-time) curvature/Ricci scalar? – lurscher Feb 1 '11 at 22:08
Negative energy suffers from a range of problems, where here I am thinking of $T^{00}$. Dark energy is not a case of negative energy. That is the main thrust of what I wrote. – Lawrence B. Crowell Feb 1 '11 at 22:34
That's correct. But I never mentioned negative energy in the question; that is why I asked why you implied that it was relevant. – lurscher Feb 1 '11 at 22:41
http://mathhelpforum.com/math-topics/65444-volume-metal-block.html

# Thread: Volume of Metal Block
1. ## Volume of Metal Block
How do I solve this:
A rectangular block of metal with a square cross-section has a total surface area of 625 cm². Find the maximum volume of the block of metal.
Help!!
2. Let the rectangular block have dimensions x, x and y. (2 x's because the cross section is a square).
So the surface area is given by $2x^2$ (the two end faces) $+ 4xy$ (the four length faces).
$2x^2 + 4xy = 625$
$y = \frac{625 - 2x^2}{4x}$
Now,
Volume = 'area of cross section' times 'length' = $x^2y$.
$= x^2 \times \frac{625 - 2x^2}{4x}$
$= x \times \frac{625 - 2x^2}{4}$
$= \frac{1}{4} \left({625x - 2x^3}\right)$
$\frac{dV}{dx} = \frac{1}{4} (625 - 6x^2) = 0$ (for max)
$6x^2 = 625$
$x = \sqrt{\frac{625}{6}}$
Once you have calculated this value, substitute into the equation for y. Use the Volume = $x^2y$ formula to find the max volume.
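To finish the computation numerically (a short sketch; values rounded):

```python
from math import sqrt

x = sqrt(625 / 6)                # from 6x^2 = 625, x ~ 10.206 cm
y = (625 - 2 * x**2) / (4 * x)   # back-substitute into the surface-area equation
V = x**2 * y                     # maximum volume
print(x, y, V)                   # V ~ 1063.1 cm^3
```

Note that $y$ comes out equal to $x$: for a fixed surface area, the maximum-volume block is a cube, which is a good sanity check.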
https://www.physicsforums.com/threads/is-it-true-photons-will-not-produce-an-interference-pattern-in-a-vacuum.353724/

# Is it true photons will not produce an interference pattern in a vacuum?
1. Nov 11, 2009
### CosmicVoyager
"In this study, it is shown with reasons that superposition principle does not work in vacuum. This case can be observed by Young type double slit experiment to be carried out. Since field-field interaction is carried through charged particles, in the absence of charged particles linear superposition of two fields is not possible and interference will not be observed."
http://arxiv.org/abs/physics/0212103
2. Nov 11, 2009
### ZapperZ
Staff Emeritus
Please note that while we do tolerate certain arxiv references for high energy physics and string/etc. subject areas (because that is a common practice within that community), the rest of the physics subject areas still adhere to peer-reviewed publications. This applies to these types of articles as well in areas of basic QM. Arxiv is not immune to having strange stuff that goes nowhere fast.
So unless it has been published, it should not be a topic of discussion on here just yet.
Zz.
http://www.data-automaton.com/2018/01/09/nonlinear-regression/

# Nonlinear Regression
##### January 9, 2018
Written by Boutros El-Gamil
# 1. Idea
In a previous post, we saw the linear relationship between the heights and weights of a group of persons (see the figure below), where both variables grow and shrink together.
This is, however, not the case with every two variables. For instance, the average heart rate of a person has a nonlinear relationship with the number of minutes that person spends in physical exercise. The heart rate increases rapidly during the first 5 minutes; afterwards, it starts to decrease.
As another example, levels of the hormone testosterone increase in men between the ages of 25 and 44 (on average). Afterwards, the testosterone level starts to decrease monotonically as men grow older.
In such scenarios, a linear function cannot express the relation between these variables, and a nonlinear (polynomial) function should be used instead.
Suppose we have a set of observations represented by two dimensions $$x$$ and $$y$$, where these observations have a nonlinear relation, like the figures below.
Now suppose you need to find a function to represent the relation between these 2 variables (also called features). In this case, the function should be a nonlinear (polynomial of degree $$>1$$) function.
As a reminder, a polynomial function of degree $$>1$$ is a function that has more than one direction. In the figure below, the left-side function has two directions, while the right-hand function has three directions.
# 2. Theory
As we explained in the linear regression article, we want to find a nonlinear regression function (like the blue function below) that minimizes the distances (vertical green lines) between the true and predicted values of the dependent variable $$y$$ (i.e. the least squares approach).
The error function we need to minimize is defined as:
E(\mathbf{w}) = \frac{1}{N} \sum\limits_{i=1}^N [y^{(i)}-f(x^{(i)},\mathbf{w})]^2\tag{1}
which is the same as that of linear regression. The only difference is the hypothesis function $$f(x^{(i)},\mathbf{w})$$. This function is nonlinear with respect to the predictor $$x$$:
f(x^{(i)},\mathbf{w}) = w_0 {x^{(i)}}^0 + w_1 {x^{(i)}}^1 + w_2 {x^{(i)}}^2 + w_3 {x^{(i)}}^3 + \dots + w_P {x^{(i)}}^P\tag{2}
Or in a compact form:
f(x^{(i)},\mathbf{w}) = \sum\limits_{p=0}^P w_p {x^{(i)}}^p\tag{3}
where $$P$$ is the polynomial degree of function $$f(x^{(i)},\mathbf{w})$$.
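As a concrete sketch of equation (3) in code (NumPy; the function name is my own choice):

```python
import numpy as np

def predict(x, w):
    """f(x, w) = sum_p w[p] * x**p, i.e. equation (3); w = [w_0, ..., w_P]."""
    powers = np.vander(np.atleast_1d(x), N=len(w), increasing=True)  # columns x^0 ... x^P
    return powers @ np.asarray(w)
```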
In order to minimize the function $$E(\mathbf{w})$$ (equation 1), we need to set its derivatives with respect to $$\mathbf{w}$$ to $$0$$:
\nabla E(\mathbf{w})= 0\tag{4}
Let’s compute the derivative of $$E(\mathbf{w})$$ with respect to each coefficient $$w \in \mathbf{w}$$:
• For $$w_0$$:
\frac{\partial E(\mathbf{w})}{\partial w_0}= \frac{1}{N} \sum\limits_{i=1}^N \frac{\partial [y^{(i)} - (w_0 + w_1 {x^{(i)}} + w_2 {x^{(i)}}^2 + w_3 {x^{(i)}}^3 + \dots + w_P {x^{(i)}}^P)]^2}{\partial w_0}\tag{5}
\frac{\partial E(\mathbf{w})}{\partial w_0}= \frac{1}{N} \sum\limits_{i=1}^N \frac{\partial [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)]^2}{\partial w_0}\tag{6}
\frac{\partial E(\mathbf{w})}{\partial w_0}= \frac{1}{N} \sum\limits_{i=1}^N \frac{\partial [{y^{(i)}}^2 - 2 w_0 y^{(i)} - 2 y^{(i)} \sum\limits_{p=1}^P w_p {x^{(i)}}^p + w_0^2 + 2w_0 \sum\limits_{p=1}^P w_p {x^{(i)}}^p + (\sum\limits_{p=1}^P w_p {x^{(i)}}^p)^2]}{\partial w_0}\tag{7}
\frac{\partial E(\mathbf{w})}{\partial w_0}= \frac{1}{N} \sum\limits_{i=1}^N [-2 y^{(i)} + 2w_0 + 2 \sum\limits_{p=1}^P w_p {x^{(i)}}^p]\tag{8}
\frac{\partial E(\mathbf{w})}{\partial w_0}= \frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)]\tag{9}
• For $$w_1$$:
\frac{\partial E(\mathbf{w})}{\partial w_1}= \frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)]x^{(i)}\tag{10}
• For $$w_2$$:
\frac{\partial E(\mathbf{w})}{\partial w_2}= \frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)]{x^{(i)}}^2\tag{11}
• For $$w_p$$ of any polynomial degree $$p \in [0:P]$$:
\frac{\partial E(\mathbf{w})}{\partial w_p}= \frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)] {x^{(i)}}^p, \quad \forall p \in [0:P]\tag{12}
# 3. Closed Form Approach
[watch video]
In this method, we convert equation (12) into a system of $$P+1$$ equations, and solve the generated system using matrix operations. Let's set equation (12) to 0 and re-order it:
\frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)] {x^{(i)}}^p = 0\tag{13}
\sum\limits_{i=1}^N {x^{(i)}}^p y^{(i)} = \sum\limits_{i=1}^N {x^{(i)}}^p (w_0 + \sum\limits_{q=1}^P w_q {x^{(i)}}^q)\tag{14}
\sum\limits_{i=1}^N {x^{(i)}}^p y^{(i)} = w_0 \sum\limits_{i=1}^N {x^{(i)}}^p + \sum\limits_{q=1}^P w_q \sum\limits_{i=1}^N {x^{(i)}}^{p+q}\tag{15}
Writing equation (15) for each $$p \in [0:P]$$ gives a system of equations that is linear in the coefficients $$\mathbf{w}$$; in matrix form:
\begin{gather}
\begin{bmatrix}
\sum\limits_{i=1}^N y^{(i)} \\\\
\sum\limits_{i=1}^N x^{(i)} y^{(i)} \\\\
\sum\limits_{i=1}^N {x^{(i)}}^2 y^{(i)} \\\\
\vdots \\\\
\sum\limits_{i=1}^N {x^{(i)}}^p y^{(i)}
\end{bmatrix}
=
\begin{bmatrix}
N & \sum\limits_{i=1}^N x^{(i)} & \sum\limits_{i=1}^N {x^{(i)}}^2 & \dots & \sum\limits_{i=1}^N {x^{(i)}}^p \\\\
\sum\limits_{i=1}^N x^{(i)} & \sum\limits_{i=1}^N {x^{(i)}}^2 & \sum\limits_{i=1}^N {x^{(i)}}^3 & \dots & \sum\limits_{i=1}^N {x^{(i)}}^{p+1} \\\\
\sum\limits_{i=1}^N {x^{(i)}}^2 & \sum\limits_{i=1}^N {x^{(i)}}^3 & \sum\limits_{i=1}^N {x^{(i)}}^4 & \dots & \sum\limits_{i=1}^N {x^{(i)}}^{p+2} \\\\
\vdots & \vdots & \vdots & \ddots & \vdots \\\\
\sum\limits_{i=1}^N {x^{(i)}}^p & \sum\limits_{i=1}^N {x^{(i)}}^{p+1} & \sum\limits_{i=1}^N {x^{(i)}}^{p+2} & \dots & \sum\limits_{i=1}^N {x^{(i)}}^{2p}
\end{bmatrix}
\begin{bmatrix}
w_0 \\\\
w_1 \\\\
w_2 \\\\
\vdots \\\\
w_p
\end{bmatrix} \tag{16}
\end{gather}
which has the form:
\mathbf{B} = \mathbf{Aw}\tag{17}
then the vector $$\mathbf{w}$$ is computed as:
\mathbf{w} = \mathbf{A}^{-1} \mathbf{B}\tag{18}
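Here is a minimal sketch of equations (16)-(18) in NumPy (illustrative names; we solve the linear system rather than forming $$\mathbf{A}^{-1}$$ explicitly, which is numerically safer). For the Vandermonde matrix $$V$$ with rows $$[1, x^{(i)}, \dots, {x^{(i)}}^P]$$, the products $$V^T V$$ and $$V^T \mathbf{y}$$ are exactly the matrix $$\mathbf{A}$$ and vector $$\mathbf{B}$$ of equation (16):

```python
import numpy as np

def fit_poly_closed_form(x, y, P):
    """Solve A w = B of equation (16) for a degree-P polynomial."""
    V = np.vander(x, N=P + 1, increasing=True)  # row i: [1, x_i, x_i^2, ..., x_i^P]
    A = V.T @ V                                 # (P+1)x(P+1) matrix of power sums
    B = V.T @ y                                 # right-hand side: sums of x^p * y
    return np.linalg.solve(A, B)                # w = A^{-1} B, equation (18)

# Example: recover the coefficients of a noisy cubic
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 50)
y = 1 - 2 * x + 0.5 * x**3 + 0.1 * rng.standard_normal(x.size)
print(fit_poly_closed_form(x, y, P=3))          # approximately [1, -2, 0, 0.5]
```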
# 4. Gradient Descent Approach

We follow the same gradient descent algorithm as in linear regression (listed below). The only difference is that the hypothesis function inside the error function is now the polynomial of equation (3), with the derivatives of equation (12).
Algorithm1 GradientDescent($$E(\mathbf{w}),\mathbf{w},\eta$$)
1. Input:
2. $$E(\mathbf{w})$$: cost function
3. $$\mathbf{w}$$: initial values of coefficients vector
4. $$\eta$$: learning rate
5. Output:
6. $$\mathbf{w}$$: updated values of coefficients vector
7. Procedure:
8. repeat
1. $$w_p=w_p - \eta \frac{\partial E(\mathbf{w})}{\partial w_p}, \forall w_p \in \mathbf{w}$$
9. until $$\nabla E(\mathbf{w})= 0$$
10. return $$\mathbf{w}$$
The coefficients $$w_p$$ in step (8.A.) are updated as follows:
w_p = w_p - \eta \frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)] {x^{(i)}}^p\tag{19}
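A sketch of Algorithm 1 with the update of equation (19) (the learning rate and iteration count are illustrative and may need tuning for other data):

```python
import numpy as np

def fit_poly_gradient_descent(x, y, P, eta=0.01, iters=20000, tol=1e-10):
    """Algorithm 1: repeat w <- w - eta * grad E(w) until grad E(w) ~ 0."""
    V = np.vander(x, N=P + 1, increasing=True)   # columns x^0 ... x^P
    w = np.zeros(P + 1)
    N = len(y)
    for _ in range(iters):
        residual = y - V @ w                     # y_i - f(x_i, w)
        grad = (-2.0 / N) * (V.T @ residual)     # equation (12), all p at once
        w -= eta * grad                          # equation (19)
        if np.linalg.norm(grad) < tol:           # stopping rule of step 9
            break
    return w
```

On well-scaled data this should approach the closed-form solution of equation (18), only more slowly.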
# 5. Multivariate Nonlinear Regression
Let’s consider the following multivariate nonlinear function (equation 20) between the dependent variable $$y$$ and the vector of independent variables $$\mathbf{x} = [x_1,x_2]$$:
f(\mathbf{x}^{(i)},\mathbf{w}) = w_0 + w_1 x_1^{(i)} + w_2 x_2^{(i)} + w_3 {x_1^{(i)}}^2 + w_4 x_1^{(i)} x_2^{(i)} + w_5 {x_2^{(i)}}^2\tag{20}
This nonlinear equation can be transformed into a linear equation by introducing a new vector $$\mathbf{z}$$:
\mathbf{z} = [x_1^{(i)},x_2^{(i)}, {x_1^{(i)}}^2, x_1^{(i)} x_2^{(i)}, {x_2^{(i)}}^2]\tag{21}
By substituting $$\mathbf{z}$$ for its corresponding values in equation (20), we obtain the following linear equation:
f(\mathbf{z},\mathbf{w}) = w_0 + w_1 z_1 + w_2 z_2 + w_3 z_3 + w_4 z_4 + w_5 z_5\tag{22}
Equation (22) converts the multivariate nonlinear regression into a linear regression, so we can use the same methods as in multivariate linear regression.
## 5.1 Closed Form Approach
We build the following $$N \times (M+1)$$ matrix $$\mathbf{X}$$, where $$M=5$$ is the number of transformed features:
\begin{gather}
\mathbf{X}
= \begin{bmatrix}
1 & z_1^{(1)} & z_2^{(1)} & z_3^{(1)} & z_4^{(1)} & z_5^{(1)} \\\\
1 & z_1^{(2)} & z_2^{(2)} & z_3^{(2)} & z_4^{(2)} & z_5^{(2)} \\\\
\vdots \\\\
1 & z_1^{(N)} & z_2^{(N)} & z_3^{(N)} & z_4^{(N)} & z_5^{(N)}
\end{bmatrix}
= \begin{bmatrix}
1 & x_1^{(1)} & x_2^{(1)} & {x_1^{(1)}}^2 & x_1^{(1)} x_2^{(1)} & {x_2^{(1)}}^2 \\\\
1 & x_1^{(2)} & x_2^{(2)} & {x_1^{(2)}}^2 & x_1^{(2)} x_2^{(2)} & {x_2^{(2)}}^2 \\\\
\vdots \\\\
1 & x_1^{(N)} & x_2^{(N)} & {x_1^{(N)}}^2 & x_1^{(N)} x_2^{(N)} & {x_2^{(N)}}^2
\end{bmatrix}\tag{23}
\end{gather}
and an $$N\times 1$$ target vector $$\mathbf{Y}$$:
\begin{gather}
\mathbf{Y}
= \begin{bmatrix}
y^{(1)} \\\\
y^{(2)} \\\\
\vdots \\\\
y^{(N)}
\end{bmatrix}\tag{24}
\end{gather}
The vector of coefficients $$\mathbf{w}$$ is calculated as:
\mathbf{w} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{Y}\tag{25}
## 5.2 Gradient Descent Approach

We use the same algorithm as above, with the following coefficient updates:
w_0 = w_0 - \eta\frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{j=1}^5 w_j z_j^{(i)})]\tag{26}
w_j = w_j - \eta\frac{-2}{N} \sum\limits_{i=1}^N z_j^{(i)} [y^{(i)} - (w_0 + \sum\limits_{k=1}^5 w_k z_k^{(i)})]\tag{27}
We then substitute each $$z_j$$ in equations (26) and (27) with its corresponding value in $$\mathbf{x}$$ from equation (21).
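Putting section 5 together as a sketch (illustrative data and names): expand $$(x_1, x_2)$$ into the feature vector $$\mathbf{z}$$ of equation (21), then reuse the linear closed form of equation (25):

```python
import numpy as np

def expand_features(x1, x2):
    """Rows [1, x1, x2, x1^2, x1*x2, x2^2] -- the matrix X of equation (23)."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 2 + x1 - 3 * x2 + 4 * x1**2 + x1 * x2 - 2 * x2**2 + 0.05 * rng.standard_normal(200)

X = expand_features(x1, x2)
w = np.linalg.solve(X.T @ X, X.T @ y)   # equation (25)
print(w)                                # approximately [2, 1, -3, 4, 1, -2]
```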
# 6. Polynomial Degree Selection (Bias-Variance Tradeoff)
A question may arise in the discussion of nonlinear regression: which polynomial degree ($$p$$) should we use to build our regression function? There is no standard answer to this question. A good way to estimate the value of $$p$$ is to try out different values and select among them. Your selection should lie in the middle between the underfitting and overfitting situations.
On the one hand, the case of underfitting (e.g. $$p=1$$) generates a regression function which is highly biased toward erroneous assumptions about the training data (like the case in the figure below, where a nonlinear variable is predicted by a linear function), and is thus far from capturing the relationship between the target and predictor variables.
On the other hand, the case of overfitting (e.g. $$p=N-1$$ in the figure below) generates a regression function with high variance, making it highly sensitive to small fluctuations in the training data, and accordingly recording an error rate close to $$0$$ on the training set. Overfitting usually generates models that cannot deal precisely with unseen data, as these models are overly tuned to the training set of observations, and the corresponding regression functions are usually quite complicated.
The proper regression model (like the one in the following figure) is usually found by making a tradeoff between the underfitting and overfitting scenarios, where the regression function minimizes the error rate while keeping itself as simple as possible (i.e. with the lowest possible number of coefficients).
## 6.1 Regularization
A common technique to overcome the problem of overfitting is to add a penalty term to the error function $$E(\mathbf{w})$$. The role of this penalty term (sometimes also called a smoothing term) is to keep the coefficients of the regression function small. Now, we define the new error function ($$\tilde{E}$$) to be:
\tilde{E}(\mathbf{w}) = \frac{1}{N} \sum\limits_{i=1}^N [y^{(i)}-f(x^{(i)},\mathbf{w})]^2 + \frac{1}{2} \lambda \sum\limits_{p=1}^P w_p^2\tag{28}
The $$\lambda$$ term in equation (28) is called the regularization coefficient, and its role is to control the simplicity of the error function in terms of the length of $$\mathbf{w}$$. If $$\lambda$$ is large, the function tends to be simple, and we move toward the underfitting situation, while if $$\lambda$$ is small, the function tends to be complex and we move toward the overfitting situation. If $$\lambda = 0$$, the regularization term is deleted and we return to definition (1) of the error function. The following plots show nonlinear regression functions generated by equation (28) with different $$\lambda$$ values.
Please also note that the intercept coefficient $$w_0$$ is excluded from the penalty term in (28), to make our error function independent of the origin of the target variable $$y$$. The intercept coefficient $$w_0$$ has nothing to do with the simplicity or complexity of the regression function. The only effect of changing $$w_0$$ is to move the fitted curve from its place along the target axis. Therefore, $$w_0$$ does not need to be regularized with the other coefficients.
Now, we will expand equation (28) by substituting the hypothesis function with the compact form of nonlinear regression formula in equation (3):
\tilde{E}(\mathbf{w}) = \frac{1}{N} \sum\limits_{i=1}^N [y^{(i)}-(\sum\limits_{p=0}^P w_p {x^{(i)}}^p)]^2 + \frac{1}{2} \lambda \sum\limits_{p=1}^P w_p^2\tag{29}
and compute the derivative with respect to each $$w_p \in \mathbf{w}$$:
\frac{\partial \tilde{E}(\mathbf{w})}{\partial w_p}= \frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)] {x^{(i)}}^p + \frac{\partial (\frac{1}{2} \lambda w_p^2)}{\partial w_p}\tag{30}
which is reduced to:
\frac{\partial \tilde{E}(\mathbf{w})}{\partial w_p}= \frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)] {x^{(i)}}^p + \lambda w_p\tag{31}
Putting equation (31) to zero will generate:
\frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - \sum\limits_{p=0}^P w_p {x^{(i)}}^p] {x^{(i)}}^p + \lambda w_p = 0\tag{32}
### 6.1.1 Closed Form Approach
Let’s re-write function (32) into matrix form:
\frac{-2}{N} \mathbf{X}^T [\mathbf{Y} - \mathbf{Xw}] + \lambda \mathbf{w} = 0\tag{33}
where $$\mathbf{X}$$ is the design matrix whose $$i$$-th row is $$[1, x^{(i)}, {x^{(i)}}^2, \dots, {x^{(i)}}^P]$$, $$\mathbf{Y}$$ is the target vector of equation (24), and $$\mathbf{w}$$ is the coefficients vector. Now we re-arrange equation (33), absorbing the constant factor $$N/2$$ into $$\lambda$$:
\mathbf{X}^T \mathbf{Y} = \mathbf{X}^T \mathbf{Xw} + \lambda \mathbf{w}\tag{34}
\mathbf{X}^T \mathbf{Y} = [\mathbf{X}^T \mathbf{X} + \lambda\mathbf{I}] \mathbf{w} \tag{35}
Coefficients are calculated as:
\mathbf{w} = [\mathbf{X}^T \mathbf{X} + \lambda\mathbf{I}]^{-1} \mathbf{X}^T \mathbf{Y} \tag{36}
Conveniently, putting $$\lambda = 0$$ in (36) gives us the same solution as equation (25).
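Equation (36) is ordinary ridge regression on the polynomial features. A sketch, with $$w_0$$ left unpenalized as discussed after equation (28):

```python
import numpy as np

def fit_poly_ridge(x, y, P, lam):
    """w = (X^T X + lam*I)^{-1} X^T Y, equation (36)."""
    X = np.vander(x, N=P + 1, increasing=True)
    penalty = lam * np.eye(P + 1)
    penalty[0, 0] = 0.0                  # do not regularize the intercept w_0
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)
```

Increasing lam shrinks the higher-order coefficients toward zero, moving the fit from the overfitting regime toward the underfitting regime.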
Please note that both equations (25) and (36) are valid closed-form solutions for both uni- and multivariate regression.
### 6.1.2 Gradient Descent Approach

In order to apply the gradient descent algorithm, we add the penalty term $$\lambda w_p$$ to the coefficient updates:
w_0 = w_0 - \eta\frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)]\tag{37}
w_p = w_p - \eta \left( \frac{-2}{N} \sum\limits_{i=1}^N [y^{(i)} - (w_0 + \sum\limits_{p=1}^P w_p {x^{(i)}}^p)]{x^{(i)}}^p + \lambda w_p \right)\tag{38}
The intercept coefficient $$w_0$$ is computed the same as in equation (26). The formulas (37) and (38) are valid for both uni- and multivariate regressions.
https://4gravitons.com/2019/11/22/qcd-and-reductionism-stranger-than-youd-think/

# QCD and Reductionism: Stranger Than You’d Think
Earlier this year, I made a list of topics I wanted to understand. The most abstract and technical of them was something called “Wilsonian effective field theory”. I still don’t understand Wilsonian effective field theory. But while thinking about it, I noticed something that seemed weird. It’s something I think many physicists already understand, but that hasn’t really sunk in with the public yet.
There’s an old problem in particle physics, described in many different ways over the years. Take our theories and try to calculate some reasonable number (say, the angle an electron turns in a magnetic field), and instead of that reasonable number we get infinity. We fix this problem with a process called renormalization that hides that infinity away, changing the “normalization” of some constant like a mass or a charge. While renormalization first seemed like a shady trick, physicists eventually understood it better. First, we thought of it as a way to work around our ignorance, that the true final theory would have no infinities at all. Later, physicists instead thought about renormalization in terms of scaling.
Imagine looking at the world on a camera screen. You can zoom in, or zoom out. The further you zoom out, the more details you’ll miss: they’re just too small to be visible on your screen. You can guess what they might be, but your picture will be different depending on how you zoom.
In particle physics, many of our theories are like that camera. They come with a choice of “zoom setting”, a minimum scale where they still effectively tell the whole story. We call theories like these effective field theories. Some physicists argue that these are all we can ever have: since our experiments are never perfect, there will always be a scale so small we have no evidence about it.
In general, theories can be quite different at different scales. Some theories, though, are especially nice: they look almost the same as we zoom in to smaller scales. The only things that change are the mass of different particles, and their charges.
One theory like this is Quantum Chromodynamics (or QCD), the theory of quarks and gluons. Zoom in, and the theory looks pretty much the same, with one crucial change: the force between particles gets weaker. There’s a number, called the “coupling constant“, that describes how strong a force is; think of it as sort of like an electric charge. As you zoom in to quarks and gluons, you find you can still describe them with QCD, just with a smaller coupling constant. If you could zoom “all the way in”, the constant (and thus the force between particles) would be zero.
This makes QCD a rare kind of theory: one that could be complete to any scale. No matter how far you zoom in, QCD still “makes sense”. It never gives contradictions or nonsense results. That doesn’t mean it’s actually true: it interacts with other forces, like gravity, that don’t have complete theories, so it probably isn’t complete either. But if we didn’t have gravity or electricity or magnetism, if all we had were quarks and gluons, then QCD could have been the final theory that described them.
And this starts feeling a little weird, when you think about reductionism.
Philosophers define reductionism in many different ways. I won’t be that sophisticated. Instead, I’ll suggest the following naive definition: Reductionism is the claim that theories on larger scales reduce to theories on smaller scales.
Here “reduce to” is intentionally a bit vague. It might mean “are caused by” or “can be derived from” or “are explained by”. I’m gesturing at the sort of thing people mean when they say that biology reduces to chemistry, or chemistry to physics.
What happens when we think about QCD, with this intuition?
QCD on larger scales does indeed reduce to QCD on smaller scales. If you want to ask why QCD on some scale has some coupling constant, you can explain it by looking at the (smaller) QCD coupling constant on a smaller scale. If you have equations for QCD on a smaller scale, you can derive the right equations for a larger scale. In some sense, everything you observe in your larger-scale theory of QCD is caused by what happens in your smaller-scale theory of QCD.
But this isn’t quite the reductionism you’re used to. When we say biology reduces to chemistry, or chemistry reduces to physics, we’re thinking of just a few layers: one specific theory reduces to another specific theory. Here, we have an infinite number of layers, every point on the scale from large to small, each one explained by the next.
Maybe you think you can get out of this by saying that everything should reduce to the smallest scale. But remember, the smaller the scale the smaller our "coupling constant", and the weaker the forces between particles. At "the smallest scale", the coupling constant is zero, and there is no force. It's only when you put your hand on the zoom knob and start turning that the force starts to exist.
It’s reductionism, perhaps, but not as we know it.
Now that I understand this a bit better, I get some of the objections to my post about naturalness a while back. I was being too naive about this kind of thing, as some of the commenters (particularly Jacques Distler) noted. I believe there’s a way to rephrase the argument so that it still works, but I’d have to think harder about how.
I also get why I was uneasy about Sabine Hossenfelder’s FQXi essay on reductionism. She considered a more complicated case, where the chain from large to small scale could be broken, a more elaborate variant of a problem in Quantum Electrodynamics. But if I’m right here, then it’s not clear that scaling in effective field theories is even the right way to think about this. When you have an infinite series of theories that reduce to other theories, you’re pretty far removed from what most people mean by reductionism.
Finally, this is the clearest reason I can find why you can’t do science without an observer. The “zoom” is just a choice we scientists make, an arbitrary scale describing our ignorance. But without it, there’s no way to describe QCD. The notion of scale is an inherent and inextricable part of the theory, and it doesn’t have to mean our theory is incomplete.
Experts, please chime in here if I’m wrong on the physics here. As I mentioned at the beginning, I still don’t think I understand Wilsonian effective field theory. If I’m right though, this seems genuinely weird, and something more of the public should appreciate.
## 27 thoughts on “QCD and Reductionism: Stranger Than You’d Think”
1. Wouter M.
hi,
for those of us not in Particle Physics, it would be helpful to give an example or analogy for a case where the scale factor changes the physical effect of an interaction. As I see it, the familiar physical laws like electromagnetism are independent of scale: E = c q1 q2 / r, the electrostatic potential between two charges. The value of E changes as r changes, but the constant c does not change. So, this is not a good example of the effect you discuss. But what is?
What is the simplest case where renormalisation is required?
Is there an analogue in pure mathematics where a similar “technique” is used to get rid of infinities? Surely not the old L’Hopital’s rule?
1. 4gravitons Post author
This is a tricky question to answer, in part because some of what I’m talking about is genuinely a quantum effect. So it’s hard to find a physics analogy…but here’s a vaguer one:
Let’s say instead of physics, you’re studying international relations. You could think about different nations as having their own interests and desires, communicating and competing with each other. Alternatively, you could zoom in to a smaller scale, and look at the behavior of individual people. Once again, you’d model them with their own interests and desires, communicating and competing. Your models might look very similar, but some aspects would be different: maybe nations have more trouble communicating than people do, or have stronger “desires”. You’ve changed the scale and you’ve got almost the same theory, but a few of your “constants” have changed.
Quantum field theory is like that, but instead of just two “scales” it’s continuous. You need to do this in pretty much every quantum field theory. Only a few special ones have no infinities, and none of those are ones we use for everyday physics. Quantum electrodynamics, quantum chromodynamics, the Higgs boson, they all need renormalization.
There are lots of cases in mathematics where one regularizes to avoid infinities in one way or another. There are many different ways of doing this, depending on what you’re interested in doing. Here’s just one context.
2. itaibn
I’d say that the scaling laws of classical materials can be considered a simpler example of this phenomenon. There is the classic fact, which I believe is due to Galileo, that if you have an object that’s a certain size you cannot scale all of its parts proportionally and expect this resized object to behave the same way. Compare a pillar in some cathedral that supports a certain weight, with its matching pillar in the doubled cathedral, which is twice as long and broad and must support a larger weight. If we imagine it is a square pillar then the doubled pillar can be thought of as being made of eight versions of the original pillar bunched together. Since placing one pillar on top of the other doesn’t affect how much weight it can support, the doubled pillar is equivalent to four of the original pillar placed side-by-side, and can support four times as much weight. On the other hand, the structure the doubled pillar is supporting can also be thought of as eight versions of the original structure, so has eight times the weight. So even if the original pillar can support its weight doesn’t mean that the pillar in the twice-as-large cathedral can.
To put it in a more idealized setting, if we consider a cube of some material with side-length L, and let $f(L)$ be its weight and $g(L)$ be the weight it can support, then $f(2L) = 8 f(L)$ and $g(2L) = 4 g(L)$. More important than the actual formulas is how they are derived: by imagining a cube with side-length 2L as being made up of eight cubes with side-length L. In fact, for any length L' much smaller than L we can model the cube of length L as being made up of many smaller cubes of length L'. The smaller L' is, the better the model for the cube is, because it can account for things such as the cube stretching and bending in a very detailed way, or small pieces breaking off of it. However, for this to work, the properties (strength, weight, …) of the cube of length L' need to be set to be compatible with the properties of the cube of length L.
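(A quick numeric check of the argument above, as a Python sketch; the material constants are arbitrary values I've made up, and only the ratios between scales matter:)

```python
def weight(L, rho=1.0):
    return rho * L**3      # weight grows with volume

def capacity(L, sigma=10.0):
    return sigma * L**2    # supportable load grows with cross-sectional area

for L in [1.0, 2.0, 4.0]:
    print(L, weight(2 * L) / weight(L), capacity(2 * L) / capacity(L))
    # Prints 8.0 and 4.0 at every scale: doubling the cathedral octuples the
    # weight each pillar must carry but only quadruples what it can hold.
```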
3. Iliody
Pure speculation, maybe (almost surely wrong):
If really don’t have E=c q_1 q_2/r, but instead you have E=k q_1 q_2/(r^1.3), but you supposed it was like the first law and made an experiment with a typical scale L in which you measured “c”. Then, when you Made again your experiment ay different scales you saw that E≈c q_1 q_2/r -c ln(L’/L)q_1 q_2/r=(c-c ln(r/L)) q_1 q_2 as if your constant c wasn’t constant, but ir was something like c(r)≈c(L)-0.3 c(L) ln(r/L). The thing here is that your theory classically looks as if it behaved well only if you put the first thing, but then Quantum field theory rules moved your theory to behave un the second way.
1. 4gravitons Post author
This is kind of what’s going on, but there’s a key subtlety. In your example, the scientist doesn’t have to keep trying to measure “c”: if they’re smart enough they can figure out the real scaling and measure “k” instead.
The difference is that in quantum field theory, the equivalent “k” would have to be infinite. There just isn’t a “bare constant” that makes sense in theories like QCD. So the “c version” is our only choice in those cases.
1. Iliody
I am not sure if it’s necesarelly Infinity. c and k have different scalling dimensions and c(lenght scale) will diverge if what’s really going on is k-like, but for knowing about k we need a nonperturbative definition on the theory, thing that we ussually don’t have.
In some CFT’s in two dimensions you know that you have that exactly.
I don’t know if that happens in QCD (I find really unlikely that it was the case, because of mass gap and that kind ir things). In N=4 is known what are the two point functions at nonperturbative level?
1. 4gravitons Post author
Ah sure. Yeah, if your theory is integrable or otherwise has a closed-form nonperturbative description then yeah, the analogy is much better. But indeed as you say QCD probably isn’t like that.
At least in planar N=4 the operator dimensions are known nonperturbatively yeah.
4. Schmelzer
Water near the point of boiling. You have already small bubbles of vapor. The average size of these bubbles depends on temperature and increases if one comes close to the boiling temperature. The mathematics to handle this is the mathematics of renormalization.
The point of this type of renormalization is not to get rid of infinities. It is the atomic structure which regularizes the theory.
2. Kevin Zhou
As a phenomenologist, this post sounds odd to me, because it seems to be conflating what could be consistent mathematically with what is true physically. Yes, it’s possible that QCD (or more realistically the SM) could be correct up to very high energy scales, in the sense that it wouldn’t be mathematically inconsistent.
But the point of parametrizing things with cutoff scales is that we don’t know if that actually is the case, in our universe! It is, after all, the point of science to figure out what actually exists. So the fact that QCD has a continuum limit and QED has a Landau pole just isn’t relevant from this point of view. We don’t care about extrapolating up to 10^100 TeV when qualitatively new stuff could appear at 10 TeV. The cutoff is meant to stand in for any number of unknown effects, which could range from just the same old boring scaling, to new particles, to even a breakdown of quantum field theory itself.
1. 4gravitons Post author
I agree, and I’m not saying that we should take QCD seriously as a theory to 10^100 TeV or whatever.
But some theory should be valid up to 10^100 TeV (even if all the theory says is “10^100 TeV doesn’t make sense”). It’s worth thinking about what sort of properties such a theory would need to have. And the impression I get (from people who seem to know Wilsonian EFT more than I do) is that one way such a theory could be is that it could be like QCD. That is, one type of theory that can be valid up to 10^100 TeV and beyond is an asymptotically free theory. Again, that doesn’t mean QCD itself is valid that far, just that when we do have a “final theory” it may well be asymptotically free (or I suppose asymptotically safe). Not the only option, but one of them.
And my point in this post is that’s weird! If the real world really was described by an asymptotically free theory up to arbitrarily high energies, that clashes with our naive intuitions about reductionism pretty heavily.
So does that mean the real world cannot be asymptotically free? I don’t think so, not on the basis of this at least. Does it mean our intuitions about reductionism are wrong? Probably! Does it mean quantum field theory itself is going to break down? Not because of this I don’t think, though it might break down anyway!
And yeah, it’s perfectly fine to approach all this by saying “we never have access to the far UV so who cares?” I don’t object to that attitude. But I think some people will still care, and it’s worth thinking about what to tell them.
3. clayton
I’m not sure there’s really any weirdness — there is no energy E at which a sufficiently precise experiment would be compatible with alpha_s(E) = 0. You’d always measure some deflection if you fired two quarks at one another from asymptotically far away (for example). Such an energy simply does not exist, given the input that alpha_s=4pi at some other energy. Thus, alpha_s always has somewhere to run from. Perhaps there’s an ambiguity about what “asymptotically far away” and “some deflection” mean in my sentence above, but those are IR questions, not UV ones, and those need to be dealt with to create a sensible S-matrix at any energy (which presumably, you’ve done if you’re talking about scattering and so forth)
1. 4gravitons Post author
I think that’s part of the weirdness though: yes, at any given energy, there’s a finite higher energy to run there from. The weirdness is that there’s no “final energy”, no lowest-level theory that your low-energy theories “reduce to”. It’s just an infinite (continuous) chain of one theory reducing to another. There’s nothing illogical or impossible about that, but it’s weird! I think it’s very much not something the average person would have expected to be possible.
1. clayton
Not trying to be obtuse, but: what is the alternative that you think is most satisfying? QED runs to a Landau pole, but maybe it's just electronballs behaving like billiard balls at higher energy. Imagine QCD turning into nuclear physics (which itself is in some sense "residual" QCD forces between nucleons), just reversed as a function of energy — it doesn't have to be something "heartier" like string theory (or the physics that underlies chemistry) up there to resolve the Landau pole. It might "just be". This, too, is probably not "maximally classic reductionism," but we don't know which is the answer until we run an experiment at a sufficiently high energy. One case is equivalent to saying "there are a few constants we can't predict (like the electronball mass gap or electronball spectroscopic splittings)," while the other is saying "we were totally misled by what appeared to be simple organizing principles at low energy, which have very different interpretations at high energies."
Having a theory that is replaced (in a predictable way) by the same theory is a little simpler than either of these, but I think one way or another this points to the fact that there’s a continuum (or, if you prefer: “a wide range”) of possibility for UV completions. Perhaps this only reinforces your claim that the real picture is “very much not something the average person would have expected to be possible,” but I think you overestimate “what the average person would have expected to be possible” in the sense that the “average person” probably doesn’t even know that chemistry is derived from physics. I’d wager that understanding thermodynamic limits already puts someone in sufficiently rarefied company that they can accept that EFT is complicated 🙂
(As an unrelated quibble, let me say also that I think that your definition “Reductionism is the claim that theories on larger scales reduce to theories on smaller scales” uses “reduce to” in exactly the opposite way that most phenomenologists use it. To go to high energies is to “complete” the theory; to go to low energies is to “reduce” the theory. Think about the a- or c-theorems: the function monotonically decreases as energy decreases, which sounds like a reduction to me…)
1. 4gravitons Post author
There’s two uses of the word reduction that are clashing here, yes, and I’m leaning on the philosophical one (though I’ll admit I’ve never heard of going to the IR as “reducing” a theory, but maybe I just don’t talk to enough phenomenologists).
I think I see what you’re getting at here, but let me know if I’m misunderstanding. You’re saying that, if there are going to be some “physical accidents” in your theory anyway (value of the coupling, mass gaps, etc.) that can’t be explained by a more fundamental theory, then it shouldn’t really matter whether they’re continuously varying with scale (like QCD) or whether there is a final scale where they are defined (“electronballs”). Either way we have some high-energy theory that has unexplained properties.
In my view there is a meaningful difference, and it is that you can define your electronball theory without RG. That is, you can specify all the theory’s parameters without adding an artificial cutoff scale. The theory you get won’t necessarily be useful at all scales, but at least in principle physics at any scale would be derivable from a scale-agnostic theory.
That’s not true of QCD. You can’t define QCD without adding in a cutoff scale. If you don’t, you have no way of specifying the coupling. The theory depends on what is essentially a subjective quantity, the scale at which you choose to cut off your description.
I don’t blame you for being comfortable with that, I think as physicists have learned to accept it. But I think most people would find it strange, in the same way they find quantum mechanics strange. And I find it interesting that we don’t emphasize this when we talk about physics, in the same way we emphasize the weirdness of say quantum mechanics.
(As an aside: yes, the average person doesn’t care about physics at all. That’s a fully general argument against any point about “intuition” at all. For now, maybe it’s better to think about an “average reader of this blog”. I think my non-physicist readers would find this behavior of QCD interesting and weird, even if the literal average person wouldn’t.)
1. clayton
So, I guess where we part ways is the statement “you can define your electronball theory without RG” — why do you think that’s true? There are some (dimensionful) “high energy constants” in that theory, but there’s also running couplings that have threshold corrections and the whole works. The nuclear EFT, which can be defined at many scales (a “full” EFT, a pionless one, etc.), certainly has these properties (and they’re certainly not fully understood! RGEs in the nuclear EFT are an area of active research).
Perhaps QCD feels weird because, despite going to higher energies, the degrees of freedom remain the same — I think that's ultimately the difference between QCD and electronball (or electronstring) theory: it just so happens that with QCD we've identified good degrees of freedom at what feels like a low scale. QED has a definite cutoff beyond which the degrees of freedom change to those of electronball/electronstring theory (whichever one is verified at the Nth collider after this one, I'm agnostic until then 🙂 ), but what that means to me is that when you say QCD "depends on what is essentially a subjective quantity, the scale at which you choose to cut off your description", I'd say "so does QED" — you need to know how far you are from the Landau pole to know how good your calculation of electron scattering angles can possibly be, since there will be fractional corrections of order s/Lambda_Landau^2. This is why the prediction of the muon's g-2 is less precise than the electron's — schematically, alpha_EM alpha_s m_mu^2/Lambda_QCD^2 is a larger number than alpha_EM alpha_s m_e^2/Lambda_QCD^2, and hadronic matrix elements can't be ignored in the muon calculation even though they can be in the electron one. If QCD-charged particles didn't couple to the photon (say), then we would only have fractional uncertainties of order alpha_EM alpha_s m_mu^2/v_EW^2, which is quite a bit smaller. Ergo, the "scale at which you choose to cut off your description" matters here, too.
Let me say: I appreciate the back and forth, I’m glad you’re learning about EFT, and I’m by no means an expert in all this, so please keep at it if I’m being unclear or inconsistent!
1. 4gravitons Post author
Likewise, glad for the enlightening discussion!
I think in retrospect I was confused by whether in your last comment you were literally thinking of the electronballs behaving like (classical? quantum?) billiard balls, or whether you were thinking of a distinct QFT taking over there. Literal classical billiard balls can certainly be defined without RG: you still might find RG useful for handling them, but you can specify all the "parameters of your theory" without it. I agree if you ended up with something more analogous to nuclear EFT then you likely can't avoid RG, so my point is closer to "QFT is weird" than "specifically QCD is weird". The latter just felt like a cleaner example.
That said, I think your argument about precision is missing the point. Yes, you don’t want to do perturbation theory in the bare coupling, even if you had one. That’s a good reason to use RG in practice. But when I pointed that out in a post explaining running couplings a while back, I got some (deserved!) pushback. RG isn’t just some pragmatic thing we do to make sure our perturbation theory works, in many cases it’s part of how the nonperturbative theory is defined. In the context of this post, the need to use RG to do perturbation theory isn’t particularly strange: you’re using something subjective (cutoff scale) to do something else subjective (get a good approximation under certain experimental conditions). What’s strange is if subjectivity is baked into the physics at its core, so that even if you could calculate everything exactly and nonperturbatively you’d need to add an arbitrary choice of scale.
1. clayton
Ah, sorry — by “like billiard balls” I just meant “without a long-range force.” But I didn’t mean for that to be taken too literally. Anything could happen above the Landau pole; we’d need to do experiments to figure out the right theory up there!
Anyway, I’m still not getting the point you’re trying to make. I simply don’t see a qualitative difference between QED/electronballology and QCD/nuclear physics. QED gives us an unambiguous Landau pole, QCD gives us an unambiguous strong coupling scale. The centrality of that energy doesn’t seem (to me) to be subjective in either theory. In some sense there’s a direction of RG flow that makes asymptotic freedom feel different, but in both cases there’s a strong coupling energy and simple prescription for what you do away from that energy.
1. 4gravitons Post author
There’s a particular loophole I was worried about that it sounds like you might be invoking, so to clarify: are you saying that once you specify the strong-coupling scale, QCD is unambiguous? I.E., there are no other free parameters, and the only difference between QCD with different couplings is the choice of units you use for energy?
If that’s the case, I agree that my argument doesn’t work. If it is, is the same true for QCD with massive quarks? I would think that at some point you’d need to consider a theory with multiple scales and there would be a free parameter that wouldn’t just come down to a choice of units.
If that’s not what you were getting at (or in a sufficiently complicated theory if QCD doesn’t work for this), then I’d argue that the direction of the flow actually does matter, at least if we still care about philosophical reductionism. If we do, then we expect parameters at low energy to be determined by parameters at high energy, and not the reverse. Even if we can do the math either way, one direction has a causal meaning that the other doesn’t.
Again, this may well mean we shouldn’t care about philosophical reductionism. It also may just mean, as I comment in the post, that it’s wrong to think about EFT in terms of low-energy scale EFTs philosophically reducing to high-energy scale EFTs, and that really we should be thinking of reductionism in terms of non-QFT theories reducing to QFT, and not of different scales of the same EFT reducing to each other.
1. clayton
To answer the question as cleanly as possible (and according to my best understanding, which, again, may not be perfect): once you specify Lambda_QCD and the numbers and masses of quarks*, then QCD is unique. It just so happens that QCD appears “simple” at energies above strong coupling and QED appears “simple” at energies below; the “aboveness or belowness” is determined by the gauge group and matter content, but each is unique, and (to me) neither is weirder or less scale-sensitive than the other. Interestingly, going to more supersymmetric QFTs typically allows one to formulate a duality between theories with different matter and gauge content such that there’s a “simple” calculation at any scale (one theory is weakly coupled when the other is strongly coupled) — this, I think, is where some of the truest “weirdness” (or beauty) of QFT lurks 🙂
*you need to know the masses to know when to add threshold corrections and alter the running
1. 4gravitons Post author
Hmm. Ok, so is one way to think about this that a theory is defined by the shape of its RG trajectories? So I draw some curve for the behavior of the coupling, it diverges at some point and its running changes at other points, and the ratio between the point where it diverges and the point its running changes is the ratio between Lambda_QCD and one of the quark masses?
I think that works from one of the perspectives I was describing: if you call QCD itself a “theory”, then you’re fine: you have a well-defined theory, and theories at lower energies (pion models, chemistry…) (philosophically) reduce to that theory.
It’s still confusing from the other perspective, where the individual theories are things like “QCD at scale 1 TeV” instead of “QCD as a whole”. From that perspective, you ask “why does the coupling diverge at Lambda_QCD?” and the answer is “because the coupling was X at Y energy above Lambda_QCD”, and you get the weird infinite regress of “why X? Because Y” that was bugging me earlier. (Again, maybe that means this perspective is just wrong!)
Overall, there’s something deeper that bugs me, which is the statement that you need RG to define (not just use in practice, but define) any of these theories in the first place. But maybe I’m mistaken about that, and you can imagine one of these theories just defined by all its correlation functions with no mention of RG?
I agree that there’s a lot of weirdness once duality enters the picture, but on a certain level (at least from my perspective), it’s not as weird, because the theories related are usually CFTs, or otherwise theories without UV divergences (and often, integrable theories). So you’ve got two perspectives, one where the coupling is strong and the other where it is weak, but ultimately you can think about the theory as defined by correlation functions with some parameters, and the duality just corresponding to how you interpret the “meaning” of those parameters and correlation functions.
2. clayton
I think I’d agree that “a theory is defined by the shape of its RG trajectories,” but if you want to literally define a theory which is “QCD at scale 1 TeV” instead of “QCD as a whole”, then you are no longer allowed to ask “why does the coupling diverge at Lambda_QCD?” — you only have one energy, which is 1 TeV. The coupling is well-defined for this “candidate theory,” but you can’t access Lambda_QCD. Only “QCD as a whole” allows you to run between energies and ask questions about different energies
As for “Overall, there’s something deeper that bugs me, which is the statement that you need RG to define (not just use in practice, but define) any of these theories in the first place. But maybe I’m mistaken about that, and you can imagine one of these theories just defined by all its correlation functions with no mention of RG?” — I’ll hazard that an alternate “complete” definition of a theory is via its correlation functions as well as every operator’s (exact) scaling dimension (which in a weakly coupled limit tells you how it behaves under RG evolution), but now I’m really out of my element 🙂
3. 4gravitons Post author
Wouldn’t you still be able to access Lambda_QCD from the “QCD at scale 1 TeV” theory, though? The higher-energy theories on the trajectory should have all the information about what happens at lower energies (barring a version of a Landau pole that presents an essential singularity like Hossenfelder was speculating about).
Fair enough about correlation functions+scaling dimensions. In CFT indeed all you need is the dimension of each operator and the structure constants (both as exact functions of the coupling). But I had the impression that there’s an issue with this for non-CFTs. I’m out of my element there too so I may just be misremembering!
4. clayton
I was just being pedantic about accessing Lambda_QCD from 1 TeV — if you define a theory in which sqrt(s) = 10 TeV but all other facts about QCD obtain, then, well: this is a theory where sqrt(s) is always 10 TeV. You can change the number of incoming states that you prepare when you run an experiment, perhaps, and you can speculate about an alternate universe where sqrt(s) sometimes reaches as low as Lambda_QCD, but sqrt(s) for you is always 10 TeV, so you’ll never test those speculations. But, yes, you’d get the right energy scale when you (and your 10 TeV pencil) did the calculation 🙂
5. 4gravitons Post author
Ok, now I’m confused. I had the impression that the “scale” of a theory wasn’t literally sqrt(s) (after all, if you aren’t computing a scattering amplitude you don’t even have an “s”!) Rather, it was a cutoff scale that characterizes how you’ve coarse-grained. Calculating things at other energies will introduce large logarithms and make your perturbation theory worse, but the theory isn’t inapplicable at those energies, it’s just less useful. Am I mistaken?
6. clayton
Ah — I misunderstood. Yes, what you’ve written is correct: there are higher-twist operators and so forth which would seem to blow up beyond X TeV, but which you can calculate with at any energy below X TeV. I thought you were describing something narrower — indeed, you’re using totally standard terminology, I just misinterpreted. Carry on 🙂
4. Andrew Oh-Willeke
I’m not sure that I buy your line of reasoning in the case of QCD.
Generally speaking, the theory that most people would think about reducing to QCD would be effective theories of nuclear physics involving atoms made up of multiple nucleons.
In theory, you can explain the behavior of these atoms predominantly with QCD from first principles, with a little sprinkling of electroweak theory. But, in practice, we have a hard enough time explaining the behavior of single hadrons, or two to four hadrons, with QCD from first principles without resorting to numerical or analytical approximations, because the math involved in QCD is so hard (in part because it is a strong non-abelian force in which gluons interact with each other, unlike photons, which carry an abelian force without self-interactions, and the weak force, which usually gets summarized as a black-box set of input-output matrices).

So, instead we resort to a phenomenological residual nuclear force inspired by our qualitative understanding of QCD to understand bigger atoms (hadron compounds made of hadrons other than nucleons don't seem to be observed at scales larger than meson and baryon molecules with a couple of hadrons in a non-confined system anyway, and quark-gluon plasma can be thought of as a single special case, with the mix of quarks in it and the temperature as parameters for that special case). At the most basic level, the phenomenology is done crudely by estimating, from measured atomic isotope masses, the nuclear binding energy in an atomic isotope, to determine the exothermic or endothermic properties of possible nuclear fusion and nuclear fission reactions involving them. At a more advanced level, we model it as one or several Yukawa forces mediated by several massive but light mesons, mostly pions.
Likewise, at the next discrete level, we then turn to chemistry and condensed matter physics, which is dominated by underlying QED interactions, to understand molecules and ionic compounds, as opposed to single atoms, since the residual nuclear force and weak force are rarely important in understanding chemistry and condensed matter physics at this scale.
In basic structure (e.g. in path integrals and boson propagators), QED and QCD are very similar, except for the self-interaction terms and the fact that the QCD infinite series require far more terms than the parallel infinite series in QED to produce meaningful results. So, I'm a bit puzzled why QED or weak force theory would be singled out in the same way as a mere effective field theory relative to QCD.
Also, while QCD is, in theory, generalizable up to arbitrary scales, in practice the twin boundaries of confinement and asymptotic freedom tightly define the scale at which it applies, in the context of the fundamental masses (subject to only modest renormalization in most circumstances) of the hadronizing quarks (i.e. u, d, s, c and b), with confinement preventing pure, direct gluon-quark interactions between hadrons from being important, and asymptotic freedom preventing very small distance QCD interactions from being very interesting.

Further, above the roughly 1 GeV temperature scale, you get a phase transition into quark-gluon plasma and a discrete change in the effective phenomenological outcome into the QGP special case, with one vector parameter representing the relative proportions of different quark types and one scalar parameter representing temperature, as a very precise first-order approximation.

It is still stunning how much structure can arise from the naively very simple rules of QCD and its modest number of experimentally determined parameters. But I also don't know how much time I've squandered puzzling over why the universe has scores of hadrons and hadron molecules that somehow end up boiling down to two kinds of baryons (the proton and the neutron), and a few light mesons to carry the residual nuclear force, for all but the first few moments after the big bang, plus some isolated and sporadic interactions that take mere moments in the history of the universe, in particle colliders and perhaps in supernovae, in and around black holes, neutron stars, and perhaps a few other hypothetical stellar creatures like quark stars. How did we end up with a universe that gets so little mileage out of such a simple yet rich fundamental theory like QCD?
1. 4gravitons Post author
Actually deriving the full behavior of a theory on large scales from its small-scale properties is almost always out of reach, and QCD is indeed no exception. What’s bothering me in this post is less what we can calculate in practice, and more what our theories determine in principle. If the world was QCD, what would that mean? Is QCD the kind of thing the world can be?
And with that in mind, it bothers me that once we get down to our supposedly most fundamental theory, it’s not actually one theory. It’s a series of theories, each one at a different scale, stretching off to the (infinite) limit of asymptotic freedom. And it’s not clear that we can define it in any other way, not just in practice but perhaps in principle. That’s weird!
(By the way, I think you’re conflating “QCD becomes QGP at high energies” with “QCD isn’t described by field theory at high energies”. QCD collectively behaves as QGP at high energies, but an individual quark inside QGP is basically as close to nice perturbative QCD behavior as you can get.)
https://www.physicsforums.com/threads/beginner-calculus-help.646540/

# Beginner calculus help
1. Oct 23, 2012
### Murph84
I realize this question will probably be too easy for people who are good at it.
But I can't figure this out.
I'm trying to find the limit for
lim x→7 (sin(x − 7))/(x² + 2x − 63)
Sorry, I don't know how to make it look like the proper equation.
Thanks for any help....
2. Oct 23, 2012
### lanedance
welcome to PF!
now what have you tried, or what relevant equations/theorems do you know?
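(For later readers, a hedged sketch of the standard route: factor the denominator and use $\lim_{u \to 0} \sin u / u = 1$.)

$$\lim_{x\to 7}\frac{\sin(x-7)}{x^2+2x-63}=\lim_{x\to 7}\frac{\sin(x-7)}{(x-7)(x+9)}=\lim_{x\to 7}\frac{\sin(x-7)}{x-7}\cdot\frac{1}{x+9}=1\cdot\frac{1}{16}=\frac{1}{16}$$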
http://mathhelpforum.com/calculus/128602-differentiate.html

1. ## Differentiate
Differentiate this function f(x) = (4x)/(sqrt(2x)).
I know I use the quotient rule to differentiate this, this is what I have done so far:
(2x)^1/2 (0) - (4) [1/2(2x)^-1/2] divided by (2x)
0 - 2(2x)^-1/2 divided by (2x)
-(2x)^-1/2 divided by x
-2 divided by x^1/2 * x
-2 divided by sqrtx^3
Please tell me if something here is confusing; I don't know how to use LaTeX, so I will try to show it better if needed.
Thanks
2. Faster than that is to note that $f(x)=\dfrac{4}{\sqrt{2}}\cdot \dfrac{x}{\sqrt{x}}$
3. But what I have done is not right. The answer given is (2)/(sqrt(2x)).
4. Originally Posted by Nacho
Faster than that is to note that $f(x)=\dfrac{4}{\sqrt{2}}\cdot \dfrac{x}{\sqrt{x}}$
Nacho is pointing out that his approach and hint will work better. Instead, split it into a constant ( $\frac{4}{\sqrt 2}$) multiplied by a term in x ( $\frac{x}{\sqrt x}=\sqrt x$)
So then you can just use the power rule. So before differentiating you have changed the function to $y=\frac{4\sqrt x}{\sqrt2}$
Can you differentiate that more easily?
5. OK, I don't understand why you multiply by zero in the first line; the quotient rule says $\left( \frac{h}{g} \right)' = \frac{h'g - hg'}{g^2}$

In your example $h=4x$ and $g=\sqrt{2x}$

With my way it is easy: $f(x) = \frac{4}{\sqrt{2}} \cdot x^{1/2} \Rightarrow f'(x) = \frac{4}{\sqrt{2}} \cdot \frac{1}{2} \cdot x^{-1/2} = \frac{\sqrt{2}}{\sqrt{x}}$
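(For completeness, a sketch of the quotient-rule route the original poster attempted, keeping $h=4x$ so that $h'=4$ rather than $0$:)

$$f'(x)=\frac{4\sqrt{2x}-4x\cdot\frac{1}{\sqrt{2x}}}{2x}=\frac{8x-4x}{(2x)^{3/2}}=\frac{4x}{(2x)^{3/2}}=\frac{2}{\sqrt{2x}}$$

which agrees with the given answer.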
6. Originally Posted by Nacho
OK, I don't understand why you multiply by zero in the first line; the quotient rule says $\left( \frac{h}{g} \right)' = \frac{h'g - hg'}{g^2}$

In your example $h=4x$ and $g=\sqrt{2x}$

With my way it is easy: $f(x) = \frac{4}{\sqrt{2}} \cdot x^{1/2} \Rightarrow f'(x) = \frac{4}{\sqrt{2}} \cdot \frac{1}{2} \cdot x^{-1/2} = \frac{\sqrt{2}}{\sqrt{x}}$
Just pointing out simplification
$\frac{4}{\sqrt{2}}\cdot \frac{1}{2}\cdot x^{-\frac{1}{2}}=\frac{2}{\sqrt{2}}\cdot x^{-\frac{1}{2}}=\frac{2}{\sqrt{2x}}$
7. Originally Posted by Keithfert488
Just pointing out an error in simplification
$\frac{4}{\sqrt{2}}\cdot \frac{1}{2}\cdot x^{-\frac{1}{2}}=\frac{2}{\sqrt{2}}\cdot x^{-\frac{1}{2}}=\frac{2}{\sqrt{2x}}$
I just continue the simplification
$\frac{2}{\sqrt{2}} = \frac{\sqrt{2}\,\sqrt{2}}{\sqrt{2}} = \sqrt{2}$
8. Originally Posted by Keithfert488
Just pointing out an error in simplification
$\frac{4}{\sqrt{2}}\cdot \frac{1}{2}\cdot x^{-\frac{1}{2}}=\frac{2}{\sqrt{2}}\cdot x^{-\frac{1}{2}}=\frac{2}{\sqrt{2x}}$
Your solution and his solution are equal.
9. Originally Posted by drumist
Your solution and his solution are equal.
Wow. I feel stupid. Well...I was doing it so we can see it through to the answer that he gave us.
10. lols thanks guys.
http://www.mathworks.com/help/matlab/ref/gamma.html?s_tid=gn_loc_drop&requestedDomain=www.mathworks.com&nocookie=true

Documentation
gamma
Gamma function
Syntax
`Y = gamma(X)`
Description
`Y = gamma(X)` returns the gamma function evaluated at the elements of `X`.
Examples
Evaluate the gamma function with a scalar and a vector.
Evaluate `gamma(0.5)`, which is equal to `sqrt(pi)`.
`y = gamma(0.5)`
`y = 1.7725`
Evaluate several values of the gamma function between `[-3.5 3.5]`.
```
x = -3.5:3.5;
y = gamma(x)
```

```
y =

  Columns 1 through 7

    0.2701   -0.9453    2.3633   -3.5449    1.7725    0.8862    1.3293

  Column 8

    3.3234
```
Plot the gamma function and its inverse.
Use `fplot` to plot the gamma function and its inverse. The gamma function increases quickly for positive arguments and has simple poles at all negative integer arguments (as well as 0). The function does not have any zeros. Conversely, the inverse gamma function has zeros at all negative integer arguments (as well as 0).
```
fplot(@gamma)
hold on
fplot(@(x) 1./gamma(x))
legend('\Gamma(x)','1/\Gamma(x)')
hold off
grid on
```
Input Arguments
Input array, specified as a scalar, vector, matrix, or multidimensional array. The elements of `X` must be real.
Data Types: `single` | `double`
Gamma Function
The `gamma` function is defined for real `x > 0` by the integral:
$\Gamma(x) = \int_0^\infty e^{-t}\, t^{x-1}\, dt$
The `gamma` function interpolates the `factorial` function. For integer `n`:
`gamma(n+1) = factorial(n) = prod(1:n)`
The domain of the `gamma` function extends to negative real numbers by analytic continuation, with simple poles at the negative integers. This extension arises from repeated application of the recursion relation
$\Gamma(n-1) = \dfrac{\Gamma(n)}{n-1}.$
Algorithms
The computation of `gamma` is based on algorithms outlined in [1]. Several different minimax rational approximations are used depending upon the value of `A`.
References
[1] Cody, J., An Overview of Software Development for Special Functions, Lecture Notes in Mathematics, 506, Numerical Analysis Dundee, G. A. Watson (ed.), Springer Verlag, Berlin, 1976.
[2] Abramowitz, M. and I.A. Stegun, Handbook of Mathematical Functions, National Bureau of Standards, Applied Math. Series #55, Dover Publications, 1965, sec. 6.5.
https://www.lmfdb.org/GaloisGroup/16T34

# Properties
Label: 16T34
Order: $32$
n: $16$
Cyclic: No
Abelian: No
Solvable: Yes
Primitive: No
$p$-group: Yes
Group: $C_2^2:D_4$
## Group action invariants
Degree $n$: $16$
Transitive number $t$: $34$
Group: $C_2^2:D_4$
Parity: $1$
Primitive: No
Nilpotency class: $2$
Generators: (1,11,16,13)(2,12,15,14)(3,9,5,7)(4,10,6,8), (1,15)(2,16)(3,5)(4,6)(7,8)(9,10), (1,10,16,8)(2,9,15,7)(3,12,5,14)(4,11,6,13)
$|\Aut(F/K)|$: $4$
## Low degree resolvents
$|G/N|$: Galois groups for stem field(s)
2: $C_2$ x 7
4: $C_2^2$ x 7
8: $D_{4}$ x 4, $C_2^3$
16: $D_4\times C_2$ x 2, $Q_8:C_2$
Resolvents shown for degrees $\leq 47$
## Subfields
Degree 2: $C_2$ x 3
Degree 4: $C_2^2$, $D_{4}$ x 4
Degree 8: $D_4\times C_2$ x 2, $Q_8:C_2$
## Low degree siblings
16T34, 16T43 x 2, 32T20
Siblings are shown with degree $\leq 47$
A number field with this Galois group has no arithmetically equivalent fields.
## Conjugacy Classes
| Cycle Type | Size | Order | Representative |
| --- | --- | --- | --- |
| $1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1$ | $1$ | $1$ | $()$ |
| $2, 2, 2, 2, 2, 2, 1, 1, 1, 1$ | $4$ | $2$ | $(3,4)(5,6)(7,9)(8,10)(11,14)(12,13)$ |
| $2, 2, 2, 2, 2, 2, 2, 2$ | $1$ | $2$ | $(1,2)(3,4)(5,6)(7,8)(9,10)(11,12)(13,14)(15,16)$ |
| $2, 2, 2, 2, 2, 2, 2, 2$ | $2$ | $2$ | $(1,3)(2,4)(5,16)(6,15)(7,13)(8,14)(9,11)(10,12)$ |
| $4, 4, 4, 4$ | $4$ | $4$ | $(1,3,2,4)(5,15,6,16)(7,11,8,12)(9,13,10,14)$ |
| $2, 2, 2, 2, 2, 2, 2, 2$ | $2$ | $2$ | $(1,5)(2,6)(3,16)(4,15)(7,11)(8,12)(9,13)(10,14)$ |
| $4, 4, 4, 4$ | $2$ | $4$ | $(1,7,16,9)(2,8,15,10)(3,13,5,11)(4,14,6,12)$ |
| $2, 2, 2, 2, 2, 2, 2, 2$ | $4$ | $2$ | $(1,7)(2,8)(3,14)(4,13)(5,12)(6,11)(9,16)(10,15)$ |
| $4, 4, 4, 4$ | $2$ | $4$ | $(1,8,16,10)(2,7,15,9)(3,14,5,12)(4,13,6,11)$ |
| $4, 4, 4, 4$ | $2$ | $4$ | $(1,11,16,13)(2,12,15,14)(3,9,5,7)(4,10,6,8)$ |
| $4, 4, 4, 4$ | $4$ | $4$ | $(1,11,2,12)(3,10,4,9)(5,8,6,7)(13,15,14,16)$ |
| $4, 4, 4, 4$ | $2$ | $4$ | $(1,12,16,14)(2,11,15,13)(3,10,5,8)(4,9,6,7)$ |
| $2, 2, 2, 2, 2, 2, 2, 2$ | $1$ | $2$ | $(1,15)(2,16)(3,6)(4,5)(7,10)(8,9)(11,14)(12,13)$ |
| $2, 2, 2, 2, 2, 2, 2, 2$ | $1$ | $2$ | $(1,16)(2,15)(3,5)(4,6)(7,9)(8,10)(11,13)(12,14)$ |
## Group invariants
Order: $32=2^{5}$
Cyclic: No
Abelian: No
Solvable: Yes
GAP id: [32, 28]
Character table:

```
    2  5  3  5  4  3  4  4  3  4  4  3  4  5  5

       1a 2a 2b 2c 4a 2d 4b 2e 4c 4d 4e 4f 2f 2g
    2P 1a 1a 1a 1a 2b 1a 2g 1a 2g 2g 2b 2g 1a 1a
    3P 1a 2a 2b 2c 4a 2d 4b 2e 4c 4f 4e 4d 2f 2g

X.1     1  1  1  1  1  1  1  1  1  1  1  1  1  1
X.2     1 -1  1 -1  1 -1 -1  1 -1  1 -1  1  1  1
X.3     1 -1  1 -1  1 -1  1 -1  1 -1  1 -1  1  1
X.4     1 -1  1  1 -1  1 -1  1 -1 -1  1 -1  1  1
X.5     1 -1  1  1 -1  1  1 -1  1  1 -1  1  1  1
X.6     1  1  1 -1 -1 -1 -1 -1 -1  1  1  1  1  1
X.7     1  1  1 -1 -1 -1  1  1  1 -1 -1 -1  1  1
X.8     1  1  1  1  1  1 -1 -1 -1 -1 -1 -1  1  1
X.9     2  .  2  2  . -2  .  .  .  .  .  . -2 -2
X.10    2  .  2 -2  .  2  .  .  .  .  .  . -2 -2
X.11    2  . -2  .  .  . -2  .  2  .  .  . -2  2
X.12    2  . -2  .  .  .  2  . -2  .  .  . -2  2
X.13    2  . -2  .  .  .  .  .  .  A  . -A  2 -2
X.14    2  . -2  .  .  .  .  .  . -A  .  A  2 -2

A = -2*E(4) = -2*Sqrt(-1) = -2i
```
http://link.springer.com/article/10.1134%2FS1061934811040071

Volume 66, Issue 6, pp 618–622
Date: 10 Jun 2011
# Study on the interaction of DNA with resveratrol by resonance light scattering technique and its analytical application
## Abstract
The interaction between resveratrol and DNA has been studied by resonance light scattering (RLS) technique. In strongly acidic solution, resveratrol has a maximum peak at 368 nm and the RLS intensity is remarkably enhanced by trace amounts of DNA due to its interaction with resveratrol. Based on this, a novel assay for nucleic acids has been developed. The characteristics of RLS, fluorescence and UV-VIS absorption spectra, the influential factors and optimum conditions of the reaction have been studied. The enhanced RLS intensity at 368 nm is proportional to the concentration of DNA within the range of 0–1600 μg/L for calf thymus DNA. The determination limit (3σ) is 5.2 ng/mL. The study of foreign substance effect on the determination of DNA indicates that most of metal ions have little effect on the determination of DNA. Three synthetic samples of DNA were analysed with satisfactory results. The results show that the proposed method is very sensitive, convenient, rapid and reproducible.
https://cstheory.stackexchange.com/questions/34877/complexity-of-the-homomorphism-problem-parameterized-by-treewidth

# Complexity of the homomorphism problem parameterized by treewidth
The homomorphism problem $\text{Hom}(\mathcal{G}, \mathcal{H})$ for two classes $\mathcal{G}$ and $\mathcal{H}$ of graphs is defined as follows:
Input: a graph $G$ in $\mathcal{G}$, a graph $H$ in $\mathcal{H}$
Output: decide if there is a homomorphism from $G$ to $H$, i.e., a mapping $h$ from the vertices of $G$ to those of $H$ such that, for any edge $\{x, y\}$ of $G$, $\{h(x), h(y)\}$ is an edge of $H$.
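(To fix intuition, here is a naive brute-force test of the definition, as a Python sketch. It enumerates all $|V(H)|^{|V(G)|}$ maps, so it is exponential in the size of $G$ and says nothing about the parameterized question below; it is purely illustrative.)

```python
from itertools import product

def has_hom(G_vertices, G_edges, H_vertices, H_edges):
    """Return True iff there is a graph homomorphism from G to H (undirected)."""
    H_adj = set(H_edges) | {(v, u) for (u, v) in H_edges}  # symmetrize edges
    for image in product(H_vertices, repeat=len(G_vertices)):
        h = dict(zip(G_vertices, image))
        if all((h[u], h[v]) in H_adj for (u, v) in G_edges):
            return True
    return False

# A triangle maps homomorphically onto a triangle (e.g. via the identity map):
print(has_hom([0, 1, 2], [(0, 1), (1, 2), (2, 0)],
              [0, 1, 2], [(0, 1), (1, 2), (2, 0)]))  # True
```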
For each $k \in \mathbb{N}$, I will call $\mathcal{T}_k$ the class of the graphs of treewidth at most $k$. I'm interested in the problem $\text{Hom}(\mathcal{T}_k, \mathcal{T}_k)$, which I see as a parameterized problem (by the treewidth bound $k$). My question is: what is the complexity of this parameterized problem? Is it known to be FPT? or is it W[1]-hard?
Here are some things that I found about the $\text{Hom}$ problem, but which do not help me answer the question. (I write $-$ for the class of all graphs.)
• http://www.sciencedirect.com/science/article/pii/009589569090132J: If $\mathcal{H}$ is bipartite then $\text{Hom}(-, \mathcal{H})$ is in PTIME, otherwise it is NP-complete, but of course the NP-hardness relies on allowing arbitrary $G$.
• http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.9013&rep=rep1&type=pdf: If the treewidth of $\mathcal{G}$ (modulo homomorphic equivalence) is bounded by a constant then $\text{Hom}(\mathcal{G}, -)$ is in PTIME (and otherwise it isn't, assuming FPT != W[1]). Hence, in particular my problem $\text{Hom}(\mathcal{T}_k, \mathcal{T}_k)$ is in PTIME for fixed $k$, but this doesn't tell me what is the dependency on the parameter.
• From Flum and Grohe's book Parameterized Complexity Theory, Corollary 13.17: The problem $\text{Hom}(\mathcal{T}_k, -)$ is FPT when parameterized by the size of $G$ (but I am parameterizing by the treewidth)
• http://users.uoa.gr/~sedthilk/papers/homo.pdf, Corollary 3.2: When fixing a specific graph $H$, the problem $\text{Hom}(\mathcal{T}_k, \{H\})$, parameterized by k, is FPT (this even holds for more complicated counting variants), but I do not want to restrict to fixed $H$.
http://mathhelpforum.com/calculus/12542-application-2-a.html

1. ## application 2
problem: Two posts, one 3 meters high and the other 6 meters high, stand 10 meters apart. They are to be stayed by wires attached to a single stake at ground level, the wires running to the tops of the posts. Where should the stake be placed to use the least amount of wire?
2. Hello, cazimi!
This is one of the more unpleasant problems . . .
Two posts, one 3m high and the other 6m high, stand 10m apart.
They are to be stayed by wires attached to a single stake at ground level,
the wires running to the tops of the posts.
Where should the stake be placed to use the least amount of wire?
Code:
*C
* |
* |
* |
A* * | 6
| * * |
3 | * * |
| * * |
B* - - - - - * - - - - - - - *D
: x P 10-x :
AB is the 3-meter pole; CD is the 6-meter pole.
They are 10 meters apart: BD = 10.
The wire runs from A to a point P on the ground, then up to C.
Let x = BP, then PD = 10 - x.
From the right triangles (and Pythagoras), we have:

AP = √(x² + 3²), CP = √((10 - x)² + 6²)

The length of the wire is: L = √(x² + 9) + √((10 - x)² + 36)
And that is the function you must minimize.
I'll wait in the car . . .
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
There is a very clever geometric approach to this problem
. . which eliminates the need for all that Calculus.
I'll let someone else explain it.
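For the record, a sketch of the calculus finish (the geometric trick yields the same condition):

$$L'(x)=\frac{x}{\sqrt{x^2+9}}-\frac{10-x}{\sqrt{(10-x)^2+36}}=0$$

says the two wires meet the ground at equal angles, hence $\dfrac{3}{x}=\dfrac{6}{10-x}$, giving $x=\dfrac{10}{3}$ meters from the foot of the shorter post.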
https://tex.stackexchange.com/questions/216212/how-may-i-align-the-following-set-of-equations-as-represented-in-the-image | # How may I align the following set of equations as represented in the image?
If the IEEEeqnarray environment is not best suited to this particular alignment, what would be the best environment to use?
\documentclass[11pt,a4paper]{article}
\usepackage{blindtext}
\usepackage{mathtools}
\usepackage{IEEEtrantools}
\begin{document}
\begin{IEEEeqnarray*}{rCl}
(x-a)(x-2a)(x-3a)(x-4a)=x^{4}-P_{1}x^{3}+P_{2}x^{2}-P_{3}x+P_{4}.\\
\shortintertext{Here}
P_{1}&=&a+2a+3a+4a=10a,\\
P_{2}&=&1\times 2a^{2}+1\times 3a^{2}+1\times 4a^{2}+2\times 4a^{2}+3\times 4a^{2}=35a^{2},\\
P_{3}&=&2\times 3\times 4a^{3}+1\times 3\times 4a^{2}+1\times 2\times 4a^{3}+1\times 2\times 3a^{3}=50a^{3},\\
P_{4}&=&1\times 2\times 3\times 4a^{4}=24a^{4}.
\shortintertext{so that}
(x-a)(x-2a)(x-3a)(x-4a)=x^{4}-10ax^{3}+35a^{2}x^{2}-50a^{3}x+24a^{4}.
\end{IEEEeqnarray*}
\end{document}
The presentation of your alignment is not really that accurate. So, I assume the following will be sufficient:
\documentclass{article}
\usepackage{array}
\begin{document}
$(x - a)(x - 2a)(x - 3a)(x - 4a) = x^4 - P_1 x^3 + P_2 x^2 - P_3 x + P_4.$
Here
$\renewcommand{\arraystretch}{1.3} \begin{array}{r@{}>{{}}l@{}r@{}>{{}}l} P_1 &= a+2a+3a+4a &&= 10a, \\ P_2 & \multicolumn{3}{@{}l}{{}= 1 \times 2a^2 + 1 \times 3a^2 + 1 \times 4a^2 + 2 \times 4a^2 + 3 \times 4a^2} \\ &&&= 35a^2,\\ P_3 & \multicolumn{3}{@{}l}{{}= 2 \times 3 \times 4a^3 + 1 \times 3 \times 4a^2 + 1 \times 2\times 4a^3 + 1 \times 2 \times 3a^3} \\ &&&= 50a^3,\\ P_4 &= 1 \times 2 \times 3 \times 4a^4 &&= 24a^4, \\ \end{array}$
so that
$(x - a)(x - 2a)(x - 3a)(x - 4a) = x^4 - 10ax^3 + 35a^2 x^{2} - 50a^3 x + 24a^4.$
\end{document}
Multiple alignments that are not strictly adhered to (some lines use the alignment points and some don't) are difficult to achieve with standard align and friends. Using an array may circumvent this difficulty with the aid of \multicolumn.
The use of the array package above is not really needed, but I've used it anyway.
• This is the exact alignment I wanted to have. I apologise for the inconvenience I caused by not making my set-up accurate. Thank you very much for dedicating your precious time to looking into this matter. Dec 9, 2014 at 20:17
Why not this simple way?
\documentclass{article}
\usepackage{mathtools}
\begin{document}
\noindent We have
\begin{align*}
\MoveEqLeft (x - a)(x - 2a)(x - 3a)(x - 4a)
= x^{4} - P_{1}x^{3} + P_{2}x^{2} - P_{3}x + P_{4}\\
\intertext{where}
P_{1} &= a + 2a + 3a + 4a = 10a,\\
P_{2} &= 1 \cdot 2a^{2} + 1 \cdot 3a^{2} + 1 \cdot 4a^{2} + 2 \cdot 4a^{2} + 3 \cdot 4a^{2} = 35a^{2},\\
P_{3} &= 2 \cdot 3 \cdot 4a^{3} + 1 \cdot 3 \cdot 4a^{2} + 1 \cdot 2 \cdot 4a^{3} + 1 \cdot 2 \cdot 3a^{3}= 50a^{3},\\
P_{4} &= 1 \cdot 2 \cdot 3 \cdot 4a^{4} = 24a^{4},
\intertext{so that}
\MoveEqLeft (x - a)(x - 2a)(x - 3a)(x - 4a)
= x^{4} - 10ax^{3} + 35a^{2}x^{2} - 50a^{3}x + 24a^{4}.
\end{align*}
\end{document}
• I honestly did not know the \MoveEqLeft command, and I have seldom used the align environment. Your set-up allowed me to understand how to use the align environment coherently. Thank you very much, Bernard! Dec 10, 2014 at 3:41
Here is how I would do it:
\documentclass{article}
\usepackage{mathtools}
\begin{document}
\noindent We have
\begin{equation*}
(x - a)(x - 2a)(x - 3a)(x - 4a)
= x^{4} - P_{1}x^{3} + P_{2}x^{2} - P_{3}x + P_{4}
\end{equation*}
where
\begin{align*}
P_{1} &= a + 2a + 3a + 4a\\
&= 10a,\\
P_{2} &= 1 \cdot 2a^{2} + 1 \cdot 3a^{2} + 1 \cdot 4a^{2} + 2 \cdot 4a^{2} + 3 \cdot 4a^{2}\\
&= 35a^{2},\\
P_{3} &= 2 \cdot 3 \cdot 4a^{3} + 1 \cdot 3 \cdot 4a^{2} + 1 \cdot 2 \cdot 4a^{3} + 1 \cdot 2 \cdot 3a^{3}\\
&= 50a^{3},\\
P_{4} &= 1 \cdot 2 \cdot 3 \cdot 4a^{4}\\
&= 24a^{4},
\end{align*}
so that
\begin{equation*}
(x - a)(x - 2a)(x - 3a)(x - 4a)
= x^{4} - 10ax^{3} + 35a^{2}x^{2} - 50a^{3}x + 24a^{4}.
\end{equation*}
\end{document}
• Thank you very much for all the corrections you made. I could use the aforementioned set-up for my work, for it is more organised than the set-up I had in mind. Dec 9, 2014 at 20:11
http://mathhelpforum.com/algebra/40241-homework-due-tomoz-had-total-blankk-most-simple-thing.html | # Math Help - homework due tomorrow!!! and had a total BLANK on the most simple thing!!
1. ## homework due tomorrow!!! and had a total BLANK on the most simple thing!!
Fractions; I suck at them and I just had a total blank on the most simple fraction sum.
1 - 1/9 = ?? Please help.
1/5 of ?? = 6
And I have a kind of word problem:
if 5 * 3 = 4
2 * 8 = 2
6 * 3 = 3
find the value of
1 * 7 ??
I'm so stupid, I know you think I am lol
please get back to me ASAP!!
thanks xoxo
2. Originally Posted by emm
Fractions; I suck at them and I just had a total blank on the most simple fraction sum.
1 - 1/9 = ?? Please help.
1/5 of ?? = 6
$1 - \frac{1}{9} = \frac{9}{9} - \frac{1}{9}$ = ?
$\frac{1}{5} \cdot x = 6$
$5 \cdot \frac{1}{5} \cdot x = 5 \cdot 6$
-Dan
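(Carrying those hints one step further, spelled out here for completeness: $1 - \frac{1}{9} = \frac{9}{9} - \frac{1}{9} = \frac{8}{9}$, and $x = 5 \cdot 6 = 30$.)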
3. Originally Posted by emm
I'm so stupid, I know you think I am lol
Stop demeaning yourself. Everyone needs help every now and again.
-Dan
https://www.physicsforums.com/threads/help-with-solving-a-cauchy-euler-differential-equation.387581/ | # Homework Help: Help with Solving a Cauchy-Euler Differential Equation
1. Mar 17, 2010
### Jim4592
1. The problem statement, all variables and given/known data
$x^2 y'' + x y' + 4 y = 0$
2. Relevant equations
$y = x^r$
$y' = r x^{r-1}$
$y'' = (r^2 - r)x^{r-2}$
3. The attempt at a solution
$x^2\{(r^2-r)x^{r-2}\} + x\{r x^{r-1}\} + 4x^r = 0$
$r^2 - r + r + 4 = 0$
$r^2 + 4 = 0$
$r = \pm 2i$
$y = C_1 x^{2i} + C_2 x^{-2i}$
My question is: how can I remove the imaginary numbers in favor of cos[ ] and sin[ ]?
If you could be as descriptive as possible I'd really appreciate it!
2. Mar 18, 2010
### vela
Staff Emeritus
Use the definition of exponentiation: $x^a = e^{a \log x}$.
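(Following that hint through, with the remaining algebra added here: $x^{\pm 2i} = e^{\pm 2i \ln x} = \cos(2\ln x) \pm i\sin(2\ln x)$, so real linear combinations of $x^{2i}$ and $x^{-2i}$ give the real general solution $y(x) = C_1 \cos(2\ln x) + C_2 \sin(2\ln x)$, valid for $x > 0$.)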
3. Mar 18, 2010
### HallsofIvy
In fact, the substitution $t = \ln(x)$ will change any "Cauchy-Euler" equation into an equation with constant coefficients having the same characteristic equation. An equation with constant coefficients, with $\pm 2i$ as characteristic roots, has general solution $C\cos(2t) + D\sin(2t)$, i.e. $C\cos(2\ln x) + D\sin(2\ln x)$.
https://www.cfd-online.com/Forums/fluent/88612-2nd-order-upwind-scheme-fluent-cfx-print.html | CFD Online Discussion Forums ⇒ FLUENT ⇒ 2nd order upwind scheme (Fluent and CFX)
Far May 22, 2011 01:50
2nd order upwind scheme (Fluent and CFX)
It is stated in the CFX theory (above link) that when one selects the high resolution scheme, the face value is computed as $\phi_{ip} = \phi_{up} + \beta\,\nabla\phi\cdot\Delta\vec{r}$ (formula reconstructed here; it appeared as an image in the original post), where $\phi_{up}$ is the value at the upwind node.
On the other hand, when the user selects a specified blend factor $\beta$ (between 0 and 1), $\nabla\phi$ is taken as the average of the adjacent nodal gradients. I want to know: is this scheme then an upwind or a central differencing scheme?
http://my.fit.edu/itresources/manual...ug/node992.htm
Whereas in the Fluent user guide (above link) the 2nd order upwind scheme is given by the following formula (reconstructed here; shown as an image in the original): $\phi_f = \phi + \nabla\phi\cdot\vec{r}$,
where $\nabla\phi$ is the gradient of $\phi$ in the upwind cell and $\vec{r}$ is the displacement vector from the upwind cell centroid to the face centroid.
Both the high resolution (CFX) and 2nd order upwind (Fluent) schemes are based on the principles of Barth and Jespersen [1], so that no new extrema are introduced in the solution; therefore monotonic behavior is preserved.
1. Does it mean that the high resolution scheme of CFX and the 2nd order upwind scheme of Fluent are equivalent?
2. Does it mean that the CFX 2nd order scheme is more like a biased 2nd order scheme, with one upwind term and a 2nd (anti-diffusive) term of central differencing type?
3. Will 2nd order upwind (CFX definition) make the solution worse than even the 1st order upwind scheme?
References:
[1] Barth, T. J. and Jespersen, D. C., "The design and application of upwind schemes on unstructured meshes". Technical Report AIAA-89-0366, AIAA 27th Aerospace Sciences Meeting, Reno, Nevada, 1989.
http://math.stackexchange.com/questions/869652/a-field-having-an-automorphism-of-order-2 | # A field having an automorphism of order 2
The following fact is used in the Unitary space.
Let $F$ be a field having an automorphism $\alpha$ of order 2, and let $F_0=\{a\in F: \alpha(a)=a\}$. Then $|F:F_0|=2$.
## Is there any easy proof (or reference) for this fact?
Suppose $\operatorname{Char} F \ne 2$, and write $\bar{a}$ for $\alpha(a)$. Note $x=\frac12(x+\bar{x})+\frac12(x-\bar{x})\ \ \ (*)$ and $\overline{x-\bar{x}}=-(x-\bar{x})$ and $x+\bar{x} \in F_0$. So there exists $a$ such that $\bar{a}=-a$ and $a \not\in F_0$. By (*), we just need to prove that every element $b$ with $\bar{b}=-b$ lies in the space $K$ generated by $1$ and $a$ over $F_0$ (clearly, $|K:F_0|=2$).
Since $\bar{a}=-a$, we get $k=a^2\in F_0$, and $a^{-1}=k^{-1}a \in K$. Now $\overline{ab}=\bar{a}\bar{b}=(-a)(-b)=ab$, we get $l=ab\in F_0$, and $b=la^{-1} \in K$. Hence $K=F$ and then $|F:F_0|=2$.
But I cannot prove the case $\operatorname{Char} F=2$. Any ideas about this case?
This is a special case of Artin's theorem from Galois theory. The field $F_0$ is strictly smaller than $F$ (not all elements are fixed by $\alpha$). If $a, b \in F \setminus F_0$ then
$$a - \frac{a-\alpha(a)}{b-\alpha(b)} b + \frac{a \, \alpha(b)-b\,\alpha(a)}{b-\alpha(b)} =0$$
which shows that $1, a, b$ are linearly dependent over $F_0$. Therefore $|F:F_0|=2$.
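Two details worth recording (filled in here): the identity can be checked by multiplying through by $b-\alpha(b)$ and expanding, and both coefficients lie in $F_0$ because they are fixed by $\alpha$:
$$\alpha\!\left(\frac{a-\alpha(a)}{b-\alpha(b)}\right)=\frac{\alpha(a)-a}{\alpha(b)-b}=\frac{a-\alpha(a)}{b-\alpha(b)},\qquad \alpha\!\left(\frac{a\,\alpha(b)-b\,\alpha(a)}{b-\alpha(b)}\right)=\frac{\alpha(a)\,b-\alpha(b)\,a}{\alpha(b)-b}=\frac{a\,\alpha(b)-b\,\alpha(a)}{b-\alpha(b)}.$$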
Sweet! Artin's book is a gem, isn't it! +1! – Robert Lewis Jul 17 '14 at 6:58
Thanks. A beautiful proof. I have thought about this for a long time. – Wei Zhou Jul 17 '14 at 8:03
Here's a different demonstration which, like Wei Zhou's argument, works in the case $\text{char} F \ne 2$, although I admit part of it was inspired by WimC's most excellent answer. But, based upon that inspiration (and exactly what that is will shortly become manifest), we have a self-contained proof as follows:
First of all, we note that, unless $\alpha$ is the trivial automorphism, i.e., unless $\alpha$ fixes all of $F$, there must exist $\aleph \in F$ with $\alpha(\aleph) \ne \aleph$; by definition, such an $\aleph \notin F_0$, and $\alpha(\aleph) \notin F_0$ either, lest we have $\aleph = \alpha(\alpha(\aleph)) = \alpha(\aleph) \in F_0$. Choosing such an $\aleph$, consider the field elements $\aleph + \alpha(\aleph), \aleph \alpha(\aleph) \in F$; we see they are both fixed by $\alpha$, for
$\alpha(\aleph + \alpha(\aleph)) = \alpha(\aleph) + \alpha^2(\aleph) = \aleph + \alpha(\aleph) \tag{1}$
since $\alpha^2 = 1$, and likewise
$\alpha(\aleph \alpha(\aleph)) = \alpha(\aleph) \alpha^2(\aleph) = \aleph \alpha(\aleph), \tag{2}$
again since $\alpha^2 = 1$. Thus we see that both $\aleph + \alpha(\aleph), \aleph \alpha(\aleph) \in F_0$, since $F_0$ is the fixed field of $\alpha$. Next consider the polynomial
$p_\aleph(x) = x^2 - (\aleph +\alpha(\aleph))x + \aleph \alpha(\aleph) \in F_0[x]; \tag{3}$
we have $p_\aleph(x) \in F_0[x]$ since its coefficients, as has been seen, are all fixed by $\alpha$. The roots of $p_\aleph(x)$ are easily seen to be $\aleph$ and $\alpha(\aleph)$; indeed we have
$p_\aleph(\aleph) = \aleph^2 - (\aleph + \alpha(\aleph)) \aleph + \aleph \alpha(\aleph) = \aleph^2 -\aleph^2 - \alpha(\aleph) \aleph + \aleph \alpha(\aleph) = 0, \tag{4}$
with a similar calculation showing that
$p_\aleph(\alpha(\aleph)) = 0 \tag{5}$
as well; alternatively, it may be observed that $p_\aleph(x)$ splits in $F$ as
$p_\aleph(x) = x^2 - (\aleph +\alpha(\aleph))x + \aleph \alpha(\aleph) = (x - \aleph)(x - \alpha(\aleph)), \tag{6}$
which also shows the roots are $\aleph$, $\alpha(\aleph)$. Based on these considerations, we may conclude that (i) $p_\aleph(x)$ is irreducible over $F_0$, since $\aleph, \alpha(\aleph) \notin F_0$; (ii) $F_0(\aleph) \subset F$ is the splitting field of $p_\aleph(x)$ over $F_0$, since $\alpha(\aleph) = \aleph^{-1}(\aleph \alpha(\aleph)) \in F_0(\aleph)$ by virtue of $\aleph \in F_0(\aleph)$, $\aleph \alpha(\aleph) \in F_0 \subset F_0(\aleph)$; (iii) $[F_0(\aleph):F_0] = 2$, since $\deg p_\aleph(x) = 2$.
Having $[F_0(\aleph):F_0] = 2$, we conclude by showing that $F = F_0(\aleph)$. Clearly $F_0(\aleph) \subset F$, so let $\beth \in F$ and consider the product $\gimel = (\beth - \alpha(\beth))(\aleph - \alpha(\aleph))$; we have
$\alpha((\beth - \alpha(\beth))(\aleph - \alpha(\aleph))) = (\alpha(\beth) - \beth)(\alpha(\aleph) - \aleph) = (\beth - \alpha(\beth))(\aleph - \alpha(\aleph)), \tag{7}$
that is, $\gimel = (\beth - \alpha(\beth))(\aleph - \alpha(\aleph))$ is fixed by $\alpha$, hence
$(\beth - \alpha(\beth))(\aleph - \alpha(\aleph)) = \gimel \in F_0. \tag{8}$
We now have
$\beth - \alpha(\beth) = (\aleph - \alpha(\aleph))^{-1} \gimel \in F_0(\aleph), \tag{9}$
and since
$\beth + \alpha(\beth) \in F_0, \tag{10}$
being fixed by $\alpha$ just as is $\aleph + \alpha(\aleph)$, we conclude that (and this is where we need the assumption $\text{char}F \ne 2$):
$2\beth = (\beth - \alpha(\beth)) + (\beth + \alpha(\beth)) \in F_0(\aleph), \tag{11}$
whence
$\beth \in F_0(\aleph) \tag{12}$
and hence $F \subset F_0(\aleph)$; thus in fact $F = F_0(\aleph)$ and finally
$[F:F_0] = [F_0(\aleph): F_0] = 2, \tag{13}$
the desired conclusion. QED.
Note: The inspiration I took from WimC's answer was to examine the quantity $\gimel = (\beth - \alpha(\beth))(\aleph - \alpha(\aleph))$; this originated in a careful scrutiny of the coefficient of $b$ in his equation
$a - \dfrac{a-\alpha(a)}{b-\alpha(b)} b + \dfrac{a \, \alpha(b)-b\,\alpha(a)}{b-\alpha(b)} =0, \tag{14}$
which is also invariant under $\alpha$. I too would like to see if and how the assumption $\text{char} F \ne 2$ could be circumvented in the context of the above argument. End of Note.
Hope this helps. Cheers,
and as always,
Fiat Lux!!!
Interesting answer. – Wei Zhou Jul 18 '14 at 0:49
https://www.physicsforums.com/threads/hp-50g-expand-and-simplify.343199/ | # Calculators HP 50g expand and simplify
1. Oct 5, 2009
### graycolor
I remember setting my calculator so it would automatically use the expand function or simplify fractions for me in RPN mode. Since I did a reset I don't know how to turn that function on again.
For example, if I were to press 1 ENTER 3 ÷, I would get 1/3; then if I were to multiply by 2, the calculator would give 1/3*2.
I want the calculator to automatically simplify the fraction to 2/3. This is done automatically in algebraic mode; how can I do this with RPN?
HP calculators are very confusing.
2. May 18, 2017
### LouisBRZ
Solving it is very easy. Press MODE, then press CAS and uncheck "Approx". I know it's been 8 years, but maybe someone could be interested.
3. Jan 20, 2018
I do. Thanks.
https://www.physicsforums.com/threads/need-help-with-entropy-question.49552/ | # Need help with entropy question
• #1
Can anyone help me with this, please?
1. 1 mol of an ideal gas is compressed slowly and isothermally at 400 K in a piston-cylinder arrangement. Initial pressure = 100 kPa, final pressure 1000 kPa. The system is surrounded by a reservoir at 300 K such that heat exchange can take place between the piston-cylinder arrangement and the reservoir. The system is isolated, so there is no heat exchange with the outside world.
Calculate the entropy change of the gas, reservoir and universe if:
i. the piston is frictionless.
I'm stuck trying to do the entropy change for the gas. If I manage to do it, I should be able to do the rest.
I know delta S = INT dQ/T (haven't used tex before)
Also, from the 1st law: delta U = Qin + Won
For an isothermal change, delta U = 0 (as U depends on T only)
=> Qin = -Won
=> Qin = INT P dV
I'm not sure where to go from there, because I can't put dQ = P dV in the integral above, can I (then substitute P = nRT/V, obviously)?
• #2
Sure. It says in the exercise that the gas is compressed slowly and isothermally, so there is always equilibrium. nRT/V can be put into the integral, making Q = nRT * INT (dV/V) from V1 to V2 (which makes Q = nRT * ln(V2/V1)).
Using relative volumes is enough here. Good luck!
• #3
Ok, thanks :).
• #4
I'm stuck again. How do I calculate the entropy change for the reservoir now?
1st law: delta U = Qin + Won
Won = 0, right? So delta U = Qin and I'm stuck :/.
• #5
Alright, you already showed us that delta U = 0, since it's an ideal gas. So Qin = -Won. In this case the work on the gas is positive; the piston has to do work on the gas to compress it, so heat has to be removed from the gas, which follows from the formula.
Won = - INT P dV ---> so Qin = INT P dV = nRT * INT dV/V
So not Won = 0 but delta U = 0. This is because it's an isothermal process, which already suggests no change of internal energy (in the case of an ideal gas that is).
Now solve the integral and put relative values for V2 and V1 into it. There you go!
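For the record, the numbers this leads to (worked out here under the stated data: 1 mol, isothermal at 400 K, pressure from 100 kPa to 1000 kPa, so V2/V1 = P1/P2 = 1/10):
delta S(gas) = nR ln(V2/V1) = (8.314 J/K) * ln(1/10) ≈ -19.1 J/K
Qout = -Qin = nRT ln(10) ≈ 7660 J, so delta S(reservoir) = 7660/300 ≈ +25.5 J/K
delta S(universe) ≈ 25.5 - 19.1 = +6.4 J/K > 0
Positive, as it should be: heat crosses the finite temperature gap from the 400 K gas to the 300 K reservoir irreversibly.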
https://tug.org/pipermail/texworks/2009q1/000603.html | # [texworks] commands
Hans Hagen pragma at wxs.nl
Thu Mar 19 13:21:49 CET 2009
Joseph Wright wrote:
> Hans Hagen wrote:
>> Hi
>>
>> A few questions ...
>>
>> - I want to add a 'typesetting command' that does not need a file at all
>> (it just runs). In that case I need an option not to ask for an input
>> file or save or whatever. Maybe check if the command has an argument
>> that specifies a file will do. Or otherwise a checkbox like the one that
>> can be used not to open a pdf afterwards.
>
> I'm intrigued as to what this is!
things like updating, help info and more
>> - I think that the 'cleanup aux' files in the menu is too specific for a
>> macro package, so it should either be configurable or be made
>> typesetting command dependent or whatever. I can even imagine that there
>> is an extra menu with options that change depending on the typesetting
>> command being selected.
>
> You can customise it in ~/TeXworks/configuration/texworks-config.txt, at
> least in the sense of altering what is on the list.
i think that there should be a config per typesetting command, as most
of what is in this file does not make sense for context (we have no
\include and \includegraphics but different commands and their cleanup
suffixes are also different and could even lead to unwanted cleanup)
>> - The open file menu can have a context entry as well. Valid suffixes
>> are: .tex .mkii .mkiv .xml .ctx (and maybe a few more in the future)
>
> I guess that the assumption was that ConTeXt only uses .tex (this would
> have been my guess, for example).
actually there are a few more but the average user will not use them
i'm playing with coloring xml and wondering if it makes sense to add an
option to associate file suffixes with specific syntax patterns
another thing that comes to mind (when testing) is a quick way to open
the most recent file(s) [esp handy when testing changes in the
configuration, the multistep menu choice involved too many steps or
maybe there's already a magic key combination that does that]
Hans
http://www.latex-community.org/forum/viewtopic.php?f=5&t=6906&start=0 | ## LaTeX forum ⇒ General ⇒ Tcilatex
LaTeX specific issues not fitting into one of the other forums of this category.
bazman
Posts: 78
Joined: Mon Jan 26, 2009 3:24 am
### Tcilatex
\documentclass{article}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{amsmath}
%TCIDATA{OutputFilter=LATEX.DLL}
%TCIDATA{Created=Friday, September 22, 2006 08:26:57}
%TCIDATA{LastRevised=Saturday, September 30, 2006 18:37:59}
%TCIDATA{<META NAME="GraphicsSave" CONTENT="32">}
%TCIDATA{<META NAME="DocumentShell" CONTENT="Articles\SW\Standard LaTeX Article">}
%TCIDATA{CSTFile=LaTeX article (bright).cst}

\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\newenvironment{proof}[1][Proof]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\input{tcilatex}

\begin{document}
when I try to run the above document I get the error:
I was using someone else's file here, so I have no idea what tcilatex does.
Kind Regards
Baz
localghost
Site Moderator
Posts: 9206
Joined: Fri Feb 02, 2007 12:06 pm
Location: Braunschweig, Germany
bazman wrote:[...] I was using someone else's file here, so I have no idea what tcilatex does. [...]
Ask the person you've got the file from.
Best regards
Thorsten
LaTeX Community Moderator
¹ System: openSUSE 13.1 (Linux 3.11.10), TeX Live 2013 (vanilla), TeXworks 0.5 (r1351)
² Posting stopped indefinitely due to offenses
lalop
Posts: 63
Joined: Fri Sep 11, 2009 11:25 pm
There seem to be some copies of tcilatex.tex on Google. No promises they are the one you want, however.
Apparently, it's a macro collection.
sitex
Posts: 70
Joined: Sat May 09, 2009 12:37 pm
Hello,
I think the person who created the file is using Scientific WorkPlace or Scientific Word. Depending on what the file contains, you may be able to compile it by simply removing the \input{tcilatex} command. If this does not work, ask the creator to save the file as a Portable LaTeX file and send you a copy.
If all this fails and the file is not too big, you can post it and I can try to fix it for you.
Tom
daleif
Posts: 199
Joined: Wed Nov 19, 2008 12:46 am
The best thing to do is to ask the person using Scientific WorkPlace to save the document as Portable LaTeX; it is in the dropdown under the file name when one uses the 'Save as' feature.
This will remove the tcilatex from the document.
tcilatex is only found in Scientific WorkPlace, and the firm for some reason does not share their stuff with the rest of us.
https://www.physicsforums.com/threads/where-does-the-energy-come-from.130816/ | # Where does the energy come from?
1. Sep 5, 2006
### goodboy
Put a piece of iron beside a magnet; then we know the iron will be attracted toward the magnet. But where does the kinetic energy of the iron come from?
2. Sep 5, 2006
### Tomsk
The iron is in a magnetic field, hence it has potential energy due to its position. This becomes KE when the iron is released. The total energy, KE+PE, is conserved.
3. Sep 5, 2006
4. Sep 5, 2006
### Meir Achuz
You are now wrong. You were right the first time. For a permanent magnet and ferromagnetic iron, the energy considerations do work just as they would in electrostatics with a charge attracting a polarizable object.
http://mathhelpforum.com/calculus/10857-large-derivative.html | # Math Help - large derivative
1. ## large derivative
Can someone give the derivative of the equation below?
Joram
[Attached thumbnail: the equation, posted as an image]
2. The trouble I am having with this is that your derivative isn't picking out a specific value of "i."
-Dan
3. ## Thanks...
Yes, the derivative when i=k is easily obtained. However, I need the complete system of solutions. I forgot to mention, though, that the derivative needs to be zero. To be precise, the vector of derivatives needs to equal the zero vector; see below.
Joram
[Attached thumbnail: the zero-vector condition, posted as an image]
4. Originally Posted by Patek
Yes, the derivative when i=k is easily obtained. However, I need the complete system of solutions. I forgot to mention, though, that the derivative needs to be zero. To be precise, the vector of derivatives needs to equal the zero vector; see below.
Joram
If I'm understanding you correctly then what you need to do is take the derivative for i = k, where k is at the moment undefined. Then your vector components will be in terms of the index k as k runs from 1 to m.
-Dan
5. ## Thanks...
Thanks, your hint helped me solve it.
https://soffer801.wordpress.com/blog/ | ## Maximum modulus principle
Here’s a fact you probably never noticed: Holomoprhic functions have no local maxima. Okay, constant functions do, but those are lame.
Theorem 1 (Maximum modulus principle) Let ${f\in{\mathcal H}(U)}$ with ${U}$ a connected open set. If ${z_0\in U}$ has ${|f(z_0)|\ge|f(z)|}$ for every ${z\in U}$, then ${f}$ is constant.
Proof: If ${|f|}$ has a maximum at ${z_0}$, look at the image of a small ball around ${z_0}$ under ${f}$. Every value satisfies ${|f(z)|\le |f(z_0)|}$, so this image is completely contained in the closed disk ${\overline{B_{|f(z_0)|}(0)}}$, and it touches the boundary circle at the point ${f(z_0)}$. Any neighborhood of ${f(z_0)}$ contains points of modulus larger than ${|f(z_0)|}$, which are not in the image, so the image cannot be an open set. But by the open mapping theorem, this is only possible if ${f}$ were in fact constant. $\Box$
## Open mapping theorem
Today we’ll prove the open mapping theorem:
Theorem 1 (Open mapping Theorem) Let ${f\in{\mathcal H}(U)}$, for some open connected set ${U}$ in ${{\mathbb C}}$. Then ${f(U)}$, the image of ${f}$, is either a single point (when ${f}$ is constant) or an open subset of ${{\mathbb C}}$.
Let ${z_0\in f(U)}$, and let ${w_0}$ be a preimage. That is, ${f(w_0)=z_0}$. Since ${U}$ is open, there must be a small closed ball around ${w_0}$ completely contained in ${U}$. Let ${r}$ be its radius. Then ${\overline{B_r(w_0)}\subseteq U}$. Let ${g(z)=f(z)-z_0}$. What do we know about ${g}$?
We know that the roots of ${g}$ are isolated points (we may assume ${f}$, and hence ${g}$, is not constant). After all, if not, we would have a convergent sequence of roots, contradicting one of the theorems we proved here. So we can choose ${r}$ even smaller, so that the only root of ${g}$ in ${\overline{B_r(w_0)}}$ is ${w_0}$. Since the boundary of ${\overline{B_r(w_0)}}$ is a circle, and hence compact, and ${|g(z)|}$ is continuous, it attains its minimum when restricted to the boundary. Let ${m}$ be the minimum value of ${|g(z)|}$ for ${z}$ on the boundary of ${\overline{B_r(w_0)}}$; note that ${m>0}$ because ${g}$ has no roots there.
Now, by Rouché’s theorem, ${g}$ will have the same number of roots in in ${B_r(w_0)}$ as ${f(z)-z_1}$ for any ${z_1}$ in ${B_m(z_0)}$. (Application of Rouché’s theorem is left as an exercise.)
In particular, this means that for every complex number in ${z_1\in B_m(z_0)}$, the function ${f(z)-z_1}$ has at least one root in ${B_r(w_0)}$, and so ${B_m(z_0)}$ is contained in ${f(U)}$, meaning our arbitrarily chosen point ${z_0}$ is in the interior of ${f(U)}$. Thus ${f(U)}$ is open.
## Rouche’s Theorem
Today we prove Rouché’s theorem. The gist is that it helps us count the number of roots of a holomorphic function, given some bounds on its values.
Theorem 1 Suppose ${f}$ and ${g}$ are holomorphic functions inside and on the boundary of some closed contour ${\gamma}$. If
$\displaystyle |g(z)|<|f(z)|$
on ${\gamma}$, then ${f}$ and ${f+g}$ have the same number of zeros on the interior of ${\gamma}$.
Before we begin proving this, it should be emphasized that we count with multiplicity. We would count the number 1 as a root of ${x^2-2x+1}$ twice. Most root counting we ever do will be done this way. I feel confident in saying that it is the correct way to count roots, even if at first it is unintuitive.
Proof: By hypothesis, ${f}$ has no roots on the boundary ${\gamma}$. Define ${F(z)=\frac{f(z)+g(z)}{f(z)}}$. The roots of ${F}$ are the roots of ${f+g}$. The poles of ${F}$ are the roots of ${f}$. So it suffices to use the argument principle to show that ${N(F)=P(F)}$. That is, we need to show that
$\displaystyle \displaystyle\frac1{2\pi i}\int_\gamma\frac{F'}{F}=N(F)-P(F)$
is zero.
But from our hypotheses, we can conclude that
$\displaystyle |F(z)-1|=\left|\frac{f(z)+g(z)}{f(z)}-1\right|=\left|\frac{g(z)}{f(z)}\right|<1.$
That is, ${F}$ never takes on values more than ${1}$ away from ${1}$. Imagine a dog tethered by a leash of length less than ${1}$ to the point ${1\in{\mathbb C}}$. That dog can't reach the origin, but more importantly, he can't walk a loop around the origin. So computing the winding number about the origin must give us zero. This proves the theorem. $\Box$
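A quick standard application (example added here, not from the original post): to count the roots of ${z^5+3z+1}$ in the unit disk, take ${f(z)=3z}$ and ${g(z)=z^5+1}$. On ${|z|=1}$ we have ${|g(z)|\le |z|^5+1=2<3=|f(z)|}$, so ${f+g=z^5+3z+1}$ has exactly as many roots inside as ${f}$ does, namely one.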
## Argument Principle
Let ${z_0}$ be a zero of a meromorphic ${f}$ with multiplicity ${m}$. Then we can write ${f(z)=(z-z_0)^m\cdot g(z)}$ where ${g(z_0)\ne0}$. Taking derivatives yields
$\displaystyle f'(z)=m(z-z_0)^{m-1}\cdot g(z)+(z-z_0)^m\cdot g'(z).$
Hence, for ${f'/f}$, we get
$\displaystyle \displaystyle\frac{f'(z)}{f(z)}=\frac{m}{z-z_0}+\frac{g'(z)}{g(z)}.$
The residue of this sum is simply the sum of the residues of the two parts. The first term has residue ${m}$ at ${z_0}$. The second part has no pole at ${z_0}$, and hence zero residue. Thus, the residue of ${f'/f}$ at ${z_0}$ is ${m}$.
What if we pick a pole ${z_p}$ of ${f}$? Then by a similar construction, if ${z_p}$ is an order ${q}$ pole, we can write ${f(z)=(z-z_p)^{-q}\cdot h(z)}$ and compute
$\displaystyle \displaystyle\frac{f'(z)}{f(z)}=\frac{-q}{z-z_p}+\frac{h'(z)}{h(z)},$
yielding a total residue of ${-q}$.
If a point ${z}$ is neither a pole nor a zero, then ${f'/f}$ is holomorphic at ${z}$, and has residue zero at ${z}$.
If we take a big contour around all of these points, then ${\frac1{2\pi i}}$ times the integral of ${f'/f}$ will be the sum of the residues inside the contour, which we have just shown is the number of roots minus the number of poles (counted with multiplicity and order, respectively).
That is, if ${N_\gamma(f)}$ denotes the number of zeros counted with multiplicity inside a contour ${\gamma}$, and ${P_\gamma(f)}$ denotes the number of poles counted with order, then
$\displaystyle \displaystyle\frac{1}{2\pi i}\int_\gamma \frac{f'(z)}{f(z)}dz=N_\gamma(f)-P_\gamma(f)$
This result is known as the argument principle.
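The formula is easy to sanity-check numerically. Here is a minimal Python sketch (an illustration added here; the test function ${f(z)=z+1/z}$ and the contour ${|z|=2}$ are arbitrary choices), which should report ${N-P=2-1=1}$:

```python
import numpy as np

def zeros_minus_poles(f, fprime, R=2.0, n=4000):
    """Approximate (1/(2*pi*i)) * integral of f'/f over the circle |z| = R."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = R * np.exp(1j * theta)        # sample points on the contour
    dz = 1j * z * (2.0 * np.pi / n)   # dz = i z dtheta for each step
    return np.sum(fprime(z) / f(z) * dz) / (2.0j * np.pi)

f = lambda z: z + 1.0 / z            # zeros at +i and -i, simple pole at 0
fprime = lambda z: 1.0 - 1.0 / z**2  # derivative of z + 1/z

result = zeros_minus_poles(f, fprime)
print(result.real)  # ~ 1.0, i.e. N - P = 2 - 1
```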
## A joke
Today, a definition, then a joke.
I defined poles of meromorphic functions, but we can be a bit more descriptive. Suppose we have a meromorphic function $f$ which is undefined at some point $z_0$. We can expand it as a Laurent series, and get something like:
$\displaystyle\sum_{-\infty}^\infty a_n(z-z_0)^n$
It may be that we can make $n$ arbitrarily negative and still have $a_n$ be nonzero. This is basically the worst situation possible. We don't in fact call it a pole. We say that it is an essential singularity at $z_0$.
In a better situation, it might be that $a_{-17}$ is nonzero, but with any smaller (more negative) index, $a_n$ is zero. Then, we would say that $f$ has a pole at $z_0$ of order $17$. Of course there’s nothing special about seventeen.
If a pole has order 1, we say that it is a simple pole.
Now for a joke.
An airplane is on its way out of Warsaw, and the pilot suffers a heart attack and dies. A passenger is asked to navigate the plane to safety. He looks worried, so the stewardess asks “what’s wrong?” He responds “I’m just a simple Pole in a complex plane!”
Laugh, damn it!
## Residue calculus
There’s a neat trick we can use to integrate real integrals using complex analysis. These can be made arbitrarily complicated, but I’ll give you a simple example. Compute the integral:
$\displaystyle \int_{-\infty}^\infty\frac1{(1+x^2)^2}dx$
This is another way to write down
$\displaystyle \displaystyle\lim_{a\rightarrow\infty}\int_{-a}^a\frac1{(1+x^2)^2}dx.$
Let’s change all the ${x}$s to ${z}$s, and pretend we’re integrating along the path ${[-a,a]}$ in the complex plane. Our notion of integrating in ${{\mathbb C}}$ is defined in such a way that this makes sense. Okay, but we normally integrate loops, not paths, so let’s complete a full loop ${\gamma_a}$ is the semi-circle in the upper half-plane with base ${[-a,a]}$ and radius ${a}$. Then we can break the path into two pieces:
$\displaystyle\int_{\gamma_a}\frac1{(1+z^2)^{2}}dz=\int_{-a}^a\frac1{(1+x^2)^{2}}dx+\int_0^\pi\frac{aie^{i\theta}}{(1+(ae^{i\theta})^2)^{2}}d\theta.$
The first piece is the integral we want to compute, and the second is the curved part of the semi-circle, parametrized as ${z=ae^{i\theta}}$ so that ${dz=aie^{i\theta}\,d\theta}$. Together they make a closed loop whose integral we can actually calculate using the residue theorem. Since our function is meromorphic on all of ${{\mathbb C}}$, we need to simply figure out which poles are inside the semicircle ${\gamma_a}$, and find their residues. We can tell that the only poles are at ${i}$ and ${-i}$, of which only ${i}$ is in the semicircle ${\gamma_a}$ (for every ${a>1}$). Its residue we compute as ${-i/4}$.
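(Filling in that residue computation: ${i}$ is a double pole of ${\frac{1}{(1+z^2)^2}=\frac{1}{(z-i)^2(z+i)^2}}$, so
$\displaystyle \mathrm{Res}_{z=i}=\lim_{z\rightarrow i}\frac{d}{dz}\,\frac{1}{(z+i)^2}=\frac{-2}{(2i)^3}=\frac{-2}{-8i}=\frac{1}{4i}=-\frac{i}{4}.$)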
Thus, the left hand side, via the residue theorem, must be ${2\pi i\cdot (-i/4)=\pi/2}$.
Now we just need to show that the semi-circular arc's contribution is negligible for large ${a}$. Then, taking ${a\rightarrow\infty}$ yields
$\displaystyle \displaystyle\lim_{a\rightarrow\infty}\int_{-a}^a\frac1{(1+x^2)^2}dx=\pi/2.$
Indeed,
$\displaystyle \begin{array}{rcl} \left|\displaystyle\int_0^\pi\frac{aie^{i\theta}}{(1+(ae^{i\theta})^2)^{2}}d\theta\right| &\le& \displaystyle\int_0^\pi\frac{a}{\left|1+a^2e^{2i\theta}\right|^{2}}d\theta\\ &\le& \displaystyle\int_0^\pi\frac{a}{(a^2-1)^{2}}d\theta \qquad (\text{using } |1+a^2e^{2i\theta}|\ge a^2-1 \text{ for } a>1)\\ &=& \dfrac{\pi a}{(a^2-1)^2} \end{array}$
which decreases to zero as ${a\rightarrow\infty}$.
I’m awful at integration in general, so I’m not sure if this integral can be done using integration by parts, or a tricky ${u}$-substitution. There are definitely some integrals though, for which those standard methods just don’t cut it.
## Meromorphic functions and residues
Last time, we discussed Laurent series, which are essentially two-way power series. They are almost as nice as holomorphic functions, but not quite. Maybe we can recoup some of the lost beauty of holomorphicity by imposing a reasonable condition.
We want to allow ourselves to have points where the function isn’t defined, but let’s limit these points. Let’s require them to be isolated points. We say that a function ${f}$ is meromorphic on an open set ${U\subseteq {\mathbb C}}$ if ${f}$ is holomorphic on ${U}$, except at some number of isolated points. We write ${f\in{\mathcal M}(U)}$. These isolated points are called poles. We obviously shouldn’t expect ${f}$ to have a power series centered at one of these poles, but it does have a Laurent series.
Let ${f\in{\mathcal M}(U)}$ and let ${z_0\in U}$ be a pole of ${f}$. Then as we saw last time, ${f}$ has a Laurent series centered at ${z_0}$:
$\displaystyle f(z)=\cdots+a_{-2}(z-z_0)^{-2}+a_{-1}(z-z_0)^{-1}+a_0+a_1(z-z_0)+a_2(z-z_0)^2+\cdots$
Moreover, the series has inner radius of convergence 0, so this representation is valid for all ${z}$ close enough to ${z_0}$.
Now take a loop ${\gamma}$ in ${U}$ and integrate ${f}$ around it: if ${f}$ has no poles in the interior of ${\gamma}$, then the integral is zero. This is the Cauchy integral theorem. What if it does have a pole? We can use the generalized Cauchy integral formula we saw last time:
Theorem 1 (Residue theorem) Let ${f\in{\mathcal M}(U)}$ and let ${z_0}$ be a pole of ${f}$ in ${U}$. Expand ${f}$ as a Laurent series as above. Let ${\gamma}$ be a small counter-clockwise circle about ${z_0}$ such that the only pole in its interior is ${z_0}$. Then
$\displaystyle \displaystyle\frac{1}{2\pi i}\int_\gamma f(z)dz=a_{-1}.$
Proof:The generalized Cauchy integral formula we saw last time said that
$\displaystyle a_n=\displaystyle\frac{1}{2\pi i}\int_\gamma\frac{f(z)}{(z-z_0)^{n+1}}dz.$
Let ${n=-1}$.
$\Box$
If we take a loop that goes around several poles, the integral must be the sum of the integrals around the individual poles, as we can build a homotopy deforming the big loop into small circles, one around each pole. (The original post illustrated this with a pair of images, omitted here.)
The value ${a_{-1}}$ (for an expansion about ${z_0}$) is called the residue of ${f}$ at the pole ${z_0}$. In other words, the theorem states that the value of an integral about a contour is the sum of the residues of the poles inside.
http://math.stackexchange.com/questions/684770/homology-isomorphism-of-h-nsd-times-x-and-h-n-1sd-1-times-x | # Homology isomorphism of $H_n(S^d\times X)$ and $H_{n-1}(S^{d-1}\times X)$
$X$ is an arbitrary space, $d\geq 1$.
The existence of such an isomorphism as in the title supposedly follows from the Mayer-Vietoris sequence of $(S^d\times X,S^d_{+}\times X,S^d_{-}\times X)$:
$..\rightarrow H_n(S^d_+\times X)\oplus H_n(S^d_-\times X) \rightarrow H_n(S^d\times X)\rightarrow H_{n-1}(S^{d-1}\times X)\rightarrow H_{n-1}(S^d_+\times X)\oplus H_{n-1}(S^d_-\times X)\rightarrow ..$
This is an exact sequence and the homomorphism in the middle should be an isomorphism. But $S^d_+ \times X$ and $S^d_-\times X$ are homotopy equivalent to $X$, so the corresponding homology groups need not be zero. So how can I show this?
Since $H_n(S^d\times X)\cong H_n(X)\oplus H_{n-d}(X)$, the statement is generally false. – Carsten S Feb 21 '14 at 11:39
Thanks. Maybe I made a mistake copying the statement. – user35359 Feb 21 '14 at 11:45
One way to fix it is to consider the kernel of the map $H_n(S^d\times X)\to H_n(X)$ induced by the projection map instead. – Carsten S Feb 21 '14 at 12:07
The spaces $S^d_+\times X$ and $X$ are homotopy equivalent, thus the map you are considering is an isomorphism in reduced homology if, and only if $X$ is acyclic. – Daniel Robert-Nicoud Feb 22 '14 at 0:36
Let $X = S^1$ and $d = n = 2$, then $H_2(S^2 \times S^1) = \mathbb{Z}$ but $H_1(S^1 \times S^1) = \mathbb{Z} \oplus \mathbb{Z}$ so you might want to prove something else.
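(A cross-check of that counterexample, added here for completeness: since $H_*(S^d)$ is free, the Künneth theorem gives $H_n(S^d\times X)\cong H_n(X)\oplus H_{n-d}(X)$, as in the first comment above; for $X=S^1$, $d=2$, $n=2$ this is $H_2(S^1)\oplus H_0(S^1)\cong 0\oplus\mathbb{Z}=\mathbb{Z}$, while $H_1(S^1\times S^1)\cong\mathbb{Z}\oplus\mathbb{Z}$, so no such isomorphism can exist in general.)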
https://www.physicsforums.com/threads/dirac-gamma-matrices.49915/ | Dirac Gamma Matrices
1. Oct 27, 2004
Kane O'Donnell
Hi everyone,
From the condition:
$$\gamma_{\mu}\gamma_{\nu}+\gamma_{\nu}\gamma_{\mu} = 2g_{\mu\nu}$$
how does one formally proceed to show that the objects $$\gamma_{\mu}$$ must be 4x4 matrices? I unfortunately know very little about Clifford algebras, and for this special relativity project of mine I'd much rather not need brute force!
Cheerio!
Kane
PS: I'm using the signature (+---) for the metric tensor, although this should only change the content of the matrices, not the proof itself, I suspect.
PPS: I quite realise that it probably cannot be shown that the gamma matrices *must* be 4x4 matrices. What I want to know is whether the anticommutator conditions are precisely the defining relations of an R(1,3) Clifford algebra or something like that, and how we eliminate the possibility of lower-dimensional 'isomorphisms' (don't know the correct algebra mapping term) existing.
Last edited: Oct 27, 2004
2. Oct 27, 2004
nrqed
I don't know any sophisticated demonstration with a lot of jargon, but I know the simple, dumb approach.
From $\gamma_0^2 =1$ one sees that the eigenvalues are $\pm 1$ (and $\pm i$ for the other gamma matrices). Also it's easy to show from the anticommutation relations that the matrices must be traceless. From those two conditions, a representation must be even dimensional. Since we need 4 linearly independent matrices, 2 dimensions is not enough (there's only the 3 Pauli matrices available). So the next possibility is 4 dimensions.
Pat
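For concreteness, here is a minimal numerical sketch (added here; it assumes the standard Dirac representation built from the Pauli matrices, though any equivalent representation works) verifying tracelessness and the anticommutation relations for the (+---) signature:

```python
import numpy as np

# Pauli matrices (assumed standard choice)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, s_i], [-s_i, 0]]
gammas = [np.block([[I2, Z2], [Z2, -I2]])]
gammas += [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # metric, signature (+---)

for mu in range(4):
    assert abs(np.trace(gammas[mu])) < 1e-12              # traceless
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

print("traceless and {gamma_mu, gamma_nu} = 2 g_{mu nu} I verified")
```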
3. Oct 28, 2004
pat_connell
I could be wrong here, forgive me if I am. I haven't really gotten around to calculating specifically Dirac's gamma matrices. However, note that the metric tensor g is symmetric under interchange of indices; maybe proceed from there.
Last edited: Oct 28, 2004
4. Oct 28, 2004
Kane O'Donnell
No, it's fine; I am really just trying to understand how to justify the selections step by step, rather than just guessing them.
Thanks,
Kane
5. Oct 28, 2004
Kane O'Donnell
I get all of it now except why the matrices must be of even dimension. I've read on the net about a hundred times that it is because the eigenvalues are so and so and the matrices are traceless, but I can't see it - why can't you have a traceless odd-dimension square matrix with eigenvalues of say plus or minus 1?
Cheers,
Kane
6. Oct 28, 2004
dextercioby
You can find a good proof in the case D=4 in the article written by Wolfgang Pauli:"Contributions mathématiques à la théorie des matrices de Dirac"Ann.Inst.Henri Poincaré 6,109-136(1936).You can find this article in the book :"Wolfgang Pauli Collected Scientific Papers" edited by R.Kronig & V.F.Weisskopf,Interscience Publishers,a division of John Wiley & Sons,Inc.,1964,volume 2,page 753.
He uses $$\displaystyle{x_{4} = x^4 = ict}$$ so be careful with the transcription to coordinates with $$\displaystyle{\eta_{\mu\nu} = \mathrm{diag}(+1,-1,-1,-1)}$$.
Good luck in all!!
7. Oct 28, 2004
Staff Emeritus
Umm, so the eigenvalues are all $$\pm 1$$ and $$\pm i$$. We assume the matrices are of full rank (to check the case dim=n) and that they are diagonal in the eigenvalue basis. So the trace is the sum of the elements on the diagonal, i.e. the eigenvalues. Now if there are an even number of elements, you can arrange the 1's and i's to cancel out, but in an odd number of dimensions, you can't; there will always be an unmatched term, and the trace in an odd dimension cannot therefore be 0.
8. Oct 28, 2004
nrqed
Because (as SelfAdjoint already mentioned) if you go to a basis where the gamma are diagonal, the trace is the sum of the eigenvalues. If the eigenvalues are $\pm 1$ and the sum of the eigenvalues is 0, therefore....
Pat
9. Oct 28, 2004
dextercioby
That's the simplest explanation, but it's got so much mathematics in it. In order to go to a basis in which those $$\gamma$$ matrices are diagonal, you've got to use a theorem which enables passing from one irreducible representation of the Clifford algebra to another. This theorem (proved in many QFT books for the case D=4, e.g. Jauch, Rohrlich, Appendix A2, but also in the article by Wolfgang Pauli (see above))
states that if $$\gamma_{\mu}$$ and $$\gamma_{\mu}'$$ are 2 irreducible representations of the Clifford algebra (and hence satisfy the anticommutation relations), then there is a NONSINGULAR MATRIX "S" such that
$$\gamma_{\mu}' = S\gamma_{\mu}S^{-1}$$, and that this matrix is unique, except for an arbitrary multiplicative factor.
To quote Jauch, Rohrlich, Appendix A2:
"The proof of the main theorem is greatly facilitated by the powerful lemma of Schur (I.Schur,<<Neue Begruendung der Theorie der Gruppencharaktere",Sitzungsber.Preuss.Akad.,1905,p.406) which,for our purpose,may be formulated as follows:Let $$\gamma_{r}$$ and $$\gamma_{r}\prime$$ two irreductible representations of degree n,n' ($$n\leq n\prime$$) and let S be a matrix with n' rows and n columns which connects the two representations by
$$\gamma_{r}\prime S = S\gamma_{r}$$.Then S is either the null matrix (the matrix which consists only of zeros) or it is nonsingular.In the latter case,n=n'"
And then a demonstration of Schur's lemma is given.
It's also the Schur's lemma that enebles us to prove Burnside's theorem:"The matrices of an irreductible n-dimensional representation of any group contain $$n^2$$ LINIARLY INDEPENDENT MATRICES".I quoted from Francis D.Murnaghan's book:" The theory of group representations",The John Hopkins Press,Baltimore,1938,p51 (for a reference to Schur's lemma,v p.47).
To conclude:irreductible matrix representations of the Clifford algebra constructed as the liniar space of complex n*n matrices together with the anticommutation relation cannot have odd number of lines and columns.It follows that n can be only even.For n=2 you find the Pauli matrices+unit matrix.For n=4,you have the Dirac matrices,etc.
The anticommutation relation enables us to find EXACTLY 16 linearly independent elements of the Clifford algebra, and hence, using Burnside's theorem, to find that for the Clifford algebra given by $$\gamma_{(\mu} \gamma_{(\nu} = 2g_{\mu\nu} I_n$$ "n" MUST be 4.
10. Oct 28, 2004
dextercioby
sorry
that should have been of course:
$$\gamma_{(\mu} \gamma_{\nu)} = 2g_{\mu\nu} I_n$$
I'm still a novice in editing TEX :tongue2:
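As a quick numerical sanity check of these relations for n = 4 (an editorial sketch in Python; the Dirac representation and the metric convention diag(+1, -1, -1, -1) are assumed, and all variable names are mine):

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, sigma_i], [-sigma_i, 0]]
gammas = [np.block([[I2, Z2], [Z2, -I2]])]
gammas += [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for mu in range(4):
    assert abs(np.trace(gammas[mu])) < 1e-12            # each gamma is traceless
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("tracelessness and Clifford relations verified for n = 4")
```

Any odd-sized traceless guess necessarily breaks one of these asserts, in line with the even-dimension argument discussed above.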
11. Oct 28, 2004
Staff Emeritus
Sure, Schur. Schur's lemma is ordinarily taught as a part of modern undergraduate courses in linear algebra. Not always proved, but stated and used.
12. Oct 28, 2004
Kane O'Donnell
Thank you to everyone that replied! This quote above is precisely what I was after, as I realised that if we could pass to a diagonal matrix we wouldn't have a problem. The problem, of course, is that since the gamma matrices are not self-adjoint, they don't (to my knowledge) have a basis of eigenvectors, and hence there was no diagonal representation of each of the gammas as linear transformations. If there *is* a way to pass to a diagonal form, then my issue is resolved.
Thanks again,
Kane
13. Nov 2, 2004
Kane O'Donnell
Ok. I have been looking at this problem quite a bit in order to find a good balance between simplicity and rigour in making this argument. So, my question is - is the following correct?
1. The operators $$\gamma_{\mu}$$ together with their anticommuting property have the structure of a Clifford algebra. We assume the Clifford algebra is finite dimensional and over some complex space (can't justify it, don't know enough about Clifford algebras - help?). Hence it is isomorphic to a matrix algebra over R, C or H, and we have a matrix representation. Each gamma operator's matrix must be traceless (sum of eigenvalues is zero).
2. From the four operators you can (by multiplication) construct exactly 16 operators. Using the traceless property one can show the 16 operators are linearly independent. As such the dimension of the Clifford algebra is 16. This implies the underlying vector space has dimension 4 (16 = 2^4). Therefore our Clifford algebra is over a 4D vector space and (since n is even) is isomorphic to the 4x4 complex matrix algebra.
The reason I've taken this approach is that I still can't justify to myself that the gammas are diagonalisable directly, hence I can't use the simple trace/eigenvalue pair argument.
Kane
Last edited: Nov 2, 2004
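A quick editorial aside, for what it's worth: the even-dimension claim can be seen without diagonalising anything, by taking determinants in the anticommutation relation for two distinct gammas (n denotes the matrix size):
$$\gamma_{\mu}\gamma_{\nu} = -\gamma_{\nu}\gamma_{\mu} \;(\mu \neq \nu) \;\Rightarrow\; \det(\gamma_{\mu})\det(\gamma_{\nu}) = (-1)^{n}\det(\gamma_{\nu})\det(\gamma_{\mu}),$$
and since each $$\det(\gamma_{\mu}) \neq 0$$ (because $$\gamma_{\mu}^{2} = \pm I_{n}$$), this forces $$(-1)^{n} = 1$$, i.e. n must be even.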
14. Nov 3, 2004
dextercioby
Actually Burnside's theorem implies: $$4=\sqrt{16}$$. Of course, you don't need Burnside's theorem to tell you that $$4=\sqrt{16}$$; it's just that it makes the connection between the dimension of an irreducible representation and the number of (linearly independent) generators of the Clifford algebra.
15. Nov 3, 2004
Kane O'Donnell
http://en.wikipedia.org/wiki/Clifford_algebra
Under Bases and Dimension it gives the dimension of a Clifford algebra to be 2^n, where n is the dimension of the underlying vector space.
Of course, wiki isn't the greatest source in the world. The point is, to use Burnside's theorem convincingly (to myself) I would have to know a lot more about the underlying maths, which unfortunately I don't (although that will be fixed over the Dec/Feb summer hols).
Cheerio,
Kane
https://www.physicsforums.com/threads/need-help-applying-kirchhoffs-voltage-law.673709/
# Need help applying Kirchhoff's voltage law
1. Feb 22, 2013
### InvalidID
I made a circuit that matches the attached figure. Then, I measured values of voltage and current and I ended up with the attached table. Now I'm applying KVL to both loops.
For the first loop:
$$-23.9+13.981+10.120≅0\\ 0.201≅0$$
For the second loop:
$$10.120+4.118+6≅0$$
What am I doing wrong for the second loop?
#### Attached Files:
File size:
27.4 KB
Views:
178
• ###### Data.png
File size:
3.4 KB
Views:
153
2. Feb 22, 2013
### Staff: Mentor
Mark the polarities of the voltages you measured on the circuit diagram. When you do a "KVL walk" around a loop, take into account these polarities; does the potential drop or rise when you "walk over" a given component along your path?
3. Feb 22, 2013
### tiny-tim
Hi InvalidID!
You haven't drawn any arrows on your diagram, to show the direction of the current.
So how do you know whether to use plus or minus for the voltage drops across each resistor?
Apply KCL to node 2
4. Feb 22, 2013
### InvalidID
I thought resistors don't have polarities?
I assumed that the current flows in clockwise direction.
KCL applied to node 2:
0.601+4.594=5.195
5.195=5.195
Edit: I think I might be mixing up mesh analysis with KVL. You can only apply KVL to the large loop, right? You can't apply it to the individual meshes, correct?
Last edited: Feb 22, 2013
5. Feb 22, 2013
### tiny-tim
Hi InvalidID!
You can apply KVL to any loop.
In this case, you can apply KVL to all three loops (the outside one, and the two small ones), but the KVL equation for the two small ones will add up to the KVL equation for the large one, so you only have two independent KVL equations.
ok, now do KVL for two loops, and draw in those arrows!!
6. Feb 22, 2013
### Staff: Mentor
Resistors themselves do not have polarities. However, when a current flows through a resistor, the potential drops in the direction of current flow. You may have noticed when you were measuring voltages across the resistors that you would see a positive value with the meter leads placed in one orientation, and a negative value (same magnitude) if the leads were reversed. So when you measure the voltage across a resistor, you should take note of the polarity you see since that will also tell you the direction that the current is flowing.
KVL can be applied around any closed path.
7. Feb 23, 2013
### InvalidID
I've applied KVL to all 3 loops but the equations of the smaller loops don't add up to the the equation of the larger loop.
#### Attached Files:
• ###### Untitled.png
File size:
30.3 KB
Views:
107
8. Feb 23, 2013
### Staff: Mentor
KVL states that the sum of the potential changes around a closed path (a loop) is zero. It doesn't say anything about a sum of the equations derived from this property.
9. Feb 23, 2013
### InvalidID
So I suppose you are disagreeing with tiny-tim? Did I setup the equations correctly?
10. Feb 23, 2013
### Staff: Mentor
I don't think I'm disagreeing with tiny-tim; the circuit admits two independent loops, since two loops (chosen appropriately) are sufficient to include every component of the circuit at least once. This means that if you use a third loop, its equation will be linearly dependent (mathematically speaking) on the other two. A straight sum of terms from two of the equations usually will not result in the third equation; some scaling of the equations might be required (multiplication by constant values).
Last edited: Feb 23, 2013
11. Feb 23, 2013
### InvalidID
Alright, but if I input the values into the KVL equation for the loop on the right, I get:
$$10.120+4.118+6≅0$$
which isn't correct. :S
12. Feb 23, 2013
### Staff: Mentor
Then you have a sign issue with the terms. Did you mark in the polarities of the potential drops due to the currents and take them into account when you wrote the KVL expression?
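To make the sign issue concrete, here is a small check using the measured magnitudes from the table (an editorial sketch: I am assuming the 10.120 V element is the branch shared by the two loops, so the right-loop walk traverses it against its drop):

```python
# Measured magnitudes from the thread's table (volts)
E, V1, V2, V3, V4 = 23.9, 13.981, 10.120, 4.118, 6.0

left_loop = -E + V1 + V2     # walk clockwise: source rise, then two drops
right_loop = -V2 + V3 + V4   # shared element traversed against its polarity
print(left_loop, right_loop)  # 0.201 and -0.002: both ~0 up to measurement error
```

With the polarity of the shared element respected, both loop sums close to within the meter's error, which is exactly the point being made here.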
13. Feb 23, 2013
### tiny-tim
Hi InvalidID!
No you haven't, you've written equations like E - R1 - R2 = 0.
Sorry, but that is nothing like Kirchhoff's law.
KVL requires you to add the potential drops across all the components in the loop (and the emf).
The potential drop is IR, not R, and you multiply it by 1 or -1 depending on the direction of the current.
Write i1 i2 and i3 on your diagram, with arrows specifying a direction for each current, then write out the three loop equations.
With 3 loops, the sum or difference of 2 KVL equations will always equal the third.
(so long as you don't multiply one by a factor for no particular reason)
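A symbolic sketch of that dependence claim (hypothetical two-mesh circuit; the symbols E, R1, R2, R3 and the currents are mine, not values from this thread):

```python
import sympy as sp

i1, i2, i3, E, R1, R2, R3 = sp.symbols('i1 i2 i3 E R1 R2 R3')

left  = -E + i1*R1 + i2*R2    # KVL around the left loop
right = -i2*R2 + i3*R3        # KVL around the right loop
outer = -E + i1*R1 + i3*R3    # KVL around the outside loop
print(sp.simplify(left + right - outer))   # 0: only two equations are independent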
14. Feb 23, 2013
### InvalidID
I think I got it now. Is this correct?
#### Attached Files:
• ###### Untitled.png
File size:
17.2 KB
Views:
101
15. Feb 23, 2013
### tiny-tim
yes, that looks ok now
always draw the arrows for KVL like that!
(i think you were getting confused with mesh analysis, where there's one circular arrow for the whole of each loop)
16. Feb 23, 2013
### InvalidID
Kind of embarrassing question, but I forget why VE is negative in the main loop.
17. Feb 23, 2013
### tiny-tim
why shouldn't it be?
if you go clockwise round both the left and the main loop, why would you make the two VEs different?
18. Feb 23, 2013
### InvalidID
Well, we would want to be consistent so they would either both be positive or negative. We're going from the negative end to the positive end of the battery but why does that give negative voltage? Because the positive end is 15V higher than the negative end? So going from negative to positive, it would be -15V?
Also, VR2 is negative in the right loop because we're going in the clockwise direction when doing KVL which is in the opposite direction of i2, right?
19. Feb 23, 2013
### tiny-tim
i can never remember which way round the battery works!
but i'm not taking the exams, so i don't need to!
you'll just have to remember it
(btw, you can use the X2 button twice: VR2: isn't that cute? )
yes, i2 is positive in one loop but negative in the other loop, because the arrows go opposite ways
(you get the same thing in mesh analysis: the arrows always go different ways for any section that's in two loops)
https://www.physicsforums.com/threads/newtonian-mechanics-question.166982/
# Newtonian Mechanics Question
1. Apr 22, 2007
### robhlee
Hello,
Say you have a frictionless setting. In this setting are two skateboards. One has a waterwheel (or any wheel with fins) propped up on beams so that it is on the skateboard and can freely turn. On the other skateboard is a person standing on it. If the skateboards are one behind the other (like train carts) and the person punches the wheel on a fin so that the wheel spins, what will happen?
In the instant of physical interaction (when the person punches the wheel) the fist/person is exerting a force on the fin, and, according to Newton's Third Law, the fin is exerting a force on the fist/person.
Correct?
If so, since the fin is part of a wheel, the force exerted by the person turns into centripetal acceleration and there is no net force on the waterwheel-skateboard apparatus. Meanwhile, there is a net force being exerted toward the person.
Correct?
If so, the person will be moved and the waterwheel-skateboard will remain stationary relative to its environment, but the water wheel will spin.
Please post any flaws or fallacies in theory you see in this situation.
:rofl: :rofl: :surprised :tongue2: :tongue:
2. Apr 22, 2007
### Staff: Mentor
No. You are missing the force that the skateboard exerts on the person to keep the person from moving. The skateboard does not move.
3. Apr 22, 2007
### robhlee
The only force the skateboard could exert would be friction. This is frictionless. Unless you meant inertia, but that's not a force. Force overcomes inertia.
Last edited: Apr 22, 2007
4. Apr 22, 2007
### robhlee
So, russ_watters, you're saying that if someone flat-out pushes you, you won't move because the skateboard is exerting a force back? Is that what you're saying?
As far as the situation above is concerned, the person and the skateboard are one thing. If you don't agree, then say a mutant human with wheels on their feet replaces the person on the skateboard.
5. Apr 22, 2007
### Staff: Mentor
By "frictionless" I assume you mean no friction between wheels and floor.
Right! They both accelerate in opposite directions.
Huh? You just pushed on the waterwheel-skateboard--of course there's a net force on it.
The same net force is exerted on both.
Nope.
The tricky thing is that it will be difficult to exert a force on that spinning wheel, but if you do, it will accelerate.
6. Apr 22, 2007
### Crosson
I guess that:
is the crux of your discussion. It sounds similar to various (fallacious) "get rid of the force" ideas I used to have before getting formal training in physics.
I think the misunderstanding occurs in the difference between force and energy; energy is conserved and force is not. If somehow all of the energy of the punch went into the wheel, there would be no kinetic energy to move the person, but the punch is a net force and so it has to move the COM of the person attached to the flywheel.
7. Apr 22, 2007
### robhlee
Crosson, what do you mean by "COM"?
8. Apr 22, 2007
### robhlee
Hey guys,
What I meant by "no net force on the waterwheel-skateboard apparatus" was that there is no net force relative to environment. There is a force, but it is converted to centripetal force, since the fin is part of a wheel, I think.
Doc Al, I know there is a net force exerted on the fin and the fist, HOWEVER, the fin is part of a wheel and thus centripetal acceleration occurs when force is applied to the fin, so looking at the waterwheel/skateboard AS A WHOLE, there is no net DIRECTIONAL force.
Last edited: Apr 22, 2007
9. Apr 22, 2007
### robhlee
Doc Al, the punch is nearly instantaneous, just one punch. (In response to your last comment on your last post)
10. Apr 22, 2007
### robhlee
Wait, wait: "PERSON ATTACHED TO FLYWHEEL"? What do you mean Crosson?
11. Apr 22, 2007
### robhlee
Oh and Crosson, there is no force disappearing. Opposite and equal reactions on fin and person. The main question is, is the reaction onto the fin "converted" into a non-directional centripetal force?
12. Apr 22, 2007
### robhlee
Doc Al, by "frictionless", I mean "frictionless".
13. Apr 22, 2007
### robhlee
By "accelerate", do you mean the wheel's spinning or the movement of the waterwheel-skateboard?
14. Apr 22, 2007
### robhlee
sorry for the numerous posts, guys. Also I appreciate all your help.
15. Apr 22, 2007
### Staff: Mentor
Sure there's a net horizontal force on both. And forces don't "convert". True, if the wheel spins there'll be a centripetal force acting on all parts of it--but that's not the force that you hit it with.
Nope. Did you exert a horizontal force on it? Yes. Then there's a net force on it AS A WHOLE. Regardless of whether the wheel spins or not.
So?
16. Apr 22, 2007
### Staff: Mentor
I mean that the center of mass (COM) of the system will accelerate--the whole waterwheel-skateboard system.
17. Apr 22, 2007
### robhlee
Doc Al,
I know forces don't "convert", I felt iffy typing that word :), but the centripetal force is the result of the force on the fin.
Say the wheel was locked, so it didn't spin. Comparing locked wheel punching and unlocked wheel punching, would you get the same result?
Oh, I said the punch was instantaneous just for your clarification. It seemed my first description was not clear enough.
18. Apr 22, 2007
### Staff: Mentor
The spinning is the result of the impulse you delivered with your punch.
Sure, if you delivered the same impulse--the same force for the same time. But you'd most likely be able to exert more force on the locked wheel.
Regardless: However you did it, if you delivered a horizontal impulse to the waterwheel-skateboard system, its total momentum will change--its center of mass will move. No way around it.
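A numerical sketch of this point (all values hypothetical; the punch is idealised as a horizontal impulse J delivered at the wheel's rim, with frictionless ground and axle):

```python
J = 10.0           # impulse of the punch, N*s (assumed)
M = 8.0            # mass of wheel + skateboard system, kg (assumed)
I_wheel = 0.5      # wheel's moment of inertia about its axle, kg*m^2 (assumed)
r = 0.25           # radius at which the fin is struck, m (assumed)

v_com = J / M              # recoil speed of the system's centre of mass
omega = J * r / I_wheel    # spin acquired by the wheel about its axle
print(f"v_com = {v_com:.3f} m/s, omega = {omega:.3f} rad/s")
```

The same punch supplies both the linear impulse (J = M v_com) and the angular impulse about the axle (J r = I omega); the spin does not "use up" the push.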
19. Apr 22, 2007
### robhlee
Hey thanks. I dont wanna waste your time anymore, so could you tell me where I can get more information related to this on center of mass?
20. Apr 22, 2007
### robhlee
Wait one second... if you say the horizontal force on a locked and an unlocked wheel would result in the same effect, then in the unlocked wheel at any instant of movement there is an opposite side with equal momentum acting in the opposite direction... so is there a net momentum?
https://cdn-macroaxis.netdna-ssl.com/invest/technicalIndicator/QADB.BE/Standard-Deviation
# Standard Deviation
The Standard Deviation is a measure of how spread out the prices or returns of an asset are on average. It is the most widely used risk indicator in the field of investing and finance. Standard Deviation is commonly used to measure confidence in statistical conclusions regarding certain equity instruments or portfolios of equities.
## Standard Deviation = SQRT(V)
Standard deviation is applied to the annual rate of return of an investment to measure the investment's volatility. Standard deviation is also known as historical volatility and is used by investors as a gauge for the amount of expected market volatility. A large standard deviation usually indicates that the data points are far from the mean and a small standard deviation indicates that they are clustered closely around the mean.
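As a minimal illustration of the formula above (a sketch with hypothetical monthly returns; ddof=1 selects the sample variance):

```python
import numpy as np

returns = np.array([0.021, -0.013, 0.034, 0.008, -0.027, 0.015])  # hypothetical

variance = returns.var(ddof=1)   # V, the sample variance
sigma = np.sqrt(variance)        # Standard Deviation = SQRT(V)
print(f"sigma = {sigma:.4f}")    # roughly 0.023, i.e. about 2.3% per month
```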
## Standard Deviation In A Nutshell
The more volatile a given equity instrument, the larger its standard deviation. Standard deviation helps money managers to capture the volatility of a portfolio in a single number. For most traded equities, future monthly returns are usually distributed within one standard deviation of their average return (68% of the time), and within two standard deviations 95% of the time.
The standard deviation is one of the main statistical indicators commonly used to measure confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. In finance and investing Standard Deviation is usually used to measure risk.
## Closer Look at Standard Deviation
Other deviation levels to watch out for are the 1.5 and 2 standard deviation levels. At 2 standard deviations, the likelihood that your data point occurs within 2 standard deviations increases to roughly 95%. Again, just like any tool, this may not be 100% accurate, but it has certainly proven true more times than not. Using standard deviation is simple statistics, and it takes emotion out of the picture. Another way people use standard deviation is to incorporate volume, which takes a little time to master, but is certainly possible. Identifying what tools to use for your investing needs can take time, but a standard deviation tool is one to keep your eye on. It is reliable compared to the others and has proven to be one of the more useful of the many that exist.
https://www.thejournal.club/c/paper/354372/
#### Fast and More Powerful Selective Inference for Sparse High-order Interaction Model
##### Diptesh Das, Vo Nguyen Le Duy, Hiroyuki Hanada, Koji Tsuda, Ichiro Takeuchi
Automated high-stake decision-making such as medical diagnosis requires models with high interpretability and reliability. As one of the interpretable and reliable models with good prediction ability, we consider Sparse High-order Interaction Model (SHIM) in this study. However, finding statistically significant high-order interactions is challenging due to the intrinsic high dimensionality of the combinatorial effects. Another problem in data-driven modeling is the effect of "cherry-picking" a.k.a. selection bias. Our main contribution is to extend the recently developed parametric programming approach for selective inference to high-order interaction models. Exhaustive search over the cherry tree (all possible interactions) can be daunting and impractical even for a small-sized problem. We introduced an efficient pruning strategy and demonstrated the computational efficiency and statistical power of the proposed method using both synthetic and real data.
https://www.iaa.es/seminars/so-iaa-colloquium-neutral-and-molecular-gas-outflows-tracers-impact-radio-jets
# SO-IAA Colloquium: Neutral and molecular gas outflows as tracers of the impact of radio jets
Our view of the gas and its physical conditions in the central region of AGN has been enriched by the discovery of fast and massive outflows of HI and molecular gas. These outflows can be driven by radiation/winds but also by the interaction of the radio plasma with the ISM. Understanding their origin and quantifying their impact requires tracing their location and deriving their physical conditions (density of the gas, mass, mass outflow rate, kinetic energy of the outflow, etc.). Particularly interesting has been the finding that in the first phase of their life, jets in radio galaxies can be particularly effective in driving such outflows. This crucial phase is at the heart of the idea of feedback, and is therefore particularly relevant for studying feedback in action.
In this talk, I will present some of the results we have obtained to trace jet-driven HI and molecular gas outflows down to scales ranging from hundred to tens of pc. The impact of low-power radio jets will be discussed and the comparison with the predictions from numerical simulations will also be presented.
Outflows of up to a few hundred Msun/yr have been found in molecular gas using ALMA, while the HI observed with VLBI is showing that the outflowing gas is clumpy, as also predicted by numerical simulations. I will describe the kinematics of the gas and its conditions and the relevance they may have for feedback.
Date:
31/10/2019 - 12:30
Speaker:
Dr. Raffaella Morganti
Affiliation:
ASTRON, the Netherlands Institute for Radio Astronomy, and Kapteyn Astronomical Institute, University of Groningen
http://clay6.com/qa/7684/find-the-magnitude-of-overrightarrow-times-overrightarrow-if-overrightarrow
# Find the magnitude of $\overrightarrow{a} \times \overrightarrow{b}$ if $\overrightarrow{a}=2\overrightarrow{i}+\overrightarrow{k}, \overrightarrow{b}=\overrightarrow{i}+\overrightarrow{j}+\overrightarrow{k}$
Toolbox:
• If $\overrightarrow a = a_1\overrightarrow i+a_2\overrightarrow j+a_3\overrightarrow k, \: \overrightarrow b = b_1\overrightarrow i+b_2\overrightarrow j+b_3\overrightarrow k$ then $\overrightarrow a \times \overrightarrow b = \begin{vmatrix} \overrightarrow i & \overrightarrow j & \overrightarrow k \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}$
$\overrightarrow a \times \overrightarrow b = \begin{vmatrix} \overrightarrow i & \overrightarrow j & \overrightarrow k \\ 2 & 0 & 1 \\ 1 & 1 & 1 \end{vmatrix} = (0-1)\overrightarrow i-(2-1)\overrightarrow j+(2-0)\overrightarrow k$
$= -\overrightarrow i-\overrightarrow j+2\overrightarrow k$
$|\overrightarrow a \times \overrightarrow b| = \sqrt{1+1+4} = \sqrt 6$
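A quick numerical check of this result (a sketch using numpy, with the vectors written in components):

```python
import numpy as np

a = np.array([2, 0, 1])      # 2i + 0j + 1k
b = np.array([1, 1, 1])      # 1i + 1j + 1k
c = np.cross(a, b)
print(c)                     # [-1 -1  2], matching -i - j + 2k
print(np.linalg.norm(c))     # 2.449..., i.e. sqrt(6)
```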
https://www.physicsforums.com/threads/dirac-proves-0-1.122063/
# Dirac Proves 0 = 1
1. May 26, 2006
### George Jones
Staff Emeritus
Dirac Proves 0 = 1
Suppose $A$ is an observable, i.e., a self-adjoint operator, with real eigenvalue $a$ and normalized eigenket $\left| a \right>$. In other words,
$$A \left| a \right> = a \left| a \right>, \hspace{.5 in} \left< a | a \right> = 1.$$
Suppose further that $A$ and $B$ are canonically conjugate observables, so
$$\left[ A , B \right] = i \hbar I,$$
where $I$ is the identity operator. Compute, with respect to $\left| a \right>$, the matrix elements of this equation divided by $i \hbar$:
$$\begin{equation*} \begin{split} \frac{1}{i \hbar} \left< a | \left[ A , B \right] | a \right> &= \left< a | I | a \right>\\ \frac{1}{i \hbar} \left( \left< a | AB | a \right> - \left<a | BA | a \right> \right) &= \left< a | a \right>. \end{split} \end{equation*}$$
In the first term, let $A$ act on the bra; in the second, let $A$ act on the ket:
$$\frac{1}{i \hbar} \left( a \left< a | B | a \right> - a \left<a | B | a \right> \right)= \left< a | a \right>.$$
Thus,
$$0 = 1.$$
This is my favourite "proof" of the well-known equation $0 = 1$.
What gives?
In order not spoil other people's fun, it might be best to put "spoiler" at the top of any post that explains what's happening.
Regards,
George
Last edited: Feb 28, 2012
2. May 26, 2006
I don't think you can do that because A and B don't commute?
3. May 26, 2006
### George Jones
Staff Emeritus
That step is OK.
One way to see this is to take |b> = A|a> and |c> = B|a>, and then to consider <b|c>.
Another way is to look at (AB)^* = B^* A^* = B A, which takes care of the order of the operators.
Regards,
George
4. May 26, 2006
Staff Emeritus
Isn't this the one about the domains of the operators?
5. May 26, 2006
### George Jones
Staff Emeritus
I don't think the problem is with domains. I think it is possible for the intersection of the domains of A, B, and [A , B] to be dense, and to still have the proof be "true".
Regards,
George
6. May 26, 2006
### waht
Interesting, but the proof is based on an assumption that A and B are canonically conjugate observables. Therefore 0=1 is constrained to that condition.
how does <a|[A,B]|a> = <a|AB|a> - <a|BA|a>
btw?
7. May 26, 2006
### Physics Monkey
Spoiler Below!
What a wonderful proof! I have never seen this one before, George. My discussion is below.
***SPOILER***
Think about the real line where we can represent the algebra by the usual quantum mechanical operators X and P. The key is to realize that X and P have no normalizable eigenvectors! The usual "normalization" for position "eigenstates" (lots of scare quotes) is $$\langle x | x' \rangle = \delta(x-x')$$, so let's have some fun with this formula. Since X and P are canonically conjugate we have that $$[X,P] = i \hbar$$, and we can take matrix elements of both sides. The right side is $$\langle x | i \hbar | x' \rangle = i \hbar \delta(x-x')$$. The left side is $$(x - x')( - i \hbar \frac{d}{dx} \delta(x-x'))$$ where I have used $$\langle x | P = - i \hbar \frac{d}{dx} \langle x |$$. Thus we appear to have stumbled onto the rather cute identity $$- x \frac{d}{dx} \delta(x) = \delta(x)$$. Go ahead, try it under an integral, it actually works! I love such silly little formulae between wildly singular objects.
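As a quick check of that identity (an editorial note: pairing with a smooth test function $$f$$ of my choosing and integrating by parts):
$$-\int f(x)\, x\, \delta'(x)\, dx = \int \frac{d}{dx}\bigl[x f(x)\bigr]\, \delta(x)\, dx = f(0) = \int f(x)\, \delta(x)\, dx,$$
so the two distributions really do agree.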
A further amusing challenge:
It isn't always true that the derivative operator has no eigenstates. Suppose you look at the derivative operator on a finite interval. It turns out that the Neumann indices are (1,1), and thus self adjoint extensions exist which are parameterized by a phase (the boundary condition). One can now find proper eigenfunctions and eigenvalues for a given self adjoint extension of the derivative operator. Are we therefore back to proving that 0 = 1 or what?
Last edited: May 26, 2006
8. May 26, 2006
### mikeu
Except you can always find such A and B, so you can always find 0=1..... :)
It's just the definition of the commutator and linearity of the inner product:
$$\langle a | [A,B] | a\rangle = \langle a | (AB-BA) | a \rangle = \langle a | AB | a \rangle - \langle a | BA | a \rangle$$.
*** SPOILER cont. ***
I suspected (based on X and P ) that delta distributions would enter into it, since we end up with $$\frac{1}{i\hbar}(a-a)\langle a|B|a\rangle=1$$ so it is clear that $$\langle a|B|a\rangle$$ must be ill-defined (i.e. infinite) to get something like "$$0\cdot\infty=1$$." Recovering the definition of the derivative of the delta was neat. What I still don't see though is what the flaw in the proof is in the case of discrete operators...?
George, I thought of another 'interpretation' of the 'proof' too: you could prove 0=ih => h=0 => things aren't quantized
9. May 26, 2006
### George Jones
Staff Emeritus
So, you want to take A = P and B = X for the Hilbert space of square-integrable functions on the closed interval [0 , 1], say.
SPOILER for Physic Monkey's Challenge.
It looks like, appropriately, selfAdjoint was right - domains are important. For the operator PX, operating by X on an eigenfunction of P results in a function that is not in the domain of self-adjointness for P, so P cannot be moved left while remaining P.
Easy direct calculations in this example reveal a lot.
As I said in another thread, if A and B satisfy [A , B] = ihbar, then at least one of A and B must be unbounded. In the example of functions on the whole real line, both X and P are unbounded, while for functions on [0 ,1], X is bounded and P is unbounded.
Regards,
George
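To spell out the interval case referred to above (an editorial sketch; units with $$\hbar = 1$$ and the interval [0, 1] are assumed): each self-adjoint extension is
$$P_{\alpha} = -i\frac{d}{dx}, \qquad \psi(1) = e^{i\alpha}\psi(0),$$
with proper eigenfunctions and eigenvalues
$$\psi_{n}(x) = e^{i(2\pi n + \alpha)x}, \qquad p_{n} = 2\pi n + \alpha, \quad n \in \mathbb{Z}.$$
Note that $$X\psi_{n}$$ fails the boundary condition, since $$(X\psi_{n})(1) = e^{i\alpha}\psi_{n}(0) \neq e^{i\alpha}\,(X\psi_{n})(0) = 0$$; acting with X throws the state out of the domain of $$P_{\alpha}$$, which is exactly the step where the "proof" breaks.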
10. May 27, 2006
### George Jones
Staff Emeritus
Time to come clean!
I lifted this example (and added a little elaboration) from Chris Isham's nice little book Lectures on Quantum Theory: Mathematical and Structural Foundations.
Very interesting discussion!
Regards,
George
11. May 28, 2006
### Physics Monkey
Very good, George. The commutator is indeed ill defined on the momentum eigenstates.
Well then, I think I might have to take a look at Isham's book.
Thanks for the interesting post!
P.S. To all you readers out there, I can't resist telling about some nice physical applications of such ideas. It turns out that the self adjoint extensions of the momentum operator on a finite interval describe physically the problem of a particle on a ring with a magnetic field through the ring. This is in turn equivalent to imposing a 'twisted' boundary condition $$\psi(x+L) = e^{i \alpha} \psi(x)$$ on the wavefunction for a particle on a ring with no magnetic field. But there's more! Impurities in a metal can localize electronic states and cause a metal to become an insulator. One way to tell if you have localized states is to look at how sensitive such states are to the boundary conditions of your sample. The above ideas can then be applied, and you can relate the question of localization to the behavior of the system under an applied magnetic field (a problem which can be attacked with perturbation theory). And you thought self adjoint extensions were dull! Shame on you. :tongue:
12. May 30, 2006
### reilly
First, if "0 = 1" is true then QM completely falls apart, sorta like proof by contradiction, and "0=1" is certainly a contradiction. That tells me that the various proofs must be incorrect, or most physicists have been living like Alice in Wonderland.
The problem is that P X | x> is not equal to P|x> x. As in, go to an x position representation in which P = -i d/dx. That is,
P X |x> = -i d/dx x |x> = (-i + {-i x d/dx})|x>
Delta functions and domains are not at issue.
Sometimes abstraction can lead even the best astray.
Think about Wick's Thrm, which would not hold if "0 =1" were true, nor would many standard manipulations of creation and destruction operators be legitimate. .
(For the abstract truth about momentum operators see Hille and Phillips, Functional Analysis and Semi-Groups, Chap. XIX, which discusses translation operators (d/dx) in great and highly rigorous detail. The authors demonstrate that there really is not a problem with such operators.)
Again, if "0=1" then QM is inherently mathematically untrustworthy, which seems to me to be a completely absurd idea.
Regards,
Reilly Atkinson
13. May 30, 2006
### Hurkyl
Staff Emeritus
Any notation can lead people astray. But abstraction has the advantage that there are fewer messy details, which means less opportunities to make mistakes, and less possibility for those mistakes to be obscured.
Avoiding abstraction certainly doesn't prevent one from making mistakes...
such as overworking your variables. The x in d/dx is not the same as the x in |x>; the former is the coordinate variable of the position representation, and the latter is a constant denoting which position eigenstate we've selected.
If I relabel the variables so x is no longer being overworked, we're looking at -i d/dx x |a>. (And don't forget that x |a> = a |a>)
You could rewrite George's entire post in the A-representation (so that A = x, and B = -ih d/dx), but that doesn't resolve the paradox: you still wind up with 0 = 1.
That's not accurate: if 0=1 were true, then everything is true. (And simultaneously false)
I'm completely confused by this.
Last edited: May 30, 2006
14. May 31, 2006
### Gokul43201
Staff Emeritus
Can we go over this again, slowly? This is something that has bothered me for a little while.
Is L the circumference of the ring? Does this not destroy the single-valuedness of $\psi(x)$? Or is that what is being probed?
I think I've drunk too deep from the cup of Periodic BCs, what with all the goodies like flux quantization in SCs and Brillouin zones in crystals that it has thrown up like so many marshmallows!
Let's start with a simple case: the Anderson Hamiltonian for non-interacting electrons in a cubic lattice.
The Hamiltonian consists of your favorite on-site disorder potential and the usual hopping term (nn, say). You then apply the above boundary condition to the single-particle eigenfunction in one or more directions. Ignoring what this means for now, this allows you to Taylor expand the eigenvalues $E_i(\alpha)$ and look at the coefficients of higher order terms in $\alpha$. The deviations of these coefficients from 0 are what you call the phase sensitivity? If that's true, how exactly is this a "measure" of localization? Is the point to extract a dimensionless number (like T/U) and look for a scaling law? And if not, what happens next?
Last edited: May 31, 2006
15. May 31, 2006
### lalbatros
George Jones,
Physics Monkey,
Would that "0=1" contradiction be a proof that no finite-dimensional matrices could satisfy the commutation relation $$\left[ A , B \right] = i \hbar I$$ ?
Would it be possible to see that easily for two-dimensional matrices?
Michel
16. May 31, 2006
### Haelfix
The problem is indeed one of domains of definition; it's the last step in the sequence that is erroneous. Ask yourselves: what *is* the operator AB or BA, and where and on what are they defined?
Most of this is easily demystified if you recall the spectral theorem. For general operators, you are usually confronted not just with discrete or continuous spectra; instead you have that plus a bunch of other stuff, often called the residual spectrum. All bets are off when confronted with this; you can't just use naive physicist language of functional analysis in those cases.
17. May 31, 2006
### mikeu
It does appear to be a proof by contradiction, at least for observables. If A is not Hermitean then $$\langle a|A=(A^\dagger|a\rangle)^\dagger\neq (A|a\rangle)^\dagger$$, so acting A to the left in the term $$\langle a|AB|a\rangle$$ doesn't yield $$a\langle a|B|a\rangle$$, as required to obtain the contradiction 0=1. Your conclusion is correct anyway, it's just not proven by this example (unless I missed something else, it is late...).
For 2D matrices, if A and B are completely arbitrary then
$$A=\left(\begin{array}{cc} a & b \\ c & d\end{array}\right),\ B=\left(\begin{array}{cc} w & x \\ y & z\end{array}\right),\ AB-BA=\left(\begin{array}{cc} bz-cy & b(w-x)+y(a-d) \\ c(x-w)+z(d-a) & -(bz-cy)\end{array}\right).$$
Since the 1-1 entry is the negative of the 2-2 entry, this can never be proportional to the identity matrix.
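The same conclusion holds in any finite dimension, since the trace of a commutator always vanishes while tr(ihbar I_n) = ihbar n is not zero. A one-line sympy check (my own sketch; the symbol matrices are fully generic):

```python
import sympy as sp

n = 4                                   # any finite dimension works here
A = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'a{i}{j}'))
B = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'b{i}{j}'))
print(sp.expand(sp.trace(A*B - B*A)))   # 0, so [A,B] = i*hbar*I is impossible
```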
18. May 31, 2006
### Physics Monkey
The proofs that 0 = 1 are certainly incorrect!
I'm afraid this isn't true. It doesn't matter what P is, if X hits the state $$|x\rangle$$ first, then you can replace X with x.
No, these things really are the relevant issues.
19. May 31, 2006
### Physics Monkey
Hi Gokul,
Do you want to hear just the story about the application to localization, or the whole story including the explanation of the paradox? I'm just gonna talk about localization for the moment, but I'm happy to say something else if you want.
To start with, the physical system is a piece of material in d dimensions of typical size L. I'll talk in one dimensional terms because this is easiest to understand, but the theory generalizes easily. The physical geometry is not periodic, although what we will eventually imagine is putting lots of these intervals of length L next to each other. As I indicated above, it is a technical fact that the momentum operator $$P = - i \frac{d}{dx}$$ is not self adjoint on such a finite interval. This is easy to understand from the fact that the equations $$P \psi = \pm i k \psi$$ have perfectly good solutions in the Hilbert space. As an aside, notice how this situation is modified if the interval is infinite $$(-\infty, \infty)$$. In this case, neither equation has a solution in the Hilbert space (of square integrable functions), and the momentum operator on the real line is called essentially self adjoint.
There is then some mathematical procedure for fixing the momentum operator up by defining what is called a self adjoint extension. This extension is characterized by a phase which can be identified with the boundary condition of your sample $$\psi(x+L) = e^{i \alpha} \psi(x)$$. In other words, your new fixed up momentum operator only makes sense on functions that satisfy this property. The physical geometry isn't periodic so there really aren't any issues about multivaluedness here. My original description is somewhat confusing on this point, so sorry about that. You can think of this phase as more like the Bloch factor $$e^{i k a}$$ that obtains from translating a Bloch state by one lattice spacing. The important physical realization is that this weird phase factor can be mapped via a gauge transformation to the problem of a particle on a ring with periodic boundary conditions and a magnetic field. The wavefunction can be thought of as being multivalued, with branches labeled by the winding number, but you are protected from unpleasantness by gauge invariance. Mmmmm marshmallows.
Now for some physics. The Anderson model is a good place to start, and your understanding is quite right. This twisted boundary condition is imposed with the physical idea that somehow sensitivity to boundary conditions will tell you whether states are extended or localized. You then map the problem to the equivalent system on a ring with a magnetic field. The vector potential is propotional to $$\alpha$$, and it makes sense to do perturbation theory in order to understand how the twisted boundary condition effects states. You would be interested in comparing something like the variance of $$\frac{\partial^2 E_i(\alpha)}{\partial \alpha^2}$$ to the typical level spacing $$\Delta$$. For a given realization of disorder, you can work out something like $$\frac{\partial^2 E_i(\alpha)}{\partial \alpha^2} \sim \sum_{j \neq i} \frac{1}{L^2} \frac{|\langle i | P/m | j \rangle|^2}{E_i - E_j}$$ plus some constant term you don't really care about.
To estimate the variance we can replace the energy denominator with the level spacing and the velocity matrix elements with some kind of typical velocity scale. Such matrix elements also enter into the Kubo formula for conductivity, so you can use the conductivity to estimate the typical matrix element. The Einstein formula for conductivity is $$\sigma = 2 e^2 N(0) D / \Omega$$ where D is the diffusion constant, and with the help of the Kubo formula you can easily estimate $$v^2 \sim D / N(0) = D \Delta$$ where $$N(0)$$ is the density of states per unit energy at the Fermi surface. The variance is then simply $$D/L^2$$ which is called the Thouless energy $$E_T$$. The sensitivity to boundary conditions is given in terms of the ratio $$\frac{E_T}{\Delta}$$.
Actually, this ratio has a very direct physical meaning. Go back to the Einstein formula for the conductivity. Freshman physics says the conductance is related to the conductivity by $$G(L) = \sigma L^{d-2}$$. We can define a dimensionless conductance $$g(L)$$ by multiplying the conductance by the resistance quantum $$\sim 1/e^2$$. The result is that the dimensionless conductance is given by $$g(L) \sim \frac{N(0)}{L^d} D L^{d-2} = \frac{D}{L^2} \frac{1}{\Delta} = \frac{E_T}{\Delta}$$. So the sensitivity to boundary conditions is just determined by the dimensionless conductance $$g$$! The physical picture is now quite nice. If we have localized states then the system is an insulator and g should be very small which we interpret as saying the states are insensitive to boundary conditions. If we have extended states then the system is a metal and g should be large which we interpret as pronounced sensitivity to boundary conditions.
With the understanding that localized and extended states can be characterized in terms of the dimensionless conductance, we can now build our scaling theory and get all the usual fun results. The added bonus here is that you can solve the Anderson model easily enough and directly compute the Thouless energy and level spacing. It's especially easy in d = 1, and you can verify the prediction of the scaling theory that g always goes to zero as L grows large. And all this can be phrased in the language of self adjoint extensions!
Refs:
Thouless has a number of papers on this sensitivity to boundary conditions idea. The book by Imry also has something about this in it as I recall.
Last edited: May 31, 2006
20. Jun 1, 2006
### Gokul43201
Staff Emeritus
Thanks for the response, PM.
I've only gotten half-way through it, and won't likely find more time until later tonight, but I wanted to let you know that I've seen this.
I'll reply later. And if this is distracting (for others) from the rest of the thread, I could request that it be split off into a new thread.
PS: Most everything Thouless has written in this field involves the RG!
https://artofproblemsolving.com/wiki/index.php?title=1955_AHSME_Problems&oldid=121999
1955 AHSME Problems
1955 AHSC (Answer Key)
Instructions: This is a 50-question, multiple choice test. Each question is followed by answers marked A, B, C, D and E. Only one of these is correct. You will receive ? points for each correct answer, ? points for each problem left unanswered, and ? points for each incorrect answer. No aids are permitted other than scratch paper, graph paper, ruler, compass, protractor and erasers. Figures are not necessarily drawn to scale. You will have ? minutes working time to complete the test.
Problem 1
Which one of the following is not equivalent to ?
Problem 2
The smaller angle between the hands of a clock at p.m. is:
Problem 3
If each number in a set of ten numbers is increased by , the arithmetic mean (average) of the ten numbers:
Problem 4
The equality is satisfied by:
Problem 5
varies inversely as the square of . When . When equals:
Problem 6
A merchant buys a number of oranges at for cents and an equal number at for cents. To "break even" he must sell all at:
Problem 7
If a worker receives a % cut in wages, he may regain his original pay exactly by obtaining a raise of:
Problem 8
The graph of :
Problem 9
A circle is inscribed in a triangle with sides , and . The radius of the circle is:
Problem 10
How many hours does it take a train traveling at an average rate of 40 mph between stops to travel a miles if it makes n stops of m minutes each?
Problem 11
The negation of the statement "No slow learners attend this school" is:
Problem 12
The solution of is:
Problem 13
The fraction is equal to:
Problem 14
The length of rectangle is % more than the side of square . The width of the rectangle is % less than the side of the square. The ratio of the areas, , is:
Problem 15
The ratio of the areas of two concentric circles is . If the radius of the smaller is , then the difference between the radii is best approximated by:
Problem 16
The value of when and is:
Problem 17
If , then equals:
Problem 18
The discriminant of the equation is zero. Hence, its roots are:
Problem 19
Two numbers whose sum is and the absolute value of whose difference is are roots of the equation:
Problem 20
The expression equals zero for:
Problem 21
Represent the hypotenuse of a right triangle by and the area by . The altitude on the hypotenuse is:
Problem 22
On a $10,000 order a merchant has a choice between three successive discounts of %, %, and % and three successive discounts of %, %, and %. By choosing the better offer, he can save:
Problem 23
In checking the petty cash a clerk counts quarters, dimes, nickels, and cents. Later he discovers that of the nickels were counted as quarters and of the dimes were counted as cents. To correct the total obtained the clerk must:
Problem 24
The function :
Problem 25
One of the factors of is:
Problem 26
Mr. A owns a house worth . He sells it to Mr. at % profit. Mr. sells the house back to Mr. at a % loss. Then:
Problem 27
If and are the roots of , then equals:
Problem 28
On the same set of axes are drawn the graph of and the graph of the equation obtained by replacing by in the given equation. If and these two graphs intersect:
Problem 29
In the figure, is tangent to semicircle ; is tangent to semicircle ; is a straight line; the arcs are indicated in the figure. is measured by:
Problem 30
Each of the equations has:
Problem 31
An equilateral triangle whose side is is divided into a triangle and a trapezoid by a line drawn parallel to one of its sides. If the area of the trapezoid equals one-half of the area of the original triangle, the length of the median of the trapezoid is:
Problem 32
If the discriminant of is zero, then another true statement about , and is that:
Problem 33
Henry starts a trip when the hands of the clock are together between a.m. and a.m. He arrives at his destination between p.m. and p.m. when the hands of the clock are exactly apart. The trip takes:
Problem 34
A -inch and -inch diameter pole are placed together and bound together with wire. The length of the shortest wire that will go around them is:
Problem 35
Three boys agree to divide a bag of marbles in the following manner. The first boy takes one more than half the marbles. The second takes a third of the number remaining. The third boy finds that he is left with twice as many marbles as the second boy. The original number of marbles:
Problem 36
A cylindrical oil tank, lying horizontally, has an interior length of feet and an interior diameter of feet. If the rectangular surface of the oil has an area of square feet, the depth of the oil is:
Problem 37
A three-digit number has, from left to right, the digits , and , with . When the number with the digits reversed is subtracted from the original number, the units' digit in the difference of r. The next two digits, from right to left, are:
Problem 38
Four positive integers are given. Select any three of these integers, find their arithmetic average, and add this result to the fourth integer. Thus the numbers , and are obtained. One of the original integers is:
Problem 39
If , then if the least possible value of is zero is equal to:
Problem 40
The fractions and are unequal if:
Problem 41
A train traveling from Aytown to Beetown meets with an accident after hr. It is stopped for hr., after which it proceeds at four-fifths of its usual rate, arriving at Beetown hr. late. If the train had covered miles more before the accident, it would have been just hr. late. The usual rate of the train is:
Problem 42
If , and are positive integers, the radicals and are equal when and only when:
Problem 43
The pairs of values of and that are the common solutions of the equations and are:
Problem 44
In circle chord is produced so that equals a radius of the circle. is drawn and extended to . is drawn. Which of the following expresses the relationship between and ?
Problem 45
Given a geometric sequence with the first term and and an arithmetic sequence with the first term . A third sequence is formed by adding corresponding terms of the two given sequences. The sum of the first ten terms of the third sequence is:
Problem 46
The graphs of , and intersect in:
Problem 47
The expressions and are:
Problem 48
Given with medians ; parallel and equal to ; are drawn; extended meets in . Which one of the following statements is not necessarily correct?
Problem 49
The graphs of and intersect in:
Problem 50
In order to pass going mph on a two-lane highway, , going mph, must gain feet. Meantime, feet from , is headed toward him at mph. If and maintain their speeds, then, in order to pass safely, must increase his speed by:
http://www.perimeterinstitute.ca/fr/seminar/black-hole-accretion-flows | Black Hole Accretion Flows
Two nearby, slowly accreting black holes have angular size large enough to be resolved by submillimeter wavelength interferometry. This motivated our development of ab initio dynamical and radiative models of the plasma surrounding the event horizon. I will describe the state of the art in these models, and in particular what is known about tilted disks, in which the black hole spin angular momentum is misaligned with the orbital angular momentum of the accreting plasma.
Event Type: Seminar
Event Date: Thursday, November 29, 2012 - 13:00 to 14:30
Location: The Space Room (400)
Room #: 400
https://math.stackexchange.com/questions/1770414/are-all-normal-subgroups-abelian | # Are all normal subgroups Abelian?
If $H \subset G$ is a normal subgroup of G,
=> $xHx^{-1} = H$ or $xH = Hx$ for all $x \in G$
=> $xH = Hx$ for all $x \in H$
Hence, all normal subgroups of a group are themselves Abelian?
Also, does that mean that all normal towers of subgroups are also Abelian?
• An easy counterexample is $A_4 \trianglelefteq S_4$ but $A_4$ is not abelian. – lokodiz May 3 '16 at 21:27
• One thing to think about is that H is a set, so it may be confusing whether you are thinking of elements in G or H. – wesssg May 4 '16 at 3:26
It is quite a common misunderstanding that $xH=Hx$ means that $xh=hx$ for any $h\in H$. This is generally false.
The assertion $xH=Hx$ means that
1. for every $h\in H$, there exists $h_1\in H$ with $xh=h_1x$
2. for every $h\in H$, there exists $h_2\in H$ with $hx=xh_2$
Consider the group $G=S_4$ and its normal subgroup $H=A_4$ (the even permutations). Since $[S_4:A_4]=2$, the subgroup $A_4$ is normal. However, taking $x=(12)$ and $h=(123)$, we have $$(12)(123)=(23)\ne(123)(12)=(13)$$ However, $(132)(12)=(23)$, so in this case $h_1=(132)\ne h$.
We can also find two elements in $A_4$ that don't commute: $$(123)(124)=(13)(24) \\ (124)(123)=(14)(23)$$
Note: the convention about function composition is the standard functional one, that is, on the left (think of $\circ$ between two cycles).
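Editorial addition, not part of the original answer: the products above are easy to check mechanically. The sketch below composes cycles with the same functional convention used in the answer, $(f \circ g)(x) = f(g(x))$, so the right-hand factor acts first:

```python
def cycle_to_map(cycle, n=4):
    """Turn a cycle such as (1, 2, 3) into a mapping on {1, ..., n}."""
    m = {i: i for i in range(1, n + 1)}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        m[a] = b
    return m

def compose(f, g):
    """Functional composition: (f o g)(x) = f(g(x)); g is applied first."""
    return {x: f[g[x]] for x in g}

f = cycle_to_map((1, 2, 3))  # the 3-cycle (123)
g = cycle_to_map((1, 2, 4))  # the 3-cycle (124)
x = cycle_to_map((1, 2))     # the transposition (12)

print(compose(x, f))  # (12)(123): {1: 1, 2: 3, 3: 2, 4: 4}, i.e. (23)
print(compose(f, x))  # (123)(12): gives (13)
print(compose(f, g))  # (123)(124): gives (13)(24)
print(compose(g, f))  # (124)(123): gives (14)(23)
```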
• That is exactly what I thought, but then realized that $xyx^{-1} = z$ for $y,z \in H$ does not imply $y = z$, rather $xyx^{-1} \in H$ – ixaxaar May 3 '16 at 21:42
• What notation is that? It doesn't seem to work if it's permutation cycle notation. – Mateen Ulhaq Jun 22 '17 at 8:39
• @MateenUlhaq I use composition in the standard direction, so $f=(123)$, $g=(124)$ and $f\circ g(1)=f(2)=3$ and so on. – egreg Jun 22 '17 at 8:42
• @egreg Ah OK, thanks. Is that a typical convention in group theory? – Mateen Ulhaq Jun 22 '17 at 8:48
• @MateenUlhaq Depending on the textbook it can be reversed. – egreg Jun 22 '17 at 8:49
No. $G$ is a normal subgroup of itself for any group $G$. Commutativity is a local property of a group (all the way down to the elements), whereas normality is more of a mid-scale property of a group (collections of elements, but not necessarily the whole group). Just because you permute the elements around the same way if you multiply on the left and right (in regards to normality) does not mean it is abelian.
• Got it, thanks! – ixaxaar May 3 '16 at 21:33
To get other counterexamples, $A$ and $B$ are normal subgroups in the direct product $A \times B$.
https://proceedings.mlr.press/v167/su22a.html | # Faster Rates of Private Stochastic Convex Optimization
Jinyan Su, Lijie Hu, Di Wang
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:995-1002, 2022.
#### Abstract
In this paper, we revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) and provide excess population risks for some special classes of functions that are faster than the previous results of general convex and strongly convex functions. In the first part of the paper, we study the case where the population risk function satisfies the Tsybakov Noise Condition (TNC) with some parameter $\theta>1$. Specifically, we first show that under some mild assumptions on the loss functions, there is an algorithm whose output could achieve an upper bound of $\tilde{O}((\frac{1}{\sqrt{n}}+\frac{d}{n\epsilon})^\frac{\theta}{\theta-1})$ and $\tilde{O}((\frac{1}{\sqrt{n}}+\frac{\sqrt{d\log(1/\delta)}}{n\epsilon})^\frac{\theta}{\theta-1})$ for $\epsilon$-DP and $(\epsilon, \delta)$-DP, respectively, when $\theta\geq 2$; here $n$ is the sample size and $d$ is the dimension of the space. Then we address the inefficiency issue, improve the upper bounds by $\text{Poly}(\log n)$ factors and extend to the case where $\theta\geq \bar{\theta}>1$ for some known $\bar{\theta}$. Next we show that the excess population risk of population functions satisfying TNC with parameter $\theta\geq 2$ is always lower bounded by $\Omega((\frac{d}{n\epsilon})^\frac{\theta}{\theta-1})$ and $\Omega((\frac{\sqrt{d\log(1/\delta)}}{n\epsilon})^\frac{\theta}{\theta-1})$ for $\epsilon$-DP and $(\epsilon, \delta)$-DP, respectively, which matches our upper bounds. In the second part, we focus on a special case where the population risk function is strongly convex. Unlike the previous studies, here we assume the loss function is non-negative and the optimal value of population risk is sufficiently small. With these additional assumptions, we propose a new method whose output could achieve an upper bound of $O(\frac{d\log(1/\delta)}{n^2\epsilon^2}+\frac{1}{n^{\tau}})$ and $O(\frac{d^2}{n^2\epsilon^2}+\frac{1}{n^{\tau}})$ for any $\tau> 1$ in the $(\epsilon,\delta)$-DP and $\epsilon$-DP model respectively, if the sample size $n$ is sufficiently large. These results circumvent their corresponding lower bounds in (Feldman et al., 2020) for general strongly convex functions. Finally, we conduct experiments of our new methods on real world data. Experimental results also provide new insights into established theories.
https://math.stackexchange.com/questions/1172119/how-to-prove-x3-y3-x-yx2xyy2-without-expand-the-right-side | # How to prove $x^3-y^3 = (x-y)(x^2+xy+y^2)$ without expand the right side?
I can prove that $x^3-y^3 = (x-y)(x^2+xy+y^2)$ by expanding the right side.
1. $x^3-y^3 = (x-y)x^2 + (x-y)(xy) + (x-y)y^2$
2. $\implies x^3 - x^2y + x^2y -xy^2 + xy^2 - y^3$
3. $\implies x^3 - y^3$
I was wondering what are other ways to prove that $x^3-y^3 = (x-y)(x^2+xy+y^2)$
Let $\omega$ be a complex cube root of unity. Then $x^{3} - y^{3} = (x-y)(x- \omega y)(x-\omega^{2}y)$ since both sides vanish when $x \in \{y,\omega y,\omega^{2}y \}$ and the degrees are right. Since $1 + \omega + \omega^{2} = 0$ we have $\omega + \omega^{2} = -1.$ We also have $\omega \omega^{2} = 1$, so we have $(x - \omega y)(x-\omega^{2}y) = x^{2}+xy + y^{2}.$
Divide $x^3-y^3$ by $x-y$ as polynomials
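As an editorial illustration (not part of this answer), the division terminates with zero remainder:

$$\begin{aligned} x^3 - y^3 &= x^2(x-y) + (x^2y - y^3)\\ x^2y - y^3 &= xy(x-y) + (xy^2 - y^3)\\ xy^2 - y^3 &= y^2(x-y) + 0, \end{aligned}$$

so the quotient is $x^2+xy+y^2$.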
First, notice that for all $u$, \begin{align*}(1-u)(1 + u + u^2) &= 1 + u + u^2 - (u + u^2 + u^3)\\ &= 1 + (u-u) + (u^2-u^2) - u^3\\ &= 1 - u^3.\end{align*} Now, take $u = \frac{y}{x}$ and multiply by $x^3$.
They agree at $\,x = 0,\pm y\,$ so their difference is a quadratic in $\,x\,$ with $3$ roots, hence zero.
• Here the coefficient ring is the integral domain $\,\Bbb Z[y]\,$ where $\,0,y,-y\,$ are distinct. Recall that a nonzero polynomial over a domain has no more roots than its degree. – Bill Dubuque Mar 2 '15 at 18:32
Since $x=y$ is clearly a root, one may divide $x^3-y^3$ by $x-y$ directly.
Well there are two ways that come to mind.
It is clear that when $x=y$ we have $x^3-y^3=0$. Then use long division to divide $x^3-y^3$ by $x-y$ and the result will be the equation on the right.
Another way would be to write:
$$\left(\frac{x}{y}\right)^3 - 1$$
Now we wish to find the zeros of this polynomial. These correspond to $\frac{x}{y} = 1$, $\frac{x}{y} = e^{i\frac{2\pi}{3}}$ and $\frac{x}{y} = e^{i\frac{4\pi}{3}}$.
Then we can factor the polynomial as:
$$\left(\frac{x}{y}\right)^3 - 1 = \left( \frac{x}{y} - 1 \right) \left(\frac{x}{y} - e^{i\frac{2\pi}{3}}\right) \left( \frac{x}{y} - e^{i\frac{4\pi}{3}} \right)$$
If we multiply the last two factors together we find:
$$\left(\frac{x}{y}\right)^3 - 1 = \left( \frac{x}{y} - 1 \right) \left(\frac{x^2}{y^2} - \frac{x}{y} \left(e^{i \frac{2\pi}{3}} + e^{i \frac{4 \pi}{3}}\right) + 1 \right)$$ $$=\left( \frac{x}{y} - 1 \right) \left(\frac{x^2}{y^2} - \frac{x}{y} \left( 2 \cos(2\pi/3) \right) + 1 \right) = \left( \frac{x}{y} - 1 \right) \left(\frac{x^2}{y^2} + \frac{x}{y} + 1 \right).$$
Thus $$\left(\frac{x}{y}\right)^3 - 1 = \left( \frac{x}{y} - 1 \right) \left(\frac{x^2}{y^2} + \frac{x}{y} + 1 \right).$$ Multiplying by $y^3$ on both sides gives the result.
Another way is as follows: $$(x - y)^3 = x^3 - 3x^2y + 3xy^2 - y^3 = (x^3 - y^3) -3xy(x - y) \quad \Rightarrow$$ $$x^3 - y^3 = (x - y)^3 + 3xy(x - y) = (x - y)[(x - y)^2 + 3xy] \quad \Rightarrow$$ $$x^3 - y^3 = (x - y)(x^2 + xy + y^2)$$
$x^3-y^3=x^2(x-y)+yx^2-y^3=x^2(x-y)+yx(x-y)+xy^2-y^3=...$ (Simply insert $x^2y-yx^2$, then insert $y^2x-xy^2$,...)
You can rearrange it into two possible ways.
Show, by left side, that $$\frac{x^3-y^3}{x-y} = x^2+xy+y^2,$$ or $$\frac{x^3-y^3}{x^2+xy+y^2} = x-y.$$
You could do the Euclidean division of $X^3 - Y^3$ by $X-Y$ in the ring $A[X]$, where $A = \mathbf{Z}[Y]$, as $X-Y$ has unit leading coefficient.
You can use "Long Division of Polynomials"
$\frac {1-(\frac y x)^3}{1-(\frac y x)}=1+\frac y x+(\frac y x)^2$
Multiply both sides by $x^3$
$\frac {x^3-y^3}{1-(\frac y x)}=x(x^2+xy+y^2)$
Then you have
$x^3-y^3=(x-y)(x^2+xy+y^2)$
• What's the point of using $\frac yx$ in the first place? Why can't you just perform your long division on $1-x^3$? – BigbearZzz Aug 8 '16 at 8:21
https://nrhstat.org/publication/2002-geometric-drift/ | # Establishing geometric drift via the Laplace transform of symmetric measures
Publication
Statistics & Probability Letters, 60(3), 289-295
http://jakobschwichtenberg.com/larger-symmetries/ | # Larger Symmetries
“Further progress lies in the direction of making our equations invariant under wider and still wider transformations.”
These prophetic lines were written in 1930 by P. A. M. Dirac in his famous book "The Principles of Quantum Mechanics". In the following decades, tremendous progress was made exactly as he predicted.
Weak interactions were described perfectly using $SU(2)$ symmetry, strong interactions using $SU(3)$ symmetry and it is well known that electrodynamics can be derived from $U(1)$ symmetry. Other aspects of elementary particles, like their spin, can be understood using the symmetry of special relativity.
A symmetry is a transformation that leaves our equations invariant, i.e. that does not change the equations. A set of symmetry transformations is called a group and, for example, the set of transformations that leaves the equations of special relativity invariant is called the Poincare group.
By making our equations invariant under the quite large set of transformations:
$$\text{Poincare Group} \times U(1) \times SU(2) \times SU(3) ,$$
we are able to describe all known interactions of elementary particles, except for gravity. This symmetry is the core of the standard model of modern physics, which is approximately 40 years old. Since then it has been confirmed many times, for example, through the discovery of the Higgs boson. Just as Dirac predicted, we gained incredible insights into the inner workings of nature, by making the symmetry of our equations larger and larger.
Unfortunately, since the completion of the standard model $\sim 40$ years ago, there has been no further progress in this direction. No further symmetry of nature was revealed by experiments. (At least that's the standard view, but I don't think it's true. More on that later). In 2017 our equations are still simply invariant under $\text{Poincare Group} \times U(1) \times SU(2) \times SU(3) ,$ but no larger symmetry.
I’m a big believer in Dirac’s mantra. Despite the lack of new experimental insights, I do think there are many great ideas for how symmetries could guide us towards the correct theory beyond the standard model.
Before, we can discuss some of these ideas, there is one additional thing that should be noted. Although the four groups $\text{Poincare Group} \times U(1) \times SU(2) \times SU(3)$ are written equally next to each other, they aren’t treated equally in the standard model. The Poincare group is a spacetime symmetry, whereas all other groups describe inner symmetries of quantum fields. Therefore, we must divide the quest for a larger symmetry into two parts. On the one hand, we can enlarge the spacetime symmetry and on the other hand, we can enlarge the inner symmetry. In addition to these two approaches, we can also try to treat the symmetries equally and enlarge them at the same time.
## Enlargement of the Spacetime Symmetry
The symmetry group of special relativity is the set of transformations that describe transformations between inertial frames of reference and leave the speed of light invariant. As already noted, this set of transformations is called the Poincare group.
Before Einstein discovered special relativity, people used a spacetime symmetry that is called the Galilean group. The Galilean group also describes transformations between inertial frames of reference, but does not care about the speed of light.
The effects of special relativity are only important for objects that are moving fast. For everything that moves slowly compared to the speed of light, the Galilean group is sufficient. The Galilean group is an approximate symmetry when objects move slowly. Mathematically this means that the Galilean group is the contraction of the Poincare group in the limit where the speed of light goes to infinity. For an infinite speed of light, nothing can move with a speed close to the speed of light and thus the Galilean group would be the correct symmetry group.
It is natural to wonder if the Poincare group is an approximate symmetry, too.
One hint in this direction is that the Poincare group is an "ugly" group. The Poincare group is the semi-direct product of the group of translations and the Lorentz group, which describes rotations and boosts. Therefore the Poincare group is not a simple group. The simple groups are the "atoms of groups" from which all other groups can be constructed. However, the spacetime symmetry group that we use in the standard model is not one of these truly fundamental groups.
Already in 1967 Monique Levy‐Nahas studied the question of which groups could yield the Poincare group as a limit, analogous to how the Poincare group yields the Galilean group as a limit.
The answer she found was stunningly simple: “the only groups which can be contracted in the Poincaré group are $SO(4, 1)$ and $SO(3, 2)$”. These groups are called the de Sitter and the anti de Sitter group.
They consist of transformations between inertial frames of reference that leave the speed of light invariant and additionally leave an energy scale invariant. The de Sitter group leaves a positive energy scale invariant, whereas the anti de Sitter group leaves a negative energy scale invariant. Both contract to the Poincare group in the limit where the invariant energy scale goes to zero.
Levy‐Nahas’ discovery is great news. There isn’t some large pool of symmetries that we can choose from, but only two. In addition, the groups she found are simple groups and therefore much “prettier” than the Poincare group.
Following Dirac’s mantra and remembering the fact that the deformation: Galilean Group $\to$ Poincare Group led to incredible progress, we should take the idea of replacing the Poincare group with the de Sitter or anti de Sitter group seriously. This point was already emphasized in 1972 by Freeman J. Dyson in his famous talk “Missed opportunities”.
Nevertheless, I didn’t hear about the de Sitter groups in any particle physics lecture or read about them in any particle physics book. Maybe because the de Sitter symmetry is not a symmetry of nature? Because there are no experimental evidence?
To answer these questions, we must first answer the question: what is the energy scale that is left invariant?
The answer is: it’s the cosmological constant!
The present experimental status is that the cosmological constant is tiny, but nonzero and positive: $\Lambda \approx 10^{-12}$ eV! This smallness explains why the Poincare group works so well. Nevertheless, the correct spacetime symmetry group is the de Sitter group. I’m a bit confused why this isn’t mentioned in the textbooks or lectures. If you have an idea, please let me know!
Can we enlarge the spacetime symmetry even further?
Yes, we can. But as we know from Levy‐Nahas’ paper, only a different kind of symmetry enlargement is possible. There isn’t any other symmetry that could be more exact and yield the de Sitter group in some limit. Instead, we can think about the question, if there could be a larger broken spacetime symmetry.
Nowadays the idea of a broken symmetry is well known and already an important part of the standard model. In the standard model the Higgs field triggers the breaking $SU(2) \times U(1) \to U(1)$.
Something similar could’ve been happend to a spacetime symmetry in the early universe. A good candidate for such a broken spacetime symmetry is the conformal group $SO(4,2)$.
The temperature in the early universe was incredibly high and “[i]t is an old idea in particle physics that, in some sense, at sufficiently high energies the masses of the elementary particles should become unimportant” (Sidney Coleman in Aspects of Symmetry). In the massless limit our equations become invariant under the conformal group (source) . The de Sitter group and the Poincare group are subgroups of the conformal group. Therefore it is possible that the conformal group was broken to the de Sitter group in the early universe.
This idea is interesting for a different reason, too. The only parameter in the standard model that breaks conformal symmetry at tree level is the Higgs mass parameter. This parameter is the most problematic aspect of the standard model and possibly the Higgs mass fine-tuning problem can be solved with the help of the conformal group. (See: On naturalness in the standard model by William A. Bardeen.)
## Enlargement of the Inner Symmetry
The inner symmetry group of the standard model $U(1) \times SU(2) \times SU(3)$ is quite ugly, too. Like the Poincare group it is not a simple group.
There is an old idea by Howard Georgi and Sheldon Glashow that instead of $U(1) \times SU(2) \times SU(3)$ we use a larger, simple group $G_{GUT}$ . These kind of theories are called Grand Unified Theories (GUTs).
While GUTs have problems, they are certainly beautiful. One obvious "problem" is that in present day colliders, we do not observe effects of a $G_{GUT}$ structure and thus we assume the unified gauge symmetry is broken at some high energy scale:
$$G_{GUT} \stackrel{M_{GUT}}{\rightarrow} \ldots \stackrel{M_I}{\rightarrow} G_{SM} \stackrel{M_Z}{\rightarrow} SU(3)_C \times U(1)_Q \, ,$$

where the dots indicate possible intermediate scales between $G_{GUT}$ and $G_{SM}$. In the following, we discuss some of the "mysteries" of the standard model that can be resolved by a GUT.
### Quantization of Electric Charge
In the standard model the electric charges of the various particles must be put in by hand and there is no reason why there should be any relation between the electron and proton charge. However from experiments it is known that $Q_{\text{proton}}+Q_{\text{electron}}= \mathcal{O}(10^{-20})$. In GUTs one multiplet of $G_{GUT}$ contains quarks and leptons. This way, GUTs provide an elegant explanation for the experimental fact of charge quantization. For example in $SU(5)$ GUTs the conjugate $5$-dimensional representation contains the down quark and the lepton doublet
$$\bar{5} = \begin{pmatrix} \nu_L \\ e_L \\ (d_R^c)_{\text{red}} \\ (d_R^c)_{\text{blue}} \\ (d_R^c)_{\text{green}} \end{pmatrix} \, .$$
The standard model generators must correspond to generators of $G_{GUT}$. Thus the electric charge generator must correspond to one Cartan generator of $G_{GUT}$ (the eigenvalues of the Cartan generators of a given gauge group correspond to the quantum numbers commonly used in particle physics). In $SU(5)$ the Cartan generators can be written as diagonal $5\times 5$ matrices with trace zero ($SU(5)$ is the set of $5 \times 5$ matrices $U$ with determinant $1$ that fulfil $U^\dagger U = 1$; for the generators $T_a$ this means $\text{det}(e^{i \alpha_a T_a})=e^{i \alpha_a \text{Tr}(T_a)} \stackrel{!}{=}1$, therefore $\text{Tr}(T_a) \stackrel{!}{=} 0$). Therefore we have
\begin{align}
\text{Tr}(Q)&= \text{Tr} \begin{pmatrix} Q(\nu_L) & 0 & 0 & 0 &0 \\ 0 & Q(e_L) & 0 & 0 &0 \\ 0 & 0 & Q((d_R^c)_{\text{red}}) & 0 &0\\ 0 & 0 & 0 & Q((d_R^c)_{\text{blue}})&0\\ 0 & 0 & 0 & 0 &Q((d_R^c)_{\text{green}}) \end{pmatrix} \stackrel{!}{=} 0 \notag \\
&\rightarrow Q(\nu_L) + Q(e_L) + 3Q(d_R^c) \stackrel{!}{=} 0 \notag \\
&\rightarrow Q(d_R^c) \stackrel{!}{=} -\frac{1}{3} Q(e_L) \, .
\end{align}
Analogously, we can derive a relation between $e_R^c$, $u_L$ and $u_R^c$. Thus $Q_{\text{proton}}+Q_{\text{electron}}= \mathcal{O}(10^{-20})$ is no longer a miracle, but rather a direct consequence of the embedding of $G_{SM}$ in an enlarged gauge symmetry.
### Coupling Strengths
The standard model contains three gauge couplings, which are very different in strength. Again, this is not a real problem of the standard model, because we can simply put these values in by hand. However, GUTs provide a beautiful explanation for this difference in strength. A simple group $G_{GUT}$ implies that we have only one gauge coupling as long as $G_{GUT}$ is unbroken. The gauge symmetry $G_{GUT}$ is broken at some high energy scale in the early universe. Afterwards, we have three distinct gauge couplings with approximately equal strength.

The gauge couplings are not constant, but depend on the energy scale. This is described by the renormalization group equations (RGEs). The RGEs for a gauge coupling depend on the number of particles that carry the corresponding charge. Gauge bosons have the effect that a given gauge coupling becomes stronger at lower energies, and fermions have the opposite effect. The adjoint of $SU(3)$ is $8$-dimensional and therefore we have $8$ corresponding gauge bosons. In contrast, the adjoint of $SU(2)$ is $3$-dimensional and thus we have $3$ gauge bosons. For $U(1)$ there is only one gauge boson.

As a result, for $SU(3)$ the gauge boson effect dominates and the corresponding gauge coupling becomes stronger at lower energies. For $SU(2)$ the fermion and boson effects almost cancel each other and thus the corresponding gauge coupling is approximately constant. For $U(1)$ the fermions dominate and the $U(1)$ gauge coupling becomes much weaker at low energies. This is shown schematically in the figure below. This way GUTs provide an explanation why strong interactions are strong and weak interactions are weak.
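To make "approximately equal at high energies" concrete, here is a small editorial sketch (not from the original post) of the one-loop running $\alpha_i^{-1}(\mu) = \alpha_i^{-1}(M_Z) - \frac{b_i}{2\pi}\ln(\mu/M_Z)$. The coefficients $b_i = (41/10, -19/6, -7)$ and the rough input values at $M_Z$ are the standard one-loop standard model numbers, quoted from memory; treat them as assumptions:

```python
import numpy as np

# One-loop SM beta coefficients for (GUT-normalized U(1)_Y, SU(2)_L, SU(3)_C)
b = np.array([41 / 10, -19 / 6, -7.0])

# Approximate inverse couplings at the Z mass (rough input values)
alpha_inv_mz = np.array([59.0, 29.6, 8.5])
MZ = 91.19  # GeV

def alpha_inv(mu_gev):
    """One-loop running: alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2 pi) ln(mu/MZ)."""
    return alpha_inv_mz - b / (2 * np.pi) * np.log(mu_gev / MZ)

for mu in [1e2, 1e5, 1e10, 1e15, 1e16]:
    a1, a2, a3 = alpha_inv(mu)
    # The three values approach each other near 10^15 GeV,
    # though in the non-supersymmetric standard model they do not meet exactly.
    print(f"mu = {mu:8.0e} GeV:  1/alpha = {a1:6.1f} {a2:6.1f} {a3:6.1f}")
```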
Another interesting aspect of the renormalization group evolution of the gauge couplings is that there is a close connection between the GUT scale and the proton lifetime. Thus proton decay experiments directly yield a bound on the GUT scale $M_{GUT} \gtrsim 10^{15}$ GeV. On the other hand, we can use the measured values of the gauge couplings and the standard model particle content to compute how the three standard model gauge couplings change with energy. Thus we can approximate the GUT scale as the energy scale at which the couplings become approximately equal. The exact scale depends on the details of the GUT model, but the general result is a very high scale, which is surprisingly close to the value from proton decay experiments. This is not a foregone conclusion. With a different particle content or different measured values of the gauge couplings this calculation could yield a much lower scale, and this would be a strong argument against GUTs. In addition, the gauge couplings could run in the "wrong direction" as shown in the figure. The facts that the gauge couplings run sufficiently slowly and become approximately equal at high energies are therefore hints in favor of the GUT idea.
### Further Postdictions
In addition to the “classical” GUT postdictions described in the last two sections, I want to mention two additional postdictions:
• A quite generic implication of grand unification is small neutrino masses through the type-1 seesaw mechanism (a short sketch of the seesaw diagonalization follows this list). Models based on the popular $SO(10)$ or $E_6$ groups automatically contain a right-handed neutrino $\nu_R$. As a result of the breaking chain this standard model singlet $\nu_R$ gets a superheavy mass $M$. After the last breaking step $G_{SM}\rightarrow SU(3)_C \times U(1)_Y$ the right-handed and left-handed neutrinos mix. This yields a suppressed mass of the left-handed neutrino of order $\frac{m^2}{M}$, where $m$ denotes a typical standard model mass.
• GUTs provide a natural framework to explain the observed matter-antimatter asymmetry in the universe. As already noted above a general implication of GUTs is that protons are no longer stable. Formulated differently, GUTs allow baryon number-violating interactions. This is one of three central ingredients, known as Sakharov condition, needed to produce more baryons than antibaryons in the early universe. Thus, as D. V. Nanopoulos put it, “if the proton was stable it would not exist”.
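As an editorial aside to the first bullet above, the $\frac{m^2}{M}$ suppression comes from diagonalizing the standard $2\times 2$ seesaw mass matrix in the $(\nu_L, \nu_R)$ basis:

$$\mathcal{M} = \begin{pmatrix} 0 & m \\ m & M \end{pmatrix}, \qquad \lambda_\pm = \frac{M \pm \sqrt{M^2+4m^2}}{2} \quad\Rightarrow\quad \lambda_+ \approx M, \qquad \lambda_- \approx -\frac{m^2}{M} \quad \text{for } m \ll M.$$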
## What’s next?
While the unification of spacetime symmetries was already confirmed by the measurement of the cosmological constant, so far there is no experimental evidence for the correctness of the GUT idea. Thus the unification of internal symmetries still has to wait. However, proton decay could be detected anytime soon. When Hyper-Kamiokande starts operating, the limits on proton lifetime will become one order of magnitude better, and this means there is a realistic chance that we will finally find evidence for Grand Unification.
This however, would by no means be the end of the road.
Arguably, it would be awesome if we could unify spacetime and internal symmetries into one large symmetry. However, there is one no-go theorem that blocked progress in this direction: the famous Coleman-Mandula theorem.
Nevertheless, no-go theorems in physics never really mean that something is impossible, only that it isn't as trivial as one might think. There are several loopholes in the theorem that potentially allow the unification of spacetime and internal symmetries.
At least to me it seems as if Dirac was right and larger symmetries are the way to go. However, so far, we don't know which way we should follow.
P.S. I wrote a textbook, which is, in some sense, the book I wished had existed when I started my journey in physics. It's called "Physics from Symmetry" and you can buy it, for example, at Amazon. And I'm now on Twitter too if you'd like to get updates about what I'm recently up to.
https://mom6.readthedocs.io/en/dev-gfdl/api/generated/pages/Vertical_Diffusion.html | # Vertical Diffusion¶
Vertical diffusion of tracers
The MOM6 tracer registry takes care of the tracer advection as well as horizontal diffusion, but it is up to each individual tracer package to define its own vertical diffusion.
http://mathoverflow.net/questions/14805/when-do-equivariant-sheaves-on-a-formal-neighborhood-extend | # When do equivariant sheaves on a formal neighborhood extend?
Suppose that $X$ is a variety (in char 0) with an action of an affine algebraic group $G$. Let $Y \subset X$ be a subvariety fixed by $G$--the action map agrees with projection upon restriction to $Y$. Let $\widehat{Y}$ be the formal completion of $X$ along $Y$. Furthermore let $\widehat{G}$ be the the completion of $G$ at the identity, viewed as a formal group. There is a restriction functor $j^*$ from the $Qcoh^G(X)$, the category of $G$-equivariant quasicoherent sheaves on $X$, to $Qcoh^{\widehat{G}}(\widehat{Y})$, the category of $\widehat{G}$-equivariant quasicoherent sheaves on $\widehat{Y}$.
1) Is this situation considered in the literature? Where?
2) What tools are available to control this functor? How might one describe the essential image?
Although curious about this general package, I specifically care about the case $G =\mathbb{G}_m$.
http://mathhelpforum.com/algebra/161138-word-problem-help.html | 1. ## Word problem help,
The slurry that leaves the reactor on a plant manufacturing phosphoric acid contains 20% w/w solids together with a solution of phosphoric and sulphuric acids in water. The solution consists of 38% w/w H3PO4 and 2.5% w/w H2SO4, the remainder being water.
The slurry is filtered and all the solids are removed in the filter cake. The filter cake that is produced contains 50% w/w solids and 50% w/w liquid. The filter cake is then re-slurried with water and subjected to a second filtration. The second filter cake contains 55% w/w solids and 45% w/w liquid. In this case also, all the solids are present in the filter cake.
If 1% of the phosphoric acid present in the initial slurry is retained in the liquid trapped in the second filter cake then, using a basis of 1000 kg initial slurry, calculate:
(a) The composition of the first filter cake,
(b) The composition of the second fillter cake,
(c) The amount of water used to reslurry the first filter cake.
I have drawn a diagram, and I still not sure how to calculate the composition of the first filtration,
From the question it says that the slurry contains 20% solids, so of the 1000kg, 200kg is solid and the rest must be liquid, of which 38% is H3PO4 and 2.5% is H2SO4. Which means 59.9% must be water.
2. of the 1000kg, 200kg is solid and the rest must be liquid, of which 38% is H3PO4 and 2.5% is H2SO4. Which means 59.9% must be water.
It's 59.5% that is the percentage of water.
First, you can calculate the mass of H3PO4 in the original 1000kg slurry and thus find 1% of the phosphoric acid present in the initial slurry. This will be used later.
It is not clear from the problem text in what proportions the components of the solution are retained by the filtration. I mean, there is 2.5% w/w H2SO4 in the solution, but filtration theoretically may retain all H2SO4 and some water, but none of H3PO4. I don't think it is possible to calculate anything without this information.
Most likely, however, filtration preserves the ratio of the solution components. Since the solids account for 50% of the first cake, the cake's mass is 400kg, and the solution's mass is also 200kg. Using the proportions above, you can find the mass of each solution component.
After that let x kg of water has been added. Then the concentrations of the three components of the liquid solution in the new slurry can be expressed through x.
In the second cake, the same 200kg of solids constitute 55%, so you know the mass of the cake and the mass of its liquid part. Therefore, using the concentration of H3PO4 you found, you can calculate the mass of H3PO4 as an expression of x. Finally, it is equated to the 1% of the H3PO4in the initial slurry that you found in the beginning to form an equation for x.
3. Originally Posted by emakarov
It's 59.5% that is the percentage of water.
First, you can calculate the mass of H3PO4 in the original 1000kg slurry and thus find 1% of the phosphoric acid present in the initial slurry. This will be used later.
It is not clear from the problem text in what proportions the components of the solution are retained by the filtration. I mean, there is 2.5% w/w H2SO4 in the solution, but filtration theoretically may retain all H2SO4 and some water, but none of H3PO4. I don't think it is possible to calculate anything without this information.
Most likely, however, filtration preserves the ratio of the solution components. Since the solids account for 50% of the first cake, the cake's mass is 400kg, and the solution's mass is also 200kg. Using the proportions above, you can find the mass of each solution component.
After that let x kg of water has been added. Then the concentrations of the three components of the liquid solution in the new slurry can be expressed through x.
In the second cake, the same 200kg of solids constitute 55%, so you know the mass of the cake and the mass of its liquid part. Therefore, using the concentration of H3PO4 you found, you can calculate the mass of H3PO4 as an expression of x. Finally, it is equated to the 1% of the H3PO4in the initial slurry that you found in the beginning to form an equation for x.
Thank you,
I'm also quite confused by the wording of the problem, because when it says all of the solids are removed, does that mean the filtration contains 800kg of the liquid, which is then separated into 50% solid and 50% liquid? Which would mean there is still the original 200kg of solids that is not in the filter.
before filtration:
mass of solid = 0.20 x 1000 = 200kg
mass of liquid = 0.80x1000= 800kg
of that 800kg liquid:
304kg H3PO4
20kg-H2S04
479.2kg-H20
right, now how do I work out the proportions that is in the first filtration?
The correct answers for part a is: 200kg solids, 76kg H3PO4, 5kg H2S04, and 119kg of H20
Is it half of 800kg is liquid and half solid?
800 x 0.5 = 400kg, then work out the percentages of acid in that amount of liquid, which does not give the correct answer.
How did you work out that the cake's mass is 400kg and that the solution's mass is 200kg?
4. I also quite confused with the wording of the problem, because when it says all of the solids is removed, does that mean, the filtration contains 800kg of the liquid, which is than separated into 50% solid 50% liquid?
The first cake contains all the solids and some of the liquid. Namely, 200kg liquid of the original 800kg (see below).
of that 800kg liquid:
304kg H3PO4
20kg-H2S04
479.2kg-H20
Again, there is 59.5% of water, which is 476kg.
Is it half of 800kg is liquid and half solid?
No, 800kg is completely liquid.
How did you work out that the cake's mass is 400kg and that the solution's mass is 200kg?
Since all of solids (200kg) are in the first cake and they constitute 50% of the cake, the cake is 400kg and its liquid part is 400 - 200 = 200kg.
right, now how do I work out the proportions that is in the first filtration?
Of the original 800kg liquid, 200kg is still in the first cake. This is 1/4 of the liquid in the original slurry. If the concentrations of H2S04, H3PO4 and H2O in the liquid part of the slurry and in the liquid part of the cake are the same, you can multiply 304, 20 and 476 by 1/4 to find the component masses of the liquid part of the cake. This is the same as the answer you have.
5. Since all of solids (200kg) are in the first cake and they constitute 50% of the cake, the cake is 400kg and its liquid part is 400 - 200 = 200kg.
Sorry, still a little confused, Is that 400kg of liquid and solid? So the mass of the first filter cake is 400kg? How do you know this?
to work out the amount of solid present in the first filter cake; you could just set up this equation.
200kg = 0.5S
solving for s = 400kg
won't you do the same for the liquid?
800kg = 0.5L
L = 1600kg
I know this does not work, because then the total is more than 1000kg, but as the question says 50% liquid and 50% solid, why are you not just taking 50% of the original masses?
Thank you
6. Actually, I understand now why it is 200kg liquid and 200kg solid. For the second part, we know that 55% is the original 200kg of solid, but from that how would I know the mass of the liquid? Is it 0.45 x 800 = 360?
7. Let C2 be the mass of the second cake. We know that 55% of it is 200kg (solids). Therefore, 0.55 * C2 = 200, from where C2 = 200 / 0.55 = 364 kg (approximately). So, the second cake has 364 - 200 = 164 kg of liquid.
Now suppose that x kg of water was added during re-slurrying. Then the mass of liquid in the second slurry is (200 + x) kg. Since there was 76kg of H3PO4 in the second cake, the concentration of H3PO4 in the liquid part of the second slurry is 76 / (200 + x). H3PO4 has the same concentration in the liquid part of the second cake, so the second cake has 164 * 76 / (200 + x) kg of H3PO4. Finally, we are told that this is 1% of the amount of H3PO4 in the original slurry, or 304kg. From here, x can be found.
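Editorial addition for readers following along (not a post from the thread): the whole computation can be checked with a few lines, using only numbers stated in the posts above:

```python
# Basis: 1000 kg initial slurry
solids = 0.20 * 1000              # 200 kg solids, all retained in each cake
liquid0 = 1000 - solids           # 800 kg liquid
h3po4_0 = 0.38 * liquid0          # 304 kg H3PO4 in the initial slurry

# (a) First cake: 50% solids -> 400 kg cake, 200 kg liquid (1/4 of the original liquid)
cake1_liquid = solids / 0.50 - solids             # 200 kg
h3po4_1 = 0.38 * cake1_liquid                     # 76 kg
h2so4_1 = 0.025 * cake1_liquid                    # 5 kg
water_1 = 0.595 * cake1_liquid                    # 119 kg

# (b) Second cake: 55% solids
cake2_liquid = solids / 0.55 - solids             # ~163.6 kg

# (c) Re-slurry water x: H3PO4 in the cake-2 liquid equals 1% of the original 304 kg
# cake2_liquid * h3po4_1 / (cake1_liquid + x) = 0.01 * h3po4_0
x = cake2_liquid * h3po4_1 / (0.01 * h3po4_0) - cake1_liquid

print(f"(a) cake 1: {h3po4_1:.0f} kg H3PO4, {h2so4_1:.0f} kg H2SO4, {water_1:.0f} kg H2O")
print(f"(b) cake 2 liquid: {cake2_liquid:.1f} kg")
print(f"(c) re-slurry water: x = {x:.0f} kg")    # ~3891 kg
```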
8. Originally Posted by emakarov
Let C2 be the mass of the second cake. We know that 55% of it is 200kg (solids). Therefore, 0.55 * C2 = 200, from where C2 = 200 / 0.55 = 364 kg (approximately). So, the second cake has 364 - 200 = 164 kg of liquid.
Now suppose that x kg of water was added during re-slurrying. Then the mass of liquid in the second slurry is (200 + x) kg. Since there was 76kg of H3PO4 in the second cake, the concentration of H3PO4 in the liquid part of the second slurry is 76 / (200 + x). H3PO4 has the same concentration in the liquid part of the second cake, so the second cake has 164 * 76 / (200 + x) kg of H3PO4. Finally, we are told that this is 1% of the amount of H3PO4 in the original slurry, or 304kg. From here, x can be found.
Hello,
Can you please post a full solution for this problem, as I am not able to follow through. I don't quite understand, why do you not multiply by the amount that the question has given?
if you could please start from the beginning.
Thank you.
http://firewords.net/definitions/moisture_of_extinction.htm | # Moisture of extinction [Extinction moisture content]
n. The dead fuel moisture content at which Rothermel's (1972) surface fire spread model predicts spread rate will fall to zero.
• Discussion
Moisture of extinction is a parameter of fire behavior fuel models. The use of the term in the Rothermel (1972) surface fire model is misleading because it is not designed to predict extinction; rather, it is included as a way to predict the effect of moisture content on fire behavior (through the moisture damping coefficient). The greater the difference between actual moisture content and the moisture of extinction, the smaller the moisture damping coefficient and therefore the greater the spread rate.
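To illustrate how this parameter enters the spread-rate prediction, here is an editorial sketch (not part of the glossary entry). Rothermel's moisture damping coefficient is commonly written as a cubic in the ratio of fuel moisture to moisture of extinction; the coefficients below are the widely quoted ones, but treat them as an assumption to be checked against Rothermel (1972):

```python
def moisture_damping(moisture, moisture_of_extinction):
    """Rothermel-style moisture damping coefficient eta_M.

    Both arguments are fractional moisture contents (dry-mass basis).
    The ratio is capped at 1, where eta_M (and hence spread rate) hits zero.
    """
    r = min(moisture / moisture_of_extinction, 1.0)
    return max(1 - 2.59 * r + 5.11 * r**2 - 3.52 * r**3, 0.0)

# The closer moisture gets to the moisture of extinction, the smaller eta_M:
for m in (0.05, 0.15, 0.25, 0.30):
    print(m, round(moisture_damping(m, 0.30), 3))
```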
Fuel moisture content is most commonly expressed as the mass of water as a fraction or percentage of oven-dry mass. The term moisture fraction is sometimes used when moisture content is expressed as a fraction. Multiply moisture fraction by 100 to get moisture content as a percentage.
• Rothermel, R.C. 1972. A mathematical model for predicting fire spread in wildland fuels. Res. Pap. INT-115. Ogden, UT: U.S. Department of Agriculture, Forest Service, Intermountain Forest and Range Experiment Station. 42 p.
• Notes
• Joe Scott, Research Forester
Systems for Environmental Management
• August 2007
https://www.physicsforums.com/threads/standard-error.333378/ | Homework Help: Standard Error
1. Aug 29, 2009
needhelp83
Weight of turkeys is normally distributed with a standard deviation of 9 pounds. Farmer Jones samples 25 turkeys from his farm in order to estimate their mean weight. What is the probability that his sampling error will not exceed 2 pounds?
I hate this because I can't figure out what formula to use? I found one for the standard error of the mean where
$$SE = \frac{s}{\sqrt{n}}$$
This gives me the answer of 1.8. Now if I figure out the probability of exceeding 2 lbs how do I determine this with my given answer?
2. Aug 29, 2009
You know the weights are normally distributed, correct? You know the value of $$\sigma$$ but not $$\mu$$, so using the normal distribution is out of the question.
Here's a hint masquerading as a question: what do you know about the distribution of the sample variance when you sample from a normal distribution?
3. Aug 29, 2009
needhelp83
I am not sure if I am answering exactly what you are looking for, but if you know the standard deviation you will know the variance, since it is the sd squared. What exactly this tells me about the distribution, I am unsure.
4. Aug 30, 2009
needhelp83
Ok...sorry, but I am still not understanding what exactly I am supposed to see with this problem. Any help?
5. Aug 31, 2009
needhelp83
Any help, anybody? More than happy to figure it out if just given a push in the right direction.
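Editorial footnote to close the thread (not a forum reply): if one reads the problem with $\sigma$ known, the sample mean is normal with standard error $1.8$, and the requested probability is $P(|\bar X - \mu| \le 2)$. A minimal check with SciPy:

```python
from scipy.stats import norm

se = 9 / 25 ** 0.5          # standard error of the mean = 1.8
z = 2 / se                  # 2-pound tolerance in standard-error units
p = norm.cdf(z) - norm.cdf(-z)
print(round(p, 4))          # ~0.7335
```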
http://mathoverflow.net/feeds/question/103152 | # Determinant of integer lattice basis of $L=\{(x_1,\ldots,x_n): a_1x_1+\cdots+a_nx_n=0\}$

Asked by Victor Wang on 2012-07-26.

**Question:** Suppose $\{v_1,\ldots,v_{n-1}\}$ is an integer basis for the lattice $$L=\{(x_1,\ldots,x_n)\in\mathbb{Z}^n: a_1x_1+\cdots+a_nx_n=0\},$$ where the $a_i$ are fixed nonzero integers. Is the volume $V(P)=\det(L)$ (see http://numbertheoryreadinggroup.wordpress.com/2008/04/24/geometry-of-numbers-lecture-2-determinant-of-the-lattice-and-the-fundamental-parallelepiped-lee/ for a proof that they are equal) of its fundamental parallelotope $P=\{t_1v_1+\cdots+t_{n-1}v_{n-1} \mid t_i\in[0,1)\}$ necessarily equal to $$\frac{\sqrt{a_1^2+\cdots+a_n^2}}{\gcd(a_1,\ldots,a_n)}?$$

I used the case $n=3$ along with Minkowski's theorem (in the geometry of numbers) to solve the following Miklos problem from 2000 (http://www.math.u-szeged.hu/~mmaroti/schweitzer/schweitzer-2000-eng.pdf):

Let $a<b<c$ be positive integers. Prove that there exist integers $x,y,z$, not all zero, such that $ax+by+cz=0$ and $\max(|x|,|y|,|z|)\le 1+\frac{2}{\sqrt3}\sqrt{c}$, and show that the constant $\frac{2}{\sqrt3}$ cannot be improved.

However, I was only able to find a brute force proof for this special case (see lemma 1 in my AoPS post at http://www.artofproblemsolving.com/Forum/viewtopic.php?p=2471417#p2471417), and I'm not sure if it's as easy for larger values of $n$.

But I'm pretty sure this should be true in general (I've tried several cases for $n=4$ and $n=5$), so I would appreciate it if someone could give a (clean?) proof, reference, or counterexample. Thanks!
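Editorial addendum (not from the MathOverflow page): the claimed identity is easy to test numerically. The sketch below applies extended-Euclid column operations to the identity matrix until the vector $a$ is carried to $(g,0,\ldots,0)$; the remaining columns then form an integer basis of $L$, and $\det(L)=\sqrt{\det(BB^T)}$ can be compared with $\sqrt{\sum a_i^2}/\gcd(a_1,\ldots,a_n)$:

```python
import math
import random
import numpy as np

def extgcd(a, b):
    """Return (g, s, t) with s*a + t*b = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = extgcd(b, a % b)
    return (g, t, s - (a // b) * t)

def kernel_lattice_basis(a):
    """Integer basis of L = {x in Z^n : a.x = 0} via unimodular column ops."""
    n = len(a)
    V = np.eye(n, dtype=np.int64)       # columns form a unimodular matrix
    g = a[0]
    for k in range(1, n):
        g2, s, t = extgcd(g, a[k])
        col0, colk = V[:, 0].copy(), V[:, k].copy()
        V[:, 0] = s * col0 + t * colk                      # a . col0 becomes the gcd so far
        V[:, k] = -(a[k] // g2) * col0 + (g // g2) * colk  # a . colk becomes 0
        g = g2
    return V[:, 1:].T.astype(float), abs(g)   # rows are a basis of L

random.seed(0)
for _ in range(5):
    n = random.randint(2, 6)
    a = [random.choice([-1, 1]) * random.randint(1, 30) for _ in range(n)]
    B, g = kernel_lattice_basis(a)
    det_L = math.sqrt(np.linalg.det(B @ B.T))
    formula = math.sqrt(sum(x * x for x in a)) / g
    print(a, round(det_L, 6), round(formula, 6))   # the two numbers agree
```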
https://nlpforhackers.io/tf-idf/ | # Weighting words using Tf-Idf
If I ask you "Do you remember the article about electrons in NY Times?" there's a better chance you will remember it than if I asked you "Do you remember the article about electrons in the Physics books?". Here's why: an article about electrons in NY Times is far less common than in a collection of physics books. You are less likely to stumble upon the "electron" concept in NY Times than in a physics book.
Let’s consider now the scenario of a single article. Suppose you read an article and you’re asked to rank the concepts found in the article by importance. The chances are you’ll basically order the concepts by frequency. The reason is simply that important stuff would be mentioned repeatedly because the narrative gravitates around them.
Combining the 2 insights, given a term, a document and a collection of documents we can loosely say that:
`importance ~ appearances(term, document) / count(documents containing term in collection)`
This technique is called Tf-Idf: Term Frequency – Inverse Document Frequency. Here's how the measure is defined:
• `tf = count(word, document) / len(document)` – term frequency
• `idf = log(len(collection) / count(documents_containing_term, collection))` – inverse document frequency
• `tf-idf = tf * idf` – term frequency – inverse document frequency
Let’s test this theory on some data. We’re going to use the Reuters dataset bundled inside NLTK.
Let’s build a tokenizer that ignores punctuation and stopwords:
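A minimal sketch of such a tokenizer, assuming NLTK's English stopword list and `word_tokenize` (the helper name `tokenize` is our assumption, not necessarily the article's):

```python
import string

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

stop_words = set(stopwords.words('english'))

def tokenize(text):
    # Lowercase, split into tokens, then drop stopwords and punctuation
    words = word_tokenize(text.lower())
    return [w for w in words if w not in stop_words and w not in string.punctuation]
```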
We now need to know all the words inside the collection
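A sketch, reusing `tokenize` from above (this assumes the Reuters corpus was fetched with `nltk.download('reuters')`):

```python
from nltk.corpus import reuters

documents = [tokenize(reuters.raw(file_id)) for file_id in reuters.fileids()]
vocabulary = set(word for document in documents for word in document)
```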
Let’s compute the Idf for every word in the vocabulary:
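A plain-Python sketch: count, for each word, the number of documents containing it, then take the logarithm of the ratio (the names `doc_frequency` and `idf` are ours):

```python
import math

# Document frequency: in how many documents does each word appear?
doc_frequency = {word: 0 for word in vocabulary}
for document in documents:
    for word in set(document):
        doc_frequency[word] += 1

idf = {word: math.log(len(documents) / doc_frequency[word]) for word in vocabulary}
```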
Let’s write, as an exercise, the numpy parallelized version of the Idf computation:
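A possible NumPy version, vectorizing the division and the logarithm over the whole vocabulary at once:

```python
import numpy as np

words = list(vocabulary)
frequencies = np.array([doc_frequency[word] for word in words], dtype=float)
idf_np = dict(zip(words, np.log(len(documents) / frequencies)))
```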
Since Idf doesn’t depend on the current document but only on the collection we can preprocess the results as we did above. Here’s the code for the final computation:
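A sketch of that final step: normalize the term counts by the document length and multiply by the precomputed `idf`:

```python
from collections import Counter

def tf_idf(document):
    # document is a list of tokens, as produced by tokenize()
    counts = Counter(document)
    return {word: (count / len(document)) * idf[word]
            for word, count in counts.items() if word in idf}
```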
Let’s run a few computations:
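For instance, scoring one Reuters document and printing its top words by Tf-Idf (an illustrative run; the actual words and scores depend on the corpus):

```python
scores = tf_idf(tokenize(reuters.raw(reuters.fileids()[0])))
for word, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(word, score)
```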
Notice how I sneakily computed the words in the order of the Tf-Idf score.
That’s how we compute the Tf-Idf ourselves. Let’s also use some libraries to make the job a bit easier. Note that the scores might be different but the order should be the same. The difference is probably due to different smoothing strategies.
Here’s the code for computing the Tf-Idf score using scikit-learn:
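A sketch along those lines; `TfidfVectorizer` handles tokenization, counting and Idf in one object, and passing our tokenizer keeps the preprocessing comparable:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(tokenizer=tokenize)
tfidf_matrix = vectorizer.fit_transform(reuters.raw(file_id) for file_id in reuters.fileids())
print(tfidf_matrix.shape)  # (number of documents, vocabulary size)
```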
Here’s the code for computing the Tf-Idf score using gensim:
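A gensim sketch: build a dictionary, convert the documents to bags of words, then fit a `TfidfModel`:

```python
from gensim import corpora, models

dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(document) for document in documents]
tfidf_model = models.TfidfModel(corpus)
print(tfidf_model[corpus[0]])  # (token_id, score) pairs for the first document
```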
## Conclusions
• TfIdf is a really popular technique for weighting the importance of the terms inside a collection of documents
• It is used in Information Retrieval to rank results
• It is used for extracting keywords on web pages
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8104553818702698, "perplexity": 1278.4299160736246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889567.48/warc/CC-MAIN-20180120102905-20180120122905-00390.warc.gz"}
https://math.stackexchange.com/questions/2222589/is-fracdydx-fracdycdx | # Is $\frac{dy}{dx} = \frac{d(y+c)}{dx}$?
Is the following true?
$$\frac{dy}{dx} = \frac{d(y+c)}{dx}$$
where $c$ is an arbitrary real constant.
I believe it is true, and my reasoning goes like this:
$dy$ is an infinitesimal, so the addition of another constant would still be an infinitesimal. I do not know if my reasoning is correct.
Do note that I'm not familiar with epsilon-delta arguments and university calculus. I would appreciate it if someone could explain the above simply.
EDIT: I couldn't see why $d(y+c)= dy+dc$. What is $d$? Is it a number, or a function?
• Your conclusion is correct. As you are reasoning by "non-standard" analysis, it is tough to call it "correct", but your intuition is leading you to the right places. Apr 7, 2017 at 16:24
• @DougM Inasmuch as the OP is unfamiliar with "university calculus," it is a safe bet that the OP is equally unfamiliar with non-standard analysis. ;-)) Apr 7, 2017 at 16:26
• I have some knowledge about the difference between standard and non-standard calculus analysis. It would be great if someone could explain this through simple, non-standard perspective. Apr 7, 2017 at 16:29
• How about this... differentiation is a "linear operation" that is $\frac {d}{dx} (f(x) + g(x)) = \frac {df}{dx} + \frac {dg}{dx}$ and since $c$ is constant $\frac {d}{dx} (y + c) = \frac {dy}{dx}$ Apr 7, 2017 at 16:31
• I'm also just learning about the non-standard approach, but I think the story is the same in the standard and non-standard approaches. Since $c$ is constant, an infinitesimal change in the "argument" of $c$ changes nothing. If $c$ were a function, then we would argue that $d(y+c) = dy + dc$, since an infinitesimal change in the argument produces an infinitesimal change in both terms. Apr 7, 2017 at 16:34
$$\mathrm{d}(y+c) = \mathrm{d}y + \mathrm{d} c$$
However, if $c$ is a constant, then
$$\mathrm{d} c = 0$$
so we get
$$\mathrm{d}(y+c) = \mathrm{d}y$$
Consequently, if one side of the following makes sense, then both sides do and they are equal:
$$\frac{\mathrm{d}(y+c)}{\mathrm{d}x} = \frac{\mathrm{d}y}{\mathrm{d}x}$$
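(A quick symbolic check with SymPy, added for illustration, confirms the two derivatives agree:)

```python
import sympy as sp

x, c = sp.symbols('x c')
y = sp.Function('y')
print(sp.simplify(sp.diff(y(x) + c, x) - sp.diff(y(x), x)))  # 0
```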
• I couldn't see why $d(y + c) = dy + dc$. What is $d$? Is it a number, or a function? Apr 8, 2017 at 8:20
Let $g(x) = y(x) + c\;$ $\forall x \in D_y$.
\begin{align} & \frac{d(y+c)}{dx} = \frac{dg}{dx} = g'(x)= \lim_{h\to0} \frac{g(x+h)-g(x)}{h} =\lim_{h\to0} \frac{y(x+h)+c-y(x)-c}{h} \\[10pt] = {} & \lim_{h\to0} \frac{y(x+h)-y(x)}{h} = y'(x) = \frac{dy}{dx} \end{align} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8885623216629028, "perplexity": 351.3041232150712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662578939.73/warc/CC-MAIN-20220525023952-20220525053952-00127.warc.gz"} |
http://math.stackexchange.com/questions/285310/name-of-trigonometric-identity | Name of trigonometric identity
Is there a name of this trigonometric identity: $$\cos(a+b) \cos(a+c+b) \equiv \frac{1}{2} \left[\cos(c) + \cos(2a+2b+c) \right]$$
Basically we are "changing" a product of cosines into a sum of cosines.
$\cos(a+b) = \cos(a)\cos(b)-\sin(a)\sin(b)$
$\cos(a-b) = \cos(a)\cos(b)+\sin(a)\sin(b)$
Adding these two identities, the sine terms cancel, which gives the product-to-sum (prosthaphaeresis) formula:
$\cos(a)\cos(b) = \frac{1}{2}(\cos(a+b)+\cos(a-b))$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9504920244216919, "perplexity": 489.89745567861854}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398461390.54/warc/CC-MAIN-20151124205421-00245-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://crackexams99.com/free-fall-equation-with-definition/ | # Free Fall Equation With Definition
Free Fall Equation With Definition: In Newtonian physics, free fall is any motion of a body where gravity is the only force acting upon it; that is, the body moves under the acceleration due to gravity alone. In the context of general relativity, where gravitation is reduced to a space-time curvature, a body in free fall has no force acting on it.
## Free Fall Equation With Explanation
1. h= 1/2gt²
2. v²= 2gh
3. v=gt
where, h = height traveled
v = final velocity
g = acceleration due to gravity
t = time taken
The free fall equation can be derived from the equations of motion.
• s= ut+1/2at²
• v² =u²+ 2as
• v=u+at
For a body dropped from rest: initial velocity u = 0, acceleration a = g, and distance traveled s = h.
Substituting these values into the equations of motion, we get the free fall equations (a worked numeric example follows the list):
• h= 1/2gt²
• v² =2gh
• v=gt
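A small worked example of these equations (added for illustration, with g taken as 9.8 m/s² and an assumed fall time of 3 s):

```python
g = 9.8  # acceleration due to gravity, m/s^2

def free_fall(t):
    h = 0.5 * g * t**2  # distance fallen:  h = 1/2 g t^2
    v = g * t           # final velocity:   v = g t
    return h, v

h, v = free_fall(3.0)
print(h, v)  # 44.1 m fallen at 29.4 m/s; check: v**2 == 2*g*h == 864.36
```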
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9111394882202148, "perplexity": 4732.62587466341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00323.warc.gz"}
http://mathhelpforum.com/algebra/162031-sum-squares-all-numbers-between-10-70-divisible-4-a-print.html | # sum of squares of all numbers between 10 and 70 and which are divisible by 4.
• Nov 3rd 2010, 07:26 PM
rickrishav
Sum of squares of all numbers between 10 and 70 which are divisible by 4.
Find the sum of squares of all numbers between 10 and 70 which are divisible by 4.
• Nov 4th 2010, 08:19 AM
Wilmer
All even squares are divisible by 4; so, reading the problem that way, we want the "sum of even squares":
n = 70/2 = 35; f = (10-2)/2 = 4
2n(n+1)(2n+1)/3 - 2f(f+1)(2f+1)/3 = 59640 - 120 = 59520
"See" that?
• Nov 4th 2010, 10:13 AM
Soroban
Hello, rickrishav!
I hope I interpreted the problem correctly . . .
Quote:
Find the sum of squares of all integers between 10 and 70 which are divisible by 4.
The integers between 10 and 70 which are multiple of 4 are:
. . $\{12,16,20,24,\,\hdots\,68\} \;=\;\{3\!\cdot\!4,\:4\!\cdot\!4,\:5\!\cdot\!4,\:6\!\cdot\!4,\:\hdots\:17\!\cdot\!4\}$
The sum of their squares is:
. . $S \;=\;3^2\!\cdot\!4^2 + 4^2\!\cdot\!4^2 + 5^2\!\cdot\!4^2 + 6^2\!\cdot4^2 + \hdots + 17^2\!\cdot\!4^2$
. . . . $=\;4^2\left(3^2 + 4^2 + 5^2 + \hdots + 17^2\right)$
. . . . $\displaystyle = \;4^2\left(\sum^{17}_{k=1} k^2 - \sum^2_{k=1}k^2\right)$
. . . . $=\; 4^2\left(\dfrac{17\!\cdot\!18\!\cdot\!35}{6} - \dfrac{2\!\cdot\!3\!\cdot\!5}{6}\right)$ .**
. . . . $=\;16\cdot 1780$
. . . . $=\;28,\!480$
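A brute-force check (added): summing the squares of the multiples of 4 between 10 and 70 gives the same total.

```python
print(sum(k * k for k in range(12, 69, 4)))  # 28480
```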
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
**
The sum of the first $\,n$ squares is:
. . $\displaystyle \sum^n_{k=1} k^2 \;=\; 1^2 + 2^2 + 3^2 + \hdots + n^2 \;=\;\frac{n(n+1)(2n+1)}{6}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9375421404838562, "perplexity": 226.64115731954257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423486.26/warc/CC-MAIN-20170720201925-20170720221925-00660.warc.gz"}
https://beesbuzz.biz/blog/5206-Thirds | ## Thirds
Kitt wrote an entry about splitting a pastry in thirds, which has a few different solutions. I hashed out what I thought was a correct solution in the comments but I’d actually made a pretty big mistake that came from me not actually drawing a diagram. So here’s a version with diagrams.
## The problem
Let’s say you have a pastry that’s X units wide and Y units tall, and you want to split it into even thirds.
## Trivial: trisection
One simple solution is to cut the pastry one third of the way across, then to bisect the larger segment:
This gives you three sections, $$A$$, $$B$$, and $$C$$, which each have an area of $$\frac{XY}{3}$$.
But this has a problem: inequal distribution of crust! $$A$$’s crust length is $$\frac{2X}{3} + Y$$ whereas $$B$$ and $$C$$ have a crust length of $$\frac{2X}{3} + \frac{Y}{2}$$, meaning $$A$$ will always get $$\frac{Y}{2}$$ more crust than $$B$$ and $$C$$. This is clearly unfair!
How else can we solve this problem?
## Triangular cut
Another approach is to make $$A$$ a triangular cut out of the side, and $$B$$ and $$C$$ into trapezoids:
This gives us areas of $$A=\frac{2X}{3}\frac{Y}{2}$$ (i.e. $$A=\frac{XY}{3}$$) and $$B=C=\frac{X+\frac{X}{3}}{2}\frac{Y}{2}$$ which, as you might expect, also simplifies to $$\frac{XY}{3}$$. The crust allotments are now $$A=Y$$ and $$B=C=X+\frac{Y}{2}$$. So we can work out the appropriate pastry size to get equal area and crust:
$Y=X+\frac{Y}{2} \\ \frac{Y}{2}=X$
Or, to check our math a bit differently, we want the $$Y$$ edge to be 1/3 of the total edge length $$2(X+Y)$$, so:
$Y=\frac{2X+2Y}{3} \\ 3Y=2X+2Y \\ Y=2X$
which says exactly the same thing.
So if the pastry is twice as tall as it is wide, we’re done! So in this particular pastry we’d actually want the cuts to look more like this:
In the above situation the areas and the crusts are equal for all three pieces.
However, this isn’t good enough! We want to generalize a solution to all possible pastry aspect ratios!
## Generalized solution
So, first, let’s orient the pastry so that $$X > Y$$, i.e. it’s a “landscape” aspect, as above.
Now, we know that if $$Y=\frac{X}{2}$$, both constraints are satisfied. But the crust allocation for $$A$$ is always $$X$$ and $$B=C=Y+\frac{X}{2}$$. Which means that if the aspect of the pastry tends more square, then $$A$$ gets less crust proportionally, and if the rectangular gets more elongated, $$A$$ gets proportionally more crust.
So we need to work out two solutions, one for pastries which are less than a 2:1 aspect, and ones which are more than 2:1.
### When $$X < 2Y$$
We add two offsets, $$a$$ and $$b$$, which affect the pastry division thusly:
In this case, section $$A$$’s crust amount is $$X+2a$$, and so we can solve for $$a$$:
$X+2a=\frac{2X + 2Y}{3} \\ 3X+6a=2X+2Y \\ 6a=2Y-X \\ a=\frac{2Y - X}{6}$
So, from this we can see that $$a$$ is only a sensible (i.e. non-negative) value if $$X \leq 2Y$$, so this setup continues to make sense with our preconditions.
Anyway, now that we’ve solved for $$a$$, we can solve for $$b$$:
$\frac{X}{2} \times \frac{b + Y - a}{2} = \frac{XY}{3} \\ \frac{X(b + Y - a)}{4} = \frac{XY}{3} \\ 3X(b + Y - a) = 4XY \\ b + Y - a = \frac{4Y}{3} \\ b = \frac{Y}{3} + a$
So let’s take a look at Kitt’s pastry conundrum and work out how it could have been split into perfect thirds. We can do an approximate measurement on the image; let’s just use pixels as the size unit here and assume that perspective more or less averages out:
Just to keep life simple let’s say that it’s $$X=270$$ and $$Y=240$$, which happens to be an aspect ratio of 9:8 although that doesn’t really matter right now. If we plug those values into the equation, we get $$a=35$$ and $$b=115$$, so this trisection of the pastry should produce equal area and edge:
We should, of course, double-check our math here.
The total edge of this pastry is $$2(270+240) = 1020$$, so each section should have an edge of 340. The edge of section $$A$$ is $$270 + 2 \times 35 = 340$$, so that checks out. (For that matter, it would be even easier to see that $$B$$ and $$C$$ both have $$135 + 205 = 340$$ crust units.)
The total area of this pastry is $$270 \times 240 = 64800$$, so each section should have an area of 21600. The area of section $$B$$ is $$135 \times \frac{240 - 35 + 115}{2} = 21600$$. So at least for this pastry, the math works out!
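Here’s a small script (a sketch of the formulas above, not part of the original entry) that packages the $$X \leq 2Y$$ case and re-runs these checks:

```python
def trisect(X, Y):
    """Offsets (a, b) for an X-by-Y pastry with Y <= X <= 2*Y."""
    assert Y <= X <= 2 * Y, "orient the pastry so that Y <= X <= 2Y"
    a = (2 * Y - X) / 6
    b = Y / 3 + a
    return a, b

a, b = trisect(270, 240)
print(a, b)                           # 35.0 115.0
print(270 + 2 * a)                    # A's crust: 340.0, one third of 1020
print((270 / 2) * (b + 240 - a) / 2)  # B's area: 21600.0, one third of 64800
```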
The theoretical limit for this setup (since we want to restrict ourselves to orienting the pastry such that $$X \geq Y$$) is a width and a height of 1. In that case, we’d have $$a = \frac{1}{6}$$ and $$b = \frac{1}{3} + \frac{1}{6} = \frac{1}{2}$$. Or in other words, it would look like this:
Theoretically the pastry could continue to get narrower as long as $$0 \leq a \leq Y$$ and $$0 \leq b \leq Y$$. We know $$a \geq 0$$ as long as $$X \leq 2Y$$, and we know $$b \geq 0$$ as long as
$\frac{Y}{3} + \frac{2Y - X}{6} \geq 0 \\ \frac{4Y - X}{6} \geq 0 \\$
or, in other words, $$4Y \geq X$$ – which is always true given the above. But what about the other conditions? First, we check that $$a \leq Y$$:
$\frac{2Y - X}{6} \leq Y \\ 2Y - X \leq 6Y \\ {-X} \leq 4Y \\ 0 \leq 4Y + X$
Well, since $$X$$ and $$Y$$ are always positive, it’s safe to say that this condition is true. What about $$b \leq Y$$?
$\frac{Y}{3} + \frac{2Y - X}{6} \leq Y \\ 2Y + 2Y - X \leq 6Y \\ 0 \leq 2Y + X$
Which is, again, always true. So, this setup can always be used for any pastry; if it’s more than twice as wide than it is tall, simply turn it sideways. But this might make for some very slim cuts, so we still want a general solution that works for an extra-wide pastry. Also, it would get weird if we ever have a situation where $$Y - a < b$$. Does that ever happen?
$Y - a < b \\ Y - a < \frac{Y}{3} + a \\ \frac{2Y}{3} < 2a \\ \frac{2Y}{3} < \frac{2Y - X}{3} \\ 2Y < 2Y - X \\ 0 < -X \\ X < 0$
Okay, so we never actually get into that situation, at least.
### When $$X > 2Y$$
So how can we solve for pastries which are wider than a 2:1 aspect? Let’s try this setup:
In this case, section $$A$$’s crust amount is $$X-2a$$, so we again solve for $$a$$:
$X - 2a = \frac{2X + 2Y}{3} \\ 3X - 6a = 2X + 2Y \\ -6a = 2Y - X \\ a = \frac{X - 2Y}{6}$
And now $$a$$ is only sensible (i.e. non-negative) if $$X \geq 2Y$$, so this setup once again makes sense with our preconditions.
So again, now we solve for $$b$$; this time we can use the area of section $$A$$ as our guide, as it’s much more straightforward to compute:
$\frac{(X - 2a)(Y - b)}{2} = \frac{XY}{3} \\ 3(X - 2a)(Y - b) = 2XY \\ 3(XY - 2aY - bX + 2ab) = 2XY \\ 3XY - 6aY - 3bX + 6ab = 2XY \\ 6ab - 3Xb = 6aY - XY \\ b = \frac{6aY - XY}{6a - 3X}\\ b = \frac{(X - 2Y)Y - XY}{X - 2Y - 3X} \\ b = \frac{2Y^2}{2X + 2Y} \\ b = \frac{Y^2}{X + Y}$
So, hey, that simplifies pretty nicely. So, let’s say we have a pastry that’s, say, 3 units wide and 1 unit tall. In this case, $$a=\frac{1}{6}$$ and $$b = \frac{1}{4}$$. Let’s verify that this makes sense!
This theoretical pastry would have a total area of 3, and a total edge of 8. So we’d need section $$A$$ to have 1 unit of area and $$\frac{8}{3}$$ units of edge. How does this work out?
Edge length is $$X-2a = 3 - \frac{2}{6} = \frac{8}{3}$$. Phew!
And the area is $$\frac{(X - 2a)(Y - b)}{2} = \frac{\frac{8}{3} \times \frac{3}{4}}{2} = 1$$. Alright!
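The same sanity check, scripted for the wide case (again just a sketch of the formulas above):

```python
def trisect_wide(X, Y):
    """Offsets (a, b) for a pastry wider than 2:1, i.e. X >= 2*Y."""
    assert X >= 2 * Y
    return (X - 2 * Y) / 6, Y**2 / (X + Y)

a, b = trisect_wide(3, 1)
print(3 - 2 * a)                  # A's crust: 2.666..., one third of the edge 8
print((3 - 2 * a) * (1 - b) / 2)  # A's area: 1.0, one third of 3
```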
Are there any limits to how wide the pastry can be? As always, we want $$a \geq 0$$ and $$b \geq 0$$. We already know this to be true for $$a$$ as long as $$X \geq 2Y$$ (which is the situation we have this setup for anyway), and it’s pretty obvious that $$b \geq 0$$ as long as $$X$$ and $$Y$$ are non-negative real numbers. This will not hold for imaginary pastry heights, though.
We also need $$2a \leq X$$ and $$b \leq Y$$. First thing first:
$2a \leq X \\ \frac{X - 2Y}{3} \leq X \\ X - 2Y \leq 3X \\ {-2Y} \leq 2X \\ 0 \leq X + Y$
Which is always true. And now the other thing:
$\frac{Y^2}{X + Y} \leq Y \\ Y^2 \leq XY + Y^2 \\ 0 \leq XY$
which is also always true.
So, now we know how to always trisect a pastry and keep the same amount of edge and area between all three parts!
## Left as an exercise for the reader
• Figuring out the volume of filling
• Accounting for the width of the crimping | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916869759559631, "perplexity": 419.8973474620777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303356.40/warc/CC-MAIN-20220121101528-20220121131528-00255.warc.gz"} |
https://www.mi.fu-berlin.de/math/groups/discgeom/projects/ERC/Publications_old2/Preprints/Equivariant_topology_of_configuration_spaces/index.html |
# Equivariant topology of configuration spaces
## Pavle V. M. Blagojević, Wolfgang Lück, Günter M. Ziegler – 2012
Focus Area 3: Topological connectivity and diameter of Discrete Structures. We study the Fadell-Husseini index of the configuration space F(R^d,n) with respect to different subgroups of the symmetric group S_n. For p prime and d>0, we completely determine Index_{Z/p}(F(R^d,p);F_p) and partially describe Index_{(Z/p)^k}(F(R^d,p^k);F_p). In this process we obtain results of independent interest, including: (1) an extended equivariant Goresky-MacPherson formula, (2) a complete description of the top homology of the partition lattice Pi_p as an F_p[Z_p]-module, and (3) a generalized Dold theorem for elementary abelian groups. The results on the Fadell-Husseini index yield a new proof of the Nandakumar & Ramana Rao conjecture for a prime. For n=p^k a prime power, we compute the Lusternik-Schnirelmann category cat(F(R^d,n)/S_n)=(d-1)(n-1), and for spheres obtain the bounds (d-1)(n-1)\le cat(F(S^d,n)/S_n)\le (d-1)(n-1)+1. Moreover, we extend coincidence results related to the Borsuk-Ulam theorem, as obtained by Cohen & Connett, Cohen & Lusk, and Karasev & Volovikov.
Title
Equivariant topology of configuration spaces
Authors
Pavle V. M. Blagojević, Wolfgang Lück, Günter M. Ziegler
Date
2012-07
Type
Text
Size or length
40 pages | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.91259765625, "perplexity": 3949.686699965992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396495.25/warc/CC-MAIN-20200528030851-20200528060851-00507.warc.gz"} |
https://homework.cpm.org/category/CON_FOUND/textbook/gc/chapter/11/lesson/11.1.3/problem/11-36 | ### GC Chapter 11, Lesson 11.1.3, Problem 11-36
11-36.
Find the volume and surface area of a square-based right pyramid if the base edge has length $6$ units and the height of the pyramid is $4$ units. Assume the diagram at right is not to scale.
$\text{Volume }= \frac{1}{3} (\text{volume of the corresponding prism})$
$\frac{1}{3} (\text{number of cubes in the bottom layer})(\text{number of layers})$
$\frac{1}{3}(l)(w)(h)$
$\frac{1}{3}(6^2\cdot4) = 48\text{ cubic units}$
$\text{Surface Area} = 4(\text{area of triangle}) + (\text{area of base})$
You will need to find the slant height of the pyramid.
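As an added worked step with the given dimensions (half the base edge is $3$):

$\text{slant height} = \sqrt{3^2+4^2}=5$

$4\left(\frac{1}{2}\cdot6\cdot5\right)+6^2 = 60+36$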
$96$ square units | {"extraction_info": {"found_math": true, "script_math_tex": 7, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9208832383155823, "perplexity": 612.0464211303965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703511903.11/warc/CC-MAIN-20210117081748-20210117111748-00625.warc.gz"} |
https://physics.stackexchange.com/questions/625977/transformation-law-of-vector-fields-on-mathbbrn | # Transformation law of vector fields on $\mathbb{R}^n$
So suppose we have a function $$F$$ from $$\mathbb R^2$$ to $$\mathbb R^2$$ defined by $$F(x,y) = (g(x,y),h(x,y))$$ where $$g$$ and $$h$$ represent temperature and pressure respectively (the point is, they are both scalar fields). From the viewpoint of differential geometry the above function can be seen as a (coordinate representation of a) vector field (i.e. a map from the manifold to the tangent bundle); and here's where my confusion lies: Mathematically, since this is a section of the projection map it is forced to obey the vector transformation law, yet physically it's intuitively clear that this is not a vector field but rather just a bunch of scalars, so do we implement a vector transformation law or not (under a change of coordinates); does it even make sense to talk about? I'm having trouble putting the rigorous definitions of differential geometry into physical context.
More generally if we have an n-dimensional smooth manifold $$\cal M$$ (so think of a Riemannian 4-manifold for instance) and a function $$F$$ from $$\cal M$$ to $$\mathbb R^n$$, will the physical nature of this function (i.e. depending on what physical quantity it represents) govern its transformation behavior? If yes, where is this (if at all) taken into account in the mathematical framework of differential geometry?
• The tangent space of a point on manifold needs to be defined before you can talk about vectors. This can be done with directional derivatives. This is also why the dimension of the vector space is the dimension of the manifold. The pressure and temperature have no relationship to tangent space. You could define these on a 1D manifold, but the vector space of a 1D manifold must have dimension 1. Mar 31 at 2:37
• The transformation law of a vector field on a manifold is that of a contravariant vector (field). Mar 31 at 2:37
• @Jbag1212 what do you mean the tangent space needs to be defined? It is defined in the obvious way for $R^2$. And I am aware of all that, what i'm saying is this: I have a function as defined above and it can be easily shown to be a section of the projection map from $TR^2$ to $R^2$, so mathematically it satisfies the definition of a vector field (which is just a section of the projection map), so give me a reason why it shouldn't obey the vector transformation law. Mar 31 at 2:43
• Things obey the vector transformation law if they are vectors which are elements of the tangent space at each point. In $\mathbb{R}^2$ suppose we have a basis for the tangent space, and physically we call it "north-south" and "east-west." Physically speaking, the temperature and pressure do not depend the direction of "north-south" and "east-west." So this vector of temperature and pressure is not in the tangent space. Mar 31 at 2:52
• In physics we look for physical quantities which transform like tensors. If you were just given a function that you don't actually know what it physically represents, then of course you wouldn't be able to proceed. Just because something has multiple indices doesn't mean that it is a tensor. Spinors are one example, And in answer to your question, yes. Mar 31 at 3:07
TL;DR - I suspect your confusion lies in the Physics 101 example that e.g. the ordered pair ("temperature","pressure") does not define a vector because when we change our coordinates, temperature and pressure don't transform. However, if we are working in cartesian coordinates, the object (temperature)$$\hat x$$ + (pressure)$$\hat y$$ is a linear combination of our basis vectors, and therefore does transform appropriately. This is just as well-defined as the vector $$\mathbf V = 3\hat x + 4\hat y$$.
In other words, if you write down a physically meaningless (but perfectly well-defined) vector field in some coordinate system, then it will change in the usual way when you move to a different coordinate system.
So suppose we have a function $$F$$ from $$\mathbb R^2$$ to $$\mathbb R^2$$ defined by $$F(x,y)=(g(x,y),h(x,y))$$ where $$g$$ and $$h$$ represent temperature and pressure respectively (the point is, they are both scalar fields).
Okay, subtlety number one: Is the domain of $$F$$ the manifold $$\mathcal M =\mathbb R^2$$, or the image of $$\mathcal M$$ under a cartesian coordinate chart? $$x :\mathcal M \rightarrow \mathbb R^2$$ $$(a,b) \mapsto (a,b)$$
Mathematical Interlude
Despite being one of the simplest possible manifolds, $$\mathbb R^2$$ is actually terrible from a pedagogical point of view precisely because it's so easy to get confused on this issue. The manifold $$\mathcal M = \mathbb R^2$$ is abstract; points $$p\in \mathbb R^2$$ consist of ordered pairs of real numbers $$(a,b)$$, but those numbers are not coordinates for $$p$$. We can introduce coordinates by defining a coordinate chart on some open neighborhood of $$p$$. For example, we might coordinatize the upper half-plane via the polar coordinate chart: $$\pi : \mathbb R_+^2 \rightarrow \mathbb R \times(0,\pi)$$ $$(a,b) \mapsto \left(\sqrt{a^2+b^2},\sin^{-1}\left(\frac{b}{\sqrt{a^2+b^2}}\right)\right)$$ where the first coordinate is interpreted as the radial coordinate and the second as the angular coordinate. Any function which is defined at the manifold level - e.g. some $$f:\mathcal M \rightarrow \mathbb R$$ - has a corresponding expression in each coordinate chart. For example, let $$f:\mathcal M \rightarrow \mathbb R$$ be defined by $$(a,b)\mapsto a$$. If we descend into the polar coordinate chart, we could consider the function $$f_\pi: \mathbb R\times (0,\pi) \rightarrow \mathbb R$$ $$(r,\theta) \mapsto (f\circ \pi^{-1})(r,\theta) = f\big(r\cos(\theta),r\sin(\theta)\big) = r\cos(\theta)$$ $$f_\pi$$ is the expression of the (manifold-level) function $$f$$ in the $$\pi$$ coordinate chart. Changing to a different chart entails mapping points back to $$\mathcal M$$ via $$\pi^{-1}$$, then applying the new coordinate chart. For example, if we wanted to use the cartesian chart defined above, we would have $$f_x = f\circ x^{-1} = f\circ \pi^{-1} \circ \pi \circ x^{-1} = f_\pi \circ (\pi\circ x^{-1})$$ The map $$\pi \circ x^{-1}$$ is called the chart transition map between the cartesian chart $$x$$ and the polar chart $$\pi$$; it is easily seen to be $$\pi \circ x^{-1}: \mathbb R_+^2 \rightarrow \mathbb R\times(0,\pi)$$ $$(a,b) \mapsto \left(\sqrt{a^2+b^2},\sin^{-1}\left(\frac{b}{\sqrt{a^2+b^2}}\right)\right)$$ and so $$f_x : (a,b) \mapsto a$$ as anticipated.
End Interlude
The point of that somewhat lengthy example is that when you say $$F:\mathbb R^2\rightarrow \mathbb R^2$$, it's not clear whether you are defining an expression at the manifold level - in which case there are no coordinates being used at all, and no transformations to consider - or at the level of (presumably cartesian) coordinates, in which case your $$F$$ is really $$F_x \equiv F \circ x^{-1}$$, and a change of chart is effected by simply inserting a chart transition map, e.g. $$F_\pi \equiv F_x \circ (x\circ \pi^{-1})$$.
From the viewpoint of differential geometry the above function can be seen as a (coordinate representation of a) vector field (i.e. a map from the manifold to the tangent bundle)
Okay. Based on this, I will assume we are working in cartesian coordinates. You are defining a vector field $$\mathbf V$$ on $$\mathbb R^2$$ whose $$x$$-component is the temperature and whose $$y$$-component is the pressure. That defines a little directional derivative which sits at each point, with the vector field being
$$\mathbf V = g(x,y)\frac{\partial}{\partial x} + h(x,y) \frac{\partial}{\partial y}$$
Mathematically, since this is a section of the projection map it is forced to obey the vector transformation law, yet physically it's intuitively clear that this is not a vector field but rather just a bunch of scalars so do we implement a vector transformation law or not (under a change of coordinates); does it even make sense to talk about?
I don't know what you mean here. It is a perfectly well-defined vector field. It doesn't have any physical significance, as far as I can tell, but that doesn't mean it isn't a vector field.
A change of coordinates induces a change of basis, so you're asking whether the components of $$\mathbf V$$ change when we go from the cartesian basis $$\left\{\frac{\partial}{\partial x},\frac{\partial}{\partial y}\right\}$$ to e.g. the polar basis $$\left\{\frac{\partial}{\partial r}, \frac{\partial}{\partial \theta}\right\}$$, and the answer is obviously yes - the polar coordinate unit vectors generally point in different directions than the cartesian unit vectors, after all. If you replace $$\frac{\partial}{\partial x}\mapsto \frac{\partial r}{\partial x} \frac{\partial}{\partial r} + \frac{\partial \theta}{\partial x} \frac{\partial}{\partial \theta}$$ $$\frac{\partial}{\partial y}\mapsto \frac{\partial r}{\partial y} \frac{\partial}{\partial r} + \frac{\partial \theta}{\partial y} \frac{\partial}{\partial \theta}$$ in our original expression for $$\mathbf V$$, then when the dust settles you'll have something of the form
$$\mathbf V = V^r \frac{\partial}{\partial r} + V^\theta \frac{\partial}{\partial \theta}$$
where $$V^{r}$$ and $$V^\theta$$ are some (position-dependent) linear combinations of $$g$$ and $$h$$. That's all the vector transformation rule is - expressing the same vector field using different basis vectors requires different components, which should be fairly obvious.
Actually, this isn't quite right - your functions $$g$$ and $$h$$ are really $$g_x=g\circ x^{-1}$$ and $$h_x = h\circ x^{-1}$$, so under change of chart they would also be replaced by $$g_\pi = g_x \circ (x\circ \pi^{-1})$$ and $$h_\pi = h_x \circ (x\circ \pi^{-1})$$.
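A short SymPy sketch (an added illustration, not the answerer's code) makes this concrete: the polar components come from letting $$\mathbf V$$ act on $$r$$ and $$\theta$$ as a directional derivative.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r = sp.sqrt(x**2 + y**2)
theta = sp.atan2(y, x)
g = sp.Function('g')(x, y)  # 'temperature' component
h = sp.Function('h')(x, y)  # 'pressure' component

# V = g d/dx + h d/dy; its polar components are V(r) and V(theta)
V_r = sp.simplify(g * sp.diff(r, x) + h * sp.diff(r, y))
V_theta = sp.simplify(g * sp.diff(theta, x) + h * sp.diff(theta, y))
print(V_r)      # expected: (x*g + y*h)/sqrt(x**2 + y**2)
print(V_theta)  # expected: (x*h - y*g)/(x**2 + y**2)
```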
More generally if we have an $$n$$-dimensional smooth manifold $$\mathcal M$$ (so think of a Riemannian 4-manifold for instance) and a function $$F$$ from $$M$$ to $$\mathbb R^n$$, will the physical nature of this function (i.e. depending on what physical quantity it represents) govern its transformation behavior?
No. As long as you have a well-defined function at the manifold level, that immediately translates into an expression in whatever coordinate chart you wish to work in. There is nothing transformative about this idea - if you have a point $$p$$ which is being mapped to some other space and you label $$p$$ by some coordinates, then you get a function which eats those coordinates. If you change coordinates, you change the function.
In this case, that other space was the tangent bundle, and we performed a corresponding chart transformation on that (i.e. a change of basis) which was induced by the chart transformation on the manifold $$\mathcal M=\mathbb R^2$$, which added a layer of complexity.
• I posted 4 pages from Chapter 4 of "Flanders" on imgur.com/gallery/CZSR28w . Every point is attached to a local orthogonal frame $\mathbf{e}_i$ where for each $i=1,2,3$ is a smooth vector field. Flanders writes: "What we shall do is express everything in terms of the $\mathbf {e}_i$, ... apply$d$, and $d\mathbf{x}=\sigma_1 \mathbf{e}_1+\sigma_2 \mathbf{e}_2+\sigma_3 \mathbf{e}_3$" Mar 31 at 13:49
• How can we apply "$d$" to a vector field when it is defined in Chapter 3 for forms but not tangent vectors? What sort of creature $d\mathbf{x}$ is: a covariant or a contravariant vector? Mar 31 at 13:56
• @hyportnex Flanders does the following. The underlying set for Euclidean space is $\mathbb R^3$; consider a cartesian chart $x$ as per my answer, which eat a point $p$ and spit out its cartesian coordinates, $x(p)=(x^1(p),x^2(p),x^3(p))$. Each $x^i$ is a function from $\mathcal M\rightarrow \mathbb R$, so we can define cooresponding 1-forms $\mathrm dx^i$; we then throw these together into a one-form valued vector $\mathrm d\mathbf x= dx^i \frac{\partial}{\partial x^i}$. Mar 31 at 14:10
• @hyportnex More formally, $\mathrm d\mathbf x$ is a $(1,1)$-tensor whose cartesian coordinate form is $\mathrm d\mathbf x = \mathrm dx^i \otimes \frac{\partial}{\partial x^i}$, i.e. its components are $(\mathrm d\mathbf x)^i_{\ \ j}= \delta^i_j$. In a different coordinate system, its components will of course be different. Mar 31 at 14:14
• Thank you for clearing this up. Would you also say that what Flanders does is a bit of a stealthy underhanded way by which he introduces (mixed) tensors? Mar 31 at 14:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 83, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9462875127792358, "perplexity": 172.2546397459159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585671.36/warc/CC-MAIN-20211023095849-20211023125849-00060.warc.gz"} |
http://mathhelpforum.com/statistics/195334-help-how-much-allocate-varying-probabilities-get-specific-probability.html | # Help with how much to allocate to varying probabilities to get a specific probability?
1. ## Help with how much to allocate to varying probabilities to get specific probability?
Problem:
I must spend $1000. I have 3 strategies that have different Probabilities Of Success (POS):
Strategy A POS = 70%
Strategy B POS = 50%
Strategy C POS = 25%
How much do I spend on each strategy (I must spend a total of $1000) to have an overall POS of 65%?
4. ## Re: Help with how much to allocate to varying probabilities to get specific probabili
If $X$ denotes success, then the probability of success is $P(X)=0.65 = 0.70 \cdot P(A) + 0.50 \cdot P(B) + 0.25 \cdot P(C)$, where $P(A), P(B), P(C)$ are the fractions of the 1000 dollars allocated to strategies A, B and C.
If you let $b=0$ then you end up with $0.7a + 0.50 \cdot 0 + 0.25 \cdot (1000-a)=650$
Law of total probability - Wikipedia, the free encyclopedia
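Concretely (a worked solve of the equation above, added for illustration):

```python
# 0.7*a + 0.50*0 + 0.25*(1000 - a) = 650  =>  0.45*a = 400
a = 400 / 0.45
print(a, 1000 - a)  # ~888.89 dollars on strategy A, ~111.11 on strategy C
```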
5. ## Re: Help with how much to allocate to varying probabilities to get specific probabili
Originally Posted by dmbeas12
.65 = .70x + .5x + .25x
This is where you went wrong. Your solution assumes that you bet the same amount on each betting strategy. The percentages you've posted can be interpreted as conditional probabilities: given a betting strategy A, B or C, what is the probability X that it succeeds? This is where the law of total probability comes into the picture. If you have a sample space where the probabilities of the hypotheses add up to 1, then you can write the probability of an arbitrary event A as:
$P(A)=P(A|H_1)P(H_1) + P(A|H_2)P(H_2)+ ... + P(A|H_N)P(H_N)$, $P(H_1)+P(H_2)+...+P(H_N)=1$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9194179177284241, "perplexity": 559.2391662375059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448399455473.15/warc/CC-MAIN-20151124211055-00293-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/447715/history-question-on-continued-fractions | # History Question on Continued Fractions
I worked out the periodicity of some infinite continued fractions last night by hand. (Don't ask me why)
For example, $\sqrt{13}= [3,1,1,1,1,6,1,1,1,1,6,\ldots]$. Last night I worked out the first period of this continued fraction and the algebra was a little meh. I was wondering, what is the largest continued fraction period ever worked out by hand before?
For example: $\sqrt{D}$ may have the continued fraction expansion: $[\text{repeat}(a_1,a_2,a_3,\ldots, a_n)]$. Define the "first period worked out by hand" to be:
The discovery of the first $a_1,a_2,a_3,\ldots,a_n$ of the infinite continued fraction $\sqrt{D}$ using nothing but pencil, and paper.
Any stories for me?
• No story, but there are efficient ways to compute the continued fraction of square roots. – André Nicolas Jul 19 '13 at 23:08
• How is that related to math history? – lhf Jul 20 '13 at 1:35
Lagrange's method uses just integer arithmetic and is suitable for use by hand. See How to detect when continued fractions period terminates
If you need more detail let me know.
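A sketch of that integer-only recurrence in Python (a transcription of the standard algorithm, not code from this thread): write each complete quotient as $x_k = (\sqrt D + P_k)/Q_k$, take $a_k = \lfloor x_k \rfloor$, and update $P$, $Q$ until $Q$ returns to $1$.

```python
from math import isqrt

def sqrt_cf(D):
    """Continued fraction of sqrt(D): returns (a0, periodic part)."""
    a0 = isqrt(D)
    if a0 * a0 == D:
        return a0, []
    P, Q, a = 0, 1, a0
    period = []
    while True:
        P = a * Q - P
        Q = (D - P * P) // Q
        a = (a0 + P) // Q
        period.append(a)
        if Q == 1:  # the period always closes with a = 2*a0
            return a0, period

print(sqrt_cf(13))   # (3, [1, 1, 1, 1, 6])
print(sqrt_cf(991))  # period of length 60 ending in 62, matching the cycle below
```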
Note that I used precisely that in Minimum of $n$? $123456789x^2 - 987654321y^2 =n$ ($x$,$y$ and $n$ are positive integers) although it was by computer.
Note, by the way, that if you are primarily interested in the square root of a positive integer $D,$ then the triple indicating a first form in the cycle is given by finding $$a_0 = \lfloor \sqrt D \rfloor$$ and then forming the triple $$\langle 1, 2 a_0, a_0^2 - D \rangle$$
Here, the triple $\langle a, b, c \rangle$ refers to the quadratic form $$f(x,y) = a x^2 + b x y + c y^2.$$ The form is "reduced" if both $ac <0$ and $b > |a+c|.$
jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./Pell
Input n for Pell
991
0 form 1 62 -30 delta -2
1 form -30 58 5 delta 12
2 form 5 62 -6 delta -10
3 form -6 58 25 delta 2
4 form 25 42 -22 delta -2
5 form -22 46 21 delta 2
6 form 21 38 -30 delta -1
7 form -30 22 29 delta 1
8 form 29 36 -23 delta -2
9 form -23 56 9 delta 6
10 form 9 52 -35 delta -1
11 form -35 18 26 delta 1
12 form 26 34 -27 delta -1
13 form -27 20 33 delta 1
14 form 33 46 -14 delta -3
15 form -14 38 45 delta 1
16 form 45 52 -7 delta -8
17 form -7 60 13 delta 4
18 form 13 44 -39 delta -1
19 form -39 34 18 delta 2
20 form 18 38 -35 delta -1
21 form -35 32 21 delta 2
22 form 21 52 -15 delta -3
23 form -15 38 42 delta 1
24 form 42 46 -11 delta -4
25 form -11 42 50 delta 1
26 form 50 58 -3 delta -20
27 form -3 62 10 delta 6
28 form 10 58 -15 delta -4
29 form -15 62 2 delta 31
30 form 2 62 -15 delta -4
31 form -15 58 10 delta 6
32 form 10 62 -3 delta -20
33 form -3 58 50 delta 1
34 form 50 42 -11 delta -4
35 form -11 46 42 delta 1
36 form 42 38 -15 delta -3
37 form -15 52 21 delta 2
38 form 21 32 -35 delta -1
39 form -35 38 18 delta 2
40 form 18 34 -39 delta -1
41 form -39 44 13 delta 4
42 form 13 60 -7 delta -8
43 form -7 52 45 delta 1
44 form 45 38 -14 delta -3
45 form -14 46 33 delta 1
46 form 33 20 -27 delta -1
47 form -27 34 26 delta 1
48 form 26 18 -35 delta -1
49 form -35 52 9 delta 6
50 form 9 56 -23 delta -2
51 form -23 36 29 delta 1
52 form 29 22 -30 delta -1
53 form -30 38 21 delta 2
54 form 21 46 -22 delta -2
55 form -22 42 25 delta 2
56 form 25 58 -6 delta -10
57 form -6 62 5 delta 12
58 form 5 58 -30 delta -2
59 form -30 62 1 delta 62
60 form 1 62 -30 disc 3964
Automorph, written on right of Gram matrix:
5788591406539787767296194303 361672073709940783423276163010
12055735790331359447442538767 753244210407084073508733597857
Pell automorph
379516400906811930638014896080 11947234168218377212415555918097
12055735790331359447442538767 379516400906811930638014896080
Pell unit
379516400906811930638014896080^2 - 991 * 12055735790331359447442538767^2 = 1
=========================================
991 991
jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ date
Fri Jul 19 16:34:12 PDT 2013
jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus\$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8340834975242615, "perplexity": 482.77469648231624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677412.35/warc/CC-MAIN-20191018005539-20191018033039-00335.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php?title=2008_iTest_Problems/Problem_83&diff=prev&oldid=98939 | # Difference between revisions of "2008 iTest Problems/Problem 83"
## Problem
Find the greatest natural number $n$ such that $n\le 2008$ and $(1^2+2^2+3^2+\cdots+n^2)\left[(n+1)^2+(n+2)^2+\cdots+(2n)^2\right]$ is a perfect square.
## Solution
Notice that $1^2+2^2+\cdots+n^2 = \tfrac{n(n+1)(2n+1)}{6}$, so $(n+1)^2+(n+2)^2+\cdots+(2n)^2 = \tfrac{2n(2n+1)(4n+1)}{6} - \tfrac{n(n+1)(2n+1)}{6} = \tfrac{n(2n+1)(7n+1)}{6}$. Thus, $(1^2+\cdots+n^2)\left[(n+1)^2+\cdots+(2n)^2\right] = \tfrac{n^2(2n+1)^2(n+1)(7n+1)}{36}$. In order for the expression to be a perfect square, $(n+1)(7n+1)$ must be a perfect square.
By using the Euclidean Algorithm, $\gcd(n+1,7n+1) = \gcd(n+1,\,7(n+1)-(7n+1)) = \gcd(n+1,6)$. Thus, the GCD of $n+1$ and $7n+1$ must be a factor of 6. Now, split the factors as different casework. Note that the quadratic residues of 7 are 0, 1, 2, and 4.
• If $\gcd(n+1,7n+1)=6$, then $n\equiv 5\pmod 6$. Let $n=6a+5$, so $(n+1)(7n+1)=(6a+6)(42a+36)=36(a+1)(7a+6)$. Since 6 is divided out of $n+1$ and $7n+1$, $a+1$ and $7a+6$ are relatively prime, so $a+1$ and $7a+6$ must be perfect squares. However, since $7a+6\equiv 6\pmod 7$ and 6 is not a quadratic residue of 7, the GCD of $n+1$ and $7n+1$ can not be 6.
• If $\gcd(n+1,7n+1)=3$, then $n\equiv 2\pmod 6$. Let $n=6a+2$, so $(n+1)(7n+1)=(6a+3)(42a+15)=9(2a+1)(14a+5)$. Since 3 is divided out of $n+1$ and $7n+1$, $2a+1$ and $14a+5$ are relatively prime, so $2a+1$ and $14a+5$ must be perfect squares. However, since $14a+5\equiv 5\pmod 7$ and 5 is not a quadratic residue of 7, the GCD of $n+1$ and $7n+1$ can not be 3.
• If $\gcd(n+1,7n+1)=2$, then $n\equiv 1\pmod 2$. Let $n=2a+1$, so $(n+1)(7n+1)=(2a+2)(14a+8)=4(a+1)(7a+4)$. Since 2 is divided out of $n+1$ and $7n+1$, and since $n+1$ and $7n+1$ do not share a factor of 3 in this case, $a+1$ and $7a+4$ are relatively prime, so $a+1$ and $7a+4$ must be perfect squares. Here $7a+4\equiv 4\pmod 7$ is a quadratic residue of 7, so solutions are possible. After trying values of $a$ that are one less than a perfect square (with $n=2a+1\le 2008$), we find that the largest value that makes $7a+4$ a perfect square is $a=960$: then $a+1=961=31^2$ and $7a+4=6724=82^2$. That means $n=2\cdot 960+1=1921$.
• If $\gcd(n+1,7n+1)=1$, then $n\equiv 0,4\pmod 6$ (to avoid common factors that are factors of 6), so $n+1$ and $7n+1$ must each be perfect squares. After trying values of $n$ that are one less than a perfect square, we find that the largest value that makes $7n+1$ a perfect square is $n=120$ (we could also stop searching once $n$ gets below 1921, since the previous case already achieves 1921). A brute-force check of the whole casework appears after this list.
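A brute-force check of the casework (added; it uses the reduction above that the product is a perfect square exactly when $(n+1)(7n+1)$ is):

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

print(max(n for n in range(1, 2009) if is_square((n + 1) * (7 * n + 1))))  # 1921
```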
From the casework, the largest natural number $n\le 2008$ that makes $(1^2+2^2+\cdots+n^2)\left[(n+1)^2+\cdots+(2n)^2\right]$ a perfect square is $n=1921$. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9476747512817383, "perplexity": 291.9135519373579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088264.43/warc/CC-MAIN-20210415222106-20210416012106-00367.warc.gz"}
https://www.physicsforums.com/threads/b-mode-plots-spherical-harmonics-fundamental-modes.744349/ | # B-mode plots, spherical harmonics?, fundamental modes?
1. Mar 20, 2014
### Spinnor
If the B-mode sky plots could be Fourier transformed, what would a plot of the lowest order B-mode harmonic plotted on a sphere look like?
I guess we need two functions of spherical coordinates, one function for amplitude at points on a sphere and one function for the orientation at the same points on a sphere?
Is there a hypothetical "gravitational wave" that gives rise to this lowest order harmonic?
Thanks for any help!
Last edited: Mar 20, 2014
2. Mar 20, 2014 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9585728645324707, "perplexity": 2225.9882326926254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105108.31/warc/CC-MAIN-20170818194744-20170818214744-00263.warc.gz"} |
http://math.stackexchange.com/users/60079/easy?tab=activity | Easy
517 Actions
Jul 2: awarded Curious
May 24: revised "Does every function with $f_x,f_y>0,f_{xx},f_{yy}<0$ with particular condition have to satisfy $f_{xy}/f_{xx} = -x/y$?" (deleted 3 characters in body)
May 24: comment on "Does every function with $f_x,f_y>0,f_{xx},f_{yy}<0$ with particular condition have to satisfy $f_{xy}/f_{xx} = -x/y$?": When you say $\infty$, is it $+\infty$ or $-\infty$?
May 21: answered "Inverse of a sum of positive definite matrices"
May 20: answered "Proving $A+2B+3C+4D < 2.5$ with given conditions"
May 20: comment on "If $f'(x) = 0$ for all $x \in \mathbb{Q}$, is $f$ constant?": You may want to consider the fundamental theorem of Lebesgue integral calculus, which requires $f'(x)=0$ almost everywhere to get to your result. However, $\mathbb{Q}$ is far away from almost everywhere.
Feb 12: awarded Nice Answer
Jan 29: awarded Yearling
Sep 12: accepted "Maximal abelian subgroups in a $p$-group are always normal?"
Sep 12: comment on "Maximal abelian subgroups in a $p$-group are always normal?": thanks, I didn't realise it was so simple.
Sep 11: comment on "Maximal abelian subgroups in a $p$-group are always normal?": @DonAntonio, yeah, I am assuming the $p$-group is finite. But I am talking about maximal among abelian subgroups, not just maximal among any subgroups.
Sep 11: asked "Maximal abelian subgroups in a $p$-group are always normal?"
Aug 5: comment on "$\frac {dy}{dx} \sin y = (1-x\cos y)\cos y$": Hint: $\sin y\,dy=-d\cos y$.
Jul 31: asked "How to choose the adjacency set"
Jul 22: comment on "Are these 2 graphs isomorphic?": @MarkMcClure, do you know whether this kind of animation can be produced with TeX or not?
Jul 22: answered "Group Theory Normal Subgroups"
Jul 21: comment on "Solving $x^2-7[x]+5=0$ to find values of $x$": Hint: find the intersection of $y=x^2+5$ and $y=7[x]$.
Jul 15: comment on "Splitting an electricity bill": I suggest assuming every period of equal length consumes the same amount of electricity. Many electric appliances consume the same amount of electricity regardless of the number of tenants: fridge, TV, heater, etc. More people just means more usage of the light bulbs in their own rooms, maybe?
Jul 9: comment on "What is the structure of the Coxeter groups of type $\text{D}_n$": @JyrkiLahtonen, sorry, I still don't quite get it. For example, where is $\sigma_2\circ\sigma_n$ mapped to? And why is $|\ker f|=2^{n-1}$ and not $2^{n\choose 2}$?
Jul 9: comment on "What is the structure of the Coxeter groups of type $\text{D}_n$": @TobiasKildetoft, by the base group I meant $\mathbb{Z}_2^{n-1}$.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872861266136169, "perplexity": 997.6510584451056}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997859240.8/warc/CC-MAIN-20140722025739-00232-ip-10-33-131-23.ec2.internal.warc.gz"} |
http://www.zora.uzh.ch/58318/ | # Measurement of photon production in the very forward direction in deep-inelastic scattering at HERA
H1 Collaboration; Aaron, F D; Alexa, C; Andreev, V; Müller, K; Robmann, P; Straumann, U; Truöl, P (2011). Measurement of photon production in the very forward direction in deep-inelastic scattering at HERA. European Physical Journal C - Particles and Fields, 71(10):1771.
## Abstract
The production of photons at very small angles with respect to the proton beam direction is studied in deep-inelastic positron–proton scattering at HERA. The data are taken with the H1 detector in the years 2006 and 2007 and correspond to an integrated luminosity of 126 pb⁻¹. The analysis covers the range of negative four momentum transfer squared at the positron vertex 6 < Q² < 100 GeV² and inelasticity 0.05 < y < 0.6. Cross sections are measured for the most energetic photon with pseudorapidity η > 7.9 as a function of its transverse momentum p_T^lead and longitudinal momentum fraction of the incoming proton x_L^lead. In addition, the cross sections are studied as a function of the sum of the longitudinal momentum fractions x_L^sum of all photons in the pseudorapidity range η > 7.9. The cross sections are normalised to the inclusive deep-inelastic scattering cross section and compared to the predictions of models of deep-inelastic scattering and models of the hadronic interactions of high energy cosmic rays.
## Citations
5 citations in Web of Science®
2 citations in Scopus®
Item Type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Physics Institute
Dewey Decimal Classification: 530 Physics
Language: English
Date: 2011
Deposited On: 12 Feb 2012 14:05
Last Modified: 05 Apr 2016 15:34
Publisher: Springer
ISSN: 1434-6044 (P), 1434-6052 (E)
Additional Information: The original publication is available at www.springerlink.com
Publisher DOI: 10.1140/epjc/s10052-011-1771-6
Related URL: http://arxiv.org/abs/1106.5944
Permanent URL: http://doi.org/10.5167/uzh-58318 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9192545413970947, "perplexity": 2109.1202046254966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661289.41/warc/CC-MAIN-20160924173741-00049-ip-10-143-35-109.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/69433/an-action-of-a-group-on-a-covering-space | # An action of a group on a covering space
We see $S_3$ as the quotient of the free group on two elements by the normal subgroup $R$ generated by $\langle\sigma^3,\tau^2,\sigma\tau\sigma\tau\rangle$, where $\sigma$ and $\tau$ are the generators of the free group. The covering space corresponding to $R$ of the bouquet of 2 circles should be the following:
Now $S_3$ acts on this covering space, and the action should have two orbits. Could you explain me how is this action? (I mean, what are the images of the single edges?)
Let us write $t=(12)$ and $s=(123)$, two elements of the symmetric group $S_3$ of degree $3$.
Construct a directed graph $\Gamma$ as follows:
• the vertices are the elements of $S_3$,
• if $g\in S_3$ is a vertex, there are two edges coming out of $g$ in $\Gamma$: one going from $g$ to $gt$ and the other going from $g$ to $gs$.
In other words, the set of edges is $$E=\{(g,gt)\in S_3\times S_3:g\in S_3\}\cup \{(g,gs)\in S_3\times S_3:g\in S_3\}.$$
We can draw a picture:
There is an action of $S_3$ on $\Gamma$ as follows: if $h\in S_3$, then
• the action of $h$ on the vertices of $\Gamma$ is by left multiplication by $h$: that is, a vertex $g\in S_3$ is mapped to $hg$;
• on the other hand, the action of $h$ on the edges is the induced one: if $(g_1,g_2)$ is one of the edges, then $h\cdot(g_1,g_2)=(hg_1,hg_2)$. It is easy to see that this latter element is, indeed, an edge of $\Gamma$.
It is very easy to see that the action of $S_3$ on the vertices of $\Gamma$ is simply transitive, so that the quotient graph $\Gamma/S_3$ has exactly one vertex, and that the action of $S_3$ on the edges of $\Gamma$ has exactly two orbits. It thus follows that $\Gamma/S_3$ is a two-leaved rose.
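To make the two orbits concrete, here is a small computational sketch (my own illustration, not part of the original answer: $t$ and $s$ are encoded as permutations of $\{0,1,2\}$, and `compose(p, q)` applies `q` first, then `p`):

```python
from itertools import permutations

def compose(p, q):                      # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

S3 = list(permutations(range(3)))
t = (1, 0, 2)                           # the transposition (12)
s = (1, 2, 0)                           # the 3-cycle (123)

# edges of Gamma: g -> g*t and g -> g*s for every vertex g
edges = {(g, compose(g, t)) for g in S3} | {(g, compose(g, s)) for g in S3}

def act(h, e):                          # left action of h on an edge
    return (compose(h, e[0]), compose(h, e[1]))

orbits, remaining = [], set(edges)
while remaining:
    e = remaining.pop()
    orbit = {act(h, e) for h in S3}     # orbit of a representative edge
    remaining -= orbit
    orbits.append(orbit)

print(len(orbits), [len(o) for o in orbits])   # 2 orbits of 6 edges each
```

The twelve edges split into the six $t$-edges and the six $s$-edges, each forming a single orbit, exactly as claimed.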
$$♦ ♦ ♦$$
Can you see how to go from this action of $S_3$ on $\Gamma$ to an action of $S_3$ on a CW-complex of dimension $1$, which is what you want?
It might help to remember that $S_3$ is the same thing as $D_3$. So let's look for a "triangle-ish" shape for $D_3$ to act on. As you've drawn your picture, you can almost see it: the upper and lower three-cycles form a pair of triangles; in this way the whole graph is best visualized as a sort of triangular prism. Now $\sigma$ will act as a rotation, and $\tau$ will swap the upper and lower triangles. From this way of looking at things, it's perhaps easier to see that $S_3$ acts transitively on triangle edges (the non-vertical edges in your picture) and also acts transitively on the vertical edges. So these are your two orbits.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150217771530151, "perplexity": 73.51810929217481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929422.8/warc/CC-MAIN-20150521113209-00262-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://www.drumtom.com/q/please-help-find-the-absolute-maximum-and-absolute-minimum-values-of-f-on-the-given-interval-f-x-x-3-7x-8-0-3 | # Please help! Find the absolute maximum and absolute minimum values of f on the given interval. f(x) = x^3 − 7x + 8, [0, 3]?
• Please help! Find the absolute maximum and absolute minimum values of f on the given interval. f(x) = x^3 − 7x + 8, [0, 3]?
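One standard way to answer this, sketched with the closed-interval method: $f'(x) = 3x^2 - 7 = 0$ gives $x = \sqrt{7/3} = \sqrt{21}/3 \approx 1.53$, which lies in $[0, 3]$. Evaluating $f$ at the critical point and the endpoints: $f(0) = 8$, $f(\sqrt{21}/3) = 8 - \frac{14\sqrt{21}}{9} \approx 0.87$, and $f(3) = 14$. So the absolute maximum is $14$ at $x = 3$ and the absolute minimum is $8 - \frac{14\sqrt{21}}{9} \approx 0.87$ at $x = \sqrt{21}/3$.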
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8614809513092041, "perplexity": 776.5123359505985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00157-ip-10-171-10-70.ec2.internal.warc.gz"}
http://www.gradesaver.com/textbooks/math/precalculus/precalculus-mathematics-for-calculus-7th-edition/chapter-1-section-1-6-complex-numbers-1-6-exercises-page-64/14 | ## Precalculus: Mathematics for Calculus, 7th Edition
The real part of the complex number $i\sqrt 3$ is 0 and the imaginary part is $\sqrt 3$.
Complex numbers are written in the form a+bi, where a is the real part and b is the imaginary part. Since there is no real term in this complex number, a must be equal to zero. Therefore b=$\sqrt 3$, so the real part is 0 and the imaginary part is $\sqrt 3$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9103220105171204, "perplexity": 183.95308468139189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648178.42/warc/CC-MAIN-20180323044127-20180323064127-00748.warc.gz"}
https://www.physicsforums.com/threads/quick-question-with-the-antederivative.148033/ | # Quick question with the antederivative
1. Dec 13, 2006
### ddr
hi!
who can help me to find the antiderivative of:
(x^3 + 3x^2)/(x^2 + 4x + 4)
thanks
2. Dec 13, 2006
### HallsofIvy
Staff Emeritus
You sure this is not homework?
First divide so that you have p(x) + (Ax + B)/(x^2 + 4x + 4): a polynomial plus a linear term over x^2 + 4x + 4. You should be able to integrate the polynomial easily. Now use the fact that x^2 + 4x + 4 = (x + 2)^2 and partial fractions.
(I just did the division myself: Ax+ B is simple enough that you don't need partial fractions!)
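For the record, a sketch of that division and the resulting integral (my own check, not from the thread):

$$\frac{x^3 + 3x^2}{x^2 + 4x + 4} = x - 1 + \frac{4}{(x+2)^2}, \qquad \int \frac{x^3 + 3x^2}{x^2 + 4x + 4}\,dx = \frac{x^2}{2} - x - \frac{4}{x+2} + C.$$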
Last edited: Dec 13, 2006 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9438806772232056, "perplexity": 4245.812921188646}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190754.6/warc/CC-MAIN-20170322212950-00447-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://appliedprobability.blog/2017/05/18/weighted-majority-algorithm/ | # Weighted Majority Algorithm
The Weighted Majority Algorithm is a randomized rule used to learn the best action amongst a fixed reference set.
We consider the following setting
• There is a fixed set of actions $i=1,...,N$ from which one may choose, and a set of outcomes $y\in{\mathcal Y}$.
• After choosing an action $i$, an outcome $y$ occurs and you receive a reward $r(i,y)\in [0,1]$. Over time a policy $\pi$ chooses actions $\pi_t$, $t=1,...,T$, and outcomes $y_t$, $t=1,...,T$, occur.
• It is assumed that the function $r:\{1,...,N\}\times {\mathcal Y} \rightarrow [0,1]$ is known by the policy.
• The accumulated net reward is
$\rho(\pi,T)=\sum_{t=1}^{T} r(\pi_t,y_t),\qquad\text{and, for a fixed action } i,\qquad \rho(i,T)=\sum_{t=1}^{T} r(i,y_t).$
We are interested in how our dynamic policy $\pi$ performs in comparison to each fixed policy $i=1,...,N$, that is a policy that chooses the same action $i$ at each time. In particular, we are interested in how this compares to the fixed policy which is retrospectively the best.
• For this reason, we consider the regret of policy $\pi$
$R(\pi,T)\ =\ \max_{i=1,...,N}\rho(i,T)\ -\ {\mathbb E}\,\rho(\pi,T).$
• We can only retrospectively find the best fixed policy, whilst our policy $\pi$ must behave adaptively based on historical information. Thus $R(\pi,T)$ quantifies how we regret not having had the information to have chosen the best fixed policy.
• Note a ‘good’ policy would have low regret. For instance, we might hope our dynamic policy is as good as the best fixed policy, i.e. $R(\pi,T)\leq 0$.
Using Blackwell’s Approachability Theorem, we saw that low regret policies exist. See Section [Blackwell], Theorem [blackwell:regret]. One of the key advantages of such algorithms is that they do not place statistical assumptions on the outcome sequence $y_1,...,y_T$. The Weighted Majority Algorithm is a further example of an algorithm that has asymptotically low regret and does not require statistical assumptions to be placed on the input sequence.
Weighted Majority Algorithm
For parameter $\eta>0$, the (exponentially) weighted majority algorithm is a randomized policy which chooses an action $i=1,...,N$ at time $t$ with probability $P(i,1)=\frac{1}{N}$ for $t=1$ and, for $t>1$,
$P(i,t)=\frac{w_{i,t}}{W(t)},\qquad\text{where}\qquad w_{i,t}=e^{\eta\rho(i,t-1)} \quad\text{and}\quad W(t)=\sum_{i=1}^N e^{\eta \rho(i,t-1)}.$
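As a concrete illustration, here is a minimal sketch of this update rule in Python (my own illustration: the reward matrix `rewards[t, i] = r(i, y_t)`, the function name, and the stability shift are assumptions, not from the original post):

```python
import numpy as np

def weighted_majority(rewards, eta, seed=0):
    """Run the exponentially weighted majority rule on a T x N reward
    matrix with rewards[t, i] = r(i, y_t); returns the chosen actions."""
    T, N = rewards.shape
    rho = np.zeros(N)                         # cumulative rewards rho(i, t-1)
    rng = np.random.default_rng(seed)
    choices = []
    for t in range(T):
        w = np.exp(eta * (rho - rho.max()))   # w_{i,t}, shifted for stability
        p = w / w.sum()                       # P(i, t); at t = 1 this is 1/N
        choices.append(rng.choice(N, p=p))
        rho += rewards[t]                     # update rho(i, t)
    return choices
```

Subtracting `rho.max()` before exponentiating leaves the probabilities unchanged but avoids overflow over long horizons.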
We can derive the following regret bound.
Theorem: If policy $\pi$ is the weighted majority algorithm, then:
(1) With $\eta$ fixed,
$R(\pi,T)\ \leq\ \frac{\log N}{\eta} + \frac{\eta T}{2};$
for choice $\eta=\sqrt{2T^{-1}\log(N)}$, the above bound states
$R(\pi,T)\ \leq\ \sqrt{2T\log N}.$
(2) For $\eta$ fixed,
${\mathbb E}\,\rho(\pi,T)\ \geq\ \frac{\eta\,\max_{i=1,...,N}\rho(i,T) - \log N}{e^{\eta}-1}.$
(3) For $\eta_t$ varying over time, we have
for choice $\eta_t=\sqrt{4t^{-1}\log(N)}$, the above bound states
Proof:
(1) Observe
In the second inequality, we apply the Azuma-Hoeffding Inequality, see Section [Azuma]. Taking logs and rearranging, we gain the required expression. Substituting $\eta=\sqrt{2T^{-1}\log(N)}$, we gain the second expression.
(2) We note that the following inequality holds for $x\in[0,1]$:
$e^{\eta x}\ \leq\ 1+(e^{\eta}-1)x\ \leq\ e^{(e^{\eta}-1)x}.$
In the first inequality, we bound the exponential by the line segment above it. In the second inequality, we apply the bound $1+x\leq e^x$. Applying this bound to the previous inequality, we have
Taking logs and rearranging gives the required bound.
(3) The crux of the proof of (1) was that the function $N^{-1} W(T) \exp\{-\eta {\mathbb E} \rho(\pi,T) - \eta^2 T/2\}$ is decreasing and thus bounded above by $1$. We apply the same principle, but we have to take account of the change in parameter $\eta_t$.
We can express our weights differently, notice,
$S(t)^{\eta_t^{-1}}$ will play a similar role to that of $N^{-1} W(T) \exp\{-\eta {\mathbb E} \rho(\pi,T) - \eta^2 T/2\}$ in the proof of (1):
Since $S(0)=1$, the sequence is decreasing and bounded above by $1$. Thus, as $\frac{1}{N}s_{i,T}\leq S(T)\leq 1$, we get the result. $\square$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 42, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9711071252822876, "perplexity": 440.5710737247074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480622.9/warc/CC-MAIN-20190216145907-20190216171907-00036.warc.gz"} |
https://www.imath.kiev.ua/~sigma/2019/058/ | ### Symmetry, Integrability and Geometry: Methods and Applications (SIGMA)
SIGMA 15 (2019), 058, 15 pages arXiv:1901.09951 https://doi.org/10.3842/SIGMA.2019.058
Contribution to the Special Issue on Algebraic Methods in Dynamical Systems
### Linear Differential Systems with Small Coefficients: Various Types of Solvability and their Verification
Moulay A. Barkatou a and Renat R. Gontsov bc
a) Laboratoire XLIM (CNRS UMR 72 52), Département Mathématiques-Informatique, Université de Limoges, Faculté des Sciences et Techniques, 123 avenue Albert Thomas, F-87060 LIMOGES Cedex, France
b) Institute for Information Transmission Problems RAS, Bolshoy Karetny per. 19, build. 1, Moscow 127051, Russia
c) Moscow Power Engineering Institute, Krasnokazarmennaya 14, Moscow 111250, Russia
Received January 30, 2019, in final form July 31, 2019; Published online August 09, 2019
Abstract
We study the problem of solvability of linear differential systems with small coefficients in the Liouvillian sense (or, by generalized quadratures). For a general system, this problem is equivalent to that of solvability of the Lie algebra of the differential Galois group of the system. However, dependence of this Lie algebra on the system coefficients remains unknown. We show that for the particular class of systems with non-resonant irregular singular points that have sufficiently small coefficient matrices, the problem is reduced to that of solvability of the explicit Lie algebra generated by the coefficient matrices. This extends the corresponding Ilyashenko-Khovanskii theorem obtained for linear differential systems with Fuchsian singular points. We also give some examples illustrating the practical verification of the presented criteria of solvability by using general procedures implemented in Maple.
Key words: linear differential system; non-resonant irregular singularity; formal exponents; solvability by generalized quadratures; triangularizability of a set of matrices.
pdf (373 kb) tex (20 kb)
References
1. Barkatou M.A., An algorithm to compute the exponential part of a formal fundamental matrix solution of a linear differential system, Appl. Algebra Engrg. Comm. Comput. 8 (1997), 1-23.
2. Barkatou M.A., Cluzeau T., On simultaneous triangularization of a set of matrices, in preparation.
3. Barkatou M.A., Cluzeau T., Weil J.A., Di Vizio L., Computing the Lie algebra of the differential Galois group of a linear differential system, in Proceedings of the 2016 ACM International Symposium on Symbolic and Algebraic Computation, ACM, New York, 2016, 63-70.
4. Barkatou M.A., Pflügel E., ISOLDE: a Maple package for solving systems of linear ODEs (1996), available at http://sourceforge.net/projects/isolde/.
5. Compoint E., Singer M.F., Computing Galois groups of completely reducible differential equations, J. Symbolic Comput. 28 (1999), 473-494.
6. Feng R., Hrushovski's algorithm for computing the Galois group of a linear differential equation, Adv. in Appl. Math. 65 (2015), 1-37, arXiv:1312.5029.
7. Gontsov R.R., On the dimension of the subspace of Liouvillian solutions of a Fuchsian system, Math. Notes 102 (2017), 149-155.
8. Gontsov R.R., Vyugin I.V., Solvability of linear differential systems with small exponents in the Liouvillian sense, Arnold Math. J. 1 (2015), 445-471.
9. Hrushovski E., Computing the Galois group of a linear differential equation, in Differential Galois Theory (Będlewo, 2001), Banach Center Publ., Vol. 58, Polish Acad. Sci. Inst. Math., Warsaw, 2002, 97-138.
10. Humphreys J.E., Introduction to Lie algebras and representation theory, Graduate Texts in Mathematics, Vol. 9, Springer-Verlag, New York - Berlin, 1972.
11. Kaplansky I., An introduction to differential algebra, Actualités Sci. Ind., Vol. 1251, Hermann, Paris, 1957.
12. Khovanskii A.G., On solvability and unsolvability of equations in explicit form, Russian Math. Surveys 59 (2004), 661-736.
13. Khovanskii A.G., Topological Galois theory: solvability and unsolvability of equations in finite terms, Springer Monographs in Mathematics, Springer, Heidelberg, 2014.
14. Kimura T., On Riemann's equations which are solvable by quadratures, Funkcial. Ekvac. 12 (1970), 269-281.
15. Kolchin E.R., Algebraic matric groups and the Picard-Vessiot theory of homogeneous linear ordinary differential equations, Ann. of Math. 49 (1948), 1-42.
16. Maciejewski A., Moulin-Ollagnier J., Nowicki A., Simple quadratic derivations in two variables, Comm. Algebra 29 (2001), 5095-5113.
17. Morales Ruiz J.J., Differential Galois theory and non-integrability of Hamiltonian systems, Progress in Mathematics, Vol. 179, Birkhäuser Verlag, Basel, 1999.
18. van der Hoeven J., Around the numeric-symbolic computation of differential Galois groups, J. Symbolic Comput. 42 (2007), 236-264.
19. Vidunas R., Differential equations of order two with one singular point, J. Symbolic Comput. 28 (1999), 495-520.
20. Vyugin I.V., Gontsov R.R., On the question of solubility of Fuchsian systems by quadratures, Russian Math. Surveys 67 (2012), 585-587.
21. Wasow W., Asymptotic expansions for ordinary differential equations, Pure and Applied Mathematics, Vol. 14, Interscience Publishers John Wiley & Sons, Inc., New York - London - Sydney, 1965.
22. Żoładek H., Polynomial Riccati equations with algebraic solutions, in Differential Galois Theory (Będlewo, 2001), Banach Center Publ., Vol. 58, Polish Acad. Sci. Inst. Math., Warsaw, 2002, 219-231.
23. Żoładek H., The monodromy group, Monografie Matematyczne, Vol. 67, Birkhäuser Verlag, Basel, 2006. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8710881471633911, "perplexity": 1991.783437949928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987798619.84/warc/CC-MAIN-20191022030805-20191022054305-00181.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php/User:Temperal/The_Problem_Solver%27s_Resource6 | # User:Temperal/The Problem Solver's Resource6
Introduction | Other Tips and Tricks | Methods of Proof | You are currently viewing page 6.
## Beginner/Intermediate Number Theory
This section covers number theory, specifically Fermat's Little Theorem, Wilson's Theorem,Euler's Totient Theorem, Quadratic residues, and the Euclidean algorithm.
To use this page, we recommend knowing the basics of Linear congruence, Modular arithmetic, and have a grasp of basic number theory needed for the AMC 10 and 12.
## Definitions
• if is the remainder when is divided by to give an integral amount. Also, this means b divides (n-a).
• (or divides ) if for some integer .
• is the greek letter phi. is the number of integers less than or equal to m that are at the same time relatively prime to n. If the prime factorization of n is , .
## Special Notation
Occasionally, if two equivalent expressions are both modulated by the same number, the entire equation will be followed by the modulo.
refers to the greatest common factor of and refers to the lowest common multiple of .
## Properties
For any number there will be only one congruent number modulo between and .
If and , then .
• , where is a positive integer that divides and .
### Fundamental Theorem of Arithmetic
The Fundamenal Theorem of Arithmetic is fairly clear, yet is extremely important. It states that any integer n greater than one has a unique representation as a product of primes. It has a very interesting proof; attempt to prove it using contradiction.
### Fermat's Little Theorem
For a prime and a number such that , . A frequently used result of this is .
#### Example Problem 1
Find all primes p such that .
##### Solution
Firstly, p=2 clearly does not work. Now, as all other primes are odd, and hence . After adding one, we have since p divides . However, that means p must divide 3, so the only prime possible is 3. Indeed, is a multiple of 3.
### Wilson's Theorem
For a prime , .
#### Example Problem 2
Let be an integer such that . Find the remainder when is divided by .
##### Solution
After multiplying through by , we know that every term on the left-hand-side will be divisible by 13 except for . We wish to find the remainder when is divided by 13. From Wilson's Theorem, we know that so we consider (mod 13). Thus, the remainder is which comes out to be 7. Thus, our answer is 7.
### Euler's Phi Theorem
If , then , where is the number of relatively prime numbers lower than . This is mostly a generalization of Fermat's Little Theorem, although much more useful.
An integer n is a quadratic residue (mod m) if and only if there exists an integer p such that . Some useful facts are that all quadratic residues are or and , , or . All cubic residues (mod 9) are 0, 1, or -1.
#### Example Problem 3
Does there exist an integer such that its cube is equal to , where n is an integer? (IMO longlist 1967)
##### Solution
Consider (mod 9), and n (mod 3). If n is divisible by 3, is clearly divisible by 9. If n is congruent to 1 (mod 3), is congruent to 6 (mod 9). If n is congruent to 2 (mod 3), then . As n+1 is divisible by 3, it is congruent to 0 (mod 9). Hence, is either 7 or 4 (mod 9). However, all cubes are 0,1, or -1 (mod 9), so there does not exist such an integer.
### Solving Linear Congruences
As mentioned at the top, you should at least know how to solve simple linear congruences, with just one linear congruence. However, solving with two or more congruences is more complex, and many times there is not even a solution. The Chinese Remainder Theorem shows when the congruences do have a unique solution. *to be continued* | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9349793791770935, "perplexity": 403.3489951442834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401582033.88/warc/CC-MAIN-20200927215009-20200928005009-00644.warc.gz"} |
http://water.huji.ac.il/publications/year/2012 | ## פרסומים של בוגרי התוכנית
רבים מבוגרי התוכנית פרסמו מאמרים אקדמים בתחומי הידרולוגיה שונים.
# Publications
2012
Peleg, N. & Morin, E., 2012. Convective rain cells: Radar-derived spatiotemporal characteristics and synoptic patterns over the eastern Mediterranean. Journal of Geophysical Research, 117, p. D15116. Publisher's Version. Abstract:
This paper examines the spatiotemporal characteristics of convective rain cells over the eastern Mediterranean (northern Israel) and their relationship to synoptic patterns. Information on rain cell features was extracted from high-resolution weather radar data. The radar-gauge adjustment, validation, cell segmentation and tracking techniques are discussed at length at the beginning of the paper. Convective rain cells were clustered into three synoptic types (two winter lows—deep Cyprus lows and shallow lows—and one tropical intrusion, Active Red Sea Trough) using several NCEP/NCAR parameters, and empirical distributions were computed for their spatial and temporal features. In the study region, it was found that the Active Red Sea Trough rain cells are larger, live for less time and possess lower rain intensities than the rain cells generated by the winter lows. The Cyprus low rain cells were found to be less intense and slightly larger on average than the shallow low rain cells. It was further discovered that the preferential orientation of the rain cells is associated with the direction and velocity of the wind. The effect of distance from the coastline was also examined. An increase in the number and area of the rain cells near the coastline was observed, presumably due to the sea breeze convection. The mean rainfall intensity was found to peak near the shore and decrease with distance inland. This information is of great importance for understanding rain patterns and can be further applied in exploring the hydrological responses of the basins in this region. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8901508450508118, "perplexity": 4205.497114542467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323842.29/warc/CC-MAIN-20170629015021-20170629035021-00451.warc.gz"} |
https://www.physicsforums.com/threads/find-velocity-of-a-long-jumper.438570/ | Find Velocity of a long jumper
• #1
Homework Statement
An athlete executing a long jump leaves the ground at an angle of 30 degrees and travels 8.90m. What was the take-off speed?
Homework Equations
d(vertical)=(vi)(t) + 0.5at^2
The Attempt at a Solution
0=Vsin30t + (0.5)(-9.8)(t^2)
I cannot solve for two separate variables, how can I solve this problem?
• #2
EDIT: I see what you've done.
Right, in your attempt, d does not equal 0. There has to be some vertical height.
For horizontal you know:
d = 8.9m
For vertical you know:
a = -9.8 for the ascent and 9.8 for the descent.
You know vf for the ascent = 0 and vi for the descent = 0.
Last edited:
• #3
that is the vertical, there is no horizontal acceleration
Vertical:
Vi=?
Vf=?
a=-9.8m/s^2
t=?
d=0
Horizontal:
Vave=?
t=?
d=8.90m
• #4
that is the vertical, there is no horizontal acceleration
Vertical:
d=0
d does not equal 0. There has to be vertical height.
The vertical phase is split in two. See previous post for details.
Unless you know the flight time, this question all comes down to the vertical components.
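For reference, one quick way to finish (a sketch using the standard level-ground range formula, which this problem implies; air resistance neglected):

$$R = \frac{v^2 \sin 2\theta}{g} \quad\Rightarrow\quad v = \sqrt{\frac{gR}{\sin 2\theta}} = \sqrt{\frac{9.8 \times 8.90}{\sin 60^\circ}} \approx 10.0\ \text{m/s}.$$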
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9409395456314087, "perplexity": 4492.210057882065}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153803.69/warc/CC-MAIN-20210728220634-20210729010634-00626.warc.gz"}
http://mathhelpforum.com/calculus/202933-limit-proof-required-special-case-chain-rule-print.html | # Limit proof required for special case of chain rule
• September 4th 2012, 06:06 PM
lamp23
Limit proof required for special case of chain rule
I am trying to figure out how to prove the equality I circled below in red. I have figured out how to prove the text in blue but don't know how to use that to prove the equality I circled in red.
Below I will post the givens I'm trying to use and my guess of how to prove it.
P.S. I understand this only proves the chain rule in the special case where $\Delta u \neq 0$. This is from Stewart's Calculus and he does mention that this is not a full proof but I'm very curious how to prove this special case anyway.
http://i900.photobucket.com/albums/a...amp23/calc.jpg
I'm not sure if I'm using the right givens below.
In their blue form it looks like I will be able to use the transitive property of implication $(a \rightarrow b \wedge b \rightarrow c) \rightarrow (a \rightarrow c)$ if I can always let the $\epsilon$ from the 1st given equal the $\delta$ from the second line.
http://i900.photobucket.com/albums/a...3/deltau-2.jpg
• September 4th 2012, 07:15 PM
Prove It
Re: Limit proof required for special case of chain rule
• September 4th 2012, 09:52 PM
lamp23
Re: Limit proof required for special case of chain rule
I'm not looking for a way to prove the product of the limits is the limit of the product. I'm looking for a way to prove the two expressions I circled in red are equal, i.e. $\lim_{\Delta x\to 0}\frac{\Delta y}{\Delta x} = \lim_{\Delta u\to 0}\frac{\Delta y}{\Delta x}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9362544417381287, "perplexity": 211.01186825718227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999668190/warc/CC-MAIN-20140305060748-00043-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/600502/s-acting-on-a-spin-chain-raises-the-entropy-by-at-most-ln2/633636#633636 | # $S^+$ acting on a spin chain raises the entropy by at most $\ln(2)$
Consider the operator $$S^+ = \sum_{i=1}^L S^+_i$$ acting on a spin-chain of spin-1/2 particles. Denote the half-chain Von Neumann entanglement entropy of a state $$|\psi\rangle$$ by $$\mathbb{S}[|\psi\rangle]$$. (For simplicity in notation in the following, take $$\mathbb{S}[0] = 0$$.)
Consider the following example. Define the state $$|\Omega\rangle$$ as having all spins down: $$|\Omega\rangle = \otimes_{i=1}^L|\downarrow\rangle$$. Then $$\mathbb{S}[|\Omega\rangle] =0$$ because $$|\Omega\rangle$$ is a product state, while $$\mathbb{S}[S^+|\Omega\rangle] = \ln(2)$$, as can be seen by a quick Schmidt decomposition by hand. Notice in particular that
$$\mathbb{S}[S^+|\Omega\rangle] - \mathbb{S}[|\Omega\rangle] = \ln(2)$$
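For small $L$ this is easy to verify numerically; below is a quick sketch (the basis ordering, the matrix form of $S^+_i$, and all names are my own choices, not from the question):

```python
import numpy as np

L = 4
down = np.array([1.0, 0.0])                  # |down> in this basis choice

# |Omega> = all spins down, as a 2^L-component state vector
omega = down
for _ in range(L - 1):
    omega = np.kron(omega, down)

sp = np.array([[0.0, 0.0],                   # single-site raising operator:
               [1.0, 0.0]])                  # maps |down> to |up> here
I2 = np.eye(2)

def site_op(op, i):
    """Embed a single-site operator at site i of the L-site chain."""
    out = np.array([[1.0]])
    for j in range(L):
        out = np.kron(out, op if j == i else I2)
    return out

Splus = sum(site_op(sp, i) for i in range(L))

psi = Splus @ omega
psi /= np.linalg.norm(psi)

# half-chain entropy: SVD of the state reshaped into a (left, right) matrix
s = np.linalg.svd(psi.reshape(2 ** (L // 2), -1), compute_uv=False)
p = s[s > 1e-12] ** 2
print(-np.sum(p * np.log(p)), np.log(2))     # both print ~0.693
```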
After playing around a little with numerics, I have the following conjecture: $$\max_{|\psi\rangle \in \mathscr{H}} \left( \mathbb{S}[S^+|\psi\rangle] - \mathbb{S}[|\psi\rangle] \right)= \ln(2)$$
That is, $$S^+$$ can only increase the entropy of a state by $$\ln(2)$$ and no more. Similarly, I conjecture that $$\max_{|\psi\rangle \in \mathscr{H}} \left( \mathbb{S}[(S^+)^n|\psi\rangle] - \mathbb{S}[|\psi\rangle] \right)=\mathbb{S}[(S^+)^n|\Omega\rangle] - \mathbb{S}[|\Omega\rangle]$$
Are these conjectures correct? How can I prove these conjectures?
A small piece of supporting evidence (not near a proof) for the first conjecture is that it is easy to check that all product states $$|p\rangle$$ in the $$S^z$$-basis obey $$\mathbb{S}[S^+|p\rangle] \leq \ln(2)$$, as the resulting state's Schmidt decomposition has at most two states. Another small piece of supporting evidence, when I feed in random states for $$|\psi\rangle$$, the entanglement entropy decreases relative to the entropy of the random state. However, this is just supporting evidence that is far from a statement about all possible states in the Hilbert space.
• Only saw this now: The point is that the operator S+ has Schmidt rank 2 (as an operator), and thus, it can increase the Schmidt rank of any state by at most a factor of 2. However, what happens to the Schmidt coefficients can be rather different, as you correctly note in your answer. May 2 at 22:03
• @NorbertSchuch Thanks, that adds clarity to what's happening. May 2 at 22:34
My conjectures in my question were incorrect. Since $$S^+$$ can destroy certain states, it's possible to begin with a low entanglement entropy state that gains a high entropy after being acted upon by $$S^+$$.
For example, consider (for some large $$L$$) a state $$|\psi\rangle = \sqrt{.999999999} \otimes_{i=1}^L|\uparrow\rangle_i + \sqrt{.000000001} (\text{scrambled, normalized superposition of a massive number of states}).$$ The entropy of this initial state arising from the second term is suppressed by the tiny coefficient. However, the action of $$S^+$$ will destroy the first term, removing the suppression of the second term after normalizing. The second term after being acted upon by $$S^+$$ will still have some large amount of entanglement, perhaps smaller than the second term had before, but still a much larger amount than the full initial state. Thus, the entanglement of the final state will be much larger than the initial state, easily exceeding $$\ln(2)$$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9637705087661743, "perplexity": 164.6571515847266}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00048.warc.gz"} |
https://wellness-trends.com/does-the-cold-climate-affect-the-spread-of-coronavirus/health/ | ## A study conducted by the GVN, the Global Virus Network, would have identified a correlation between temperature and diffusion rate of the coronavirus: let’s find out.
Is there a relationship between the temperature and the rate at which coronavirus spreads? The answer comes from the GVN, the Global Virus Network. The prestigious international coalition would have conducted a study based on the observation of the diffusion rate of COVID-19, in relation to the climate of a given place.
The study gave us an answer: temperature affects the propagation of the coronavirus. How? Let's find out.
### Coronavirus, does the climate affect?
The study conducted by the Global Virus Network (GVN) stems from the observation that in hot countries the virus would seem to progress more slowly than in other countries.
In fact, through the analysis of the statistics , it is possible to establish that the coronavirus would seem to spread differently according to the temperature. In particular, it would have spread more insistently in areas with colder or temperate climates, where the average temperature is between 5 and 11 degrees on the Celsius scale.
Not surprisingly, COVID-19 has spread significantly in Europe and the United States, without however being able to effectively hit the hottest countries of South Asia and Africa. Obviously, the virus has also reached these areas, but the spread would seem slower and the number of significant outbreaks less.
### Coronavirus and seasonal flu: cold weather helps spread
Like seasonal flu, the coronavirus appears to be greatly influenced by temperature: both spread more easily in colder climates. The hope is that, with the arrival of spring and then of summer, the emergency can slowly recede until it disappears completely.
However, the fact that the virus tends to expand more easily in colder areas does not necessarily mean that with the arrival of the heat it will disappear. However, it could slow down its spread, helping doctors to better treat patients and allowing more time for scientists to find a possible vaccine .
Photo source: https://pixabay.com/it/photos/termometro-estate-heiss-di-calore-4294021/ | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.863381028175354, "perplexity": 1237.7726826869464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988953.13/warc/CC-MAIN-20210509002206-20210509032206-00484.warc.gz"} |
https://www.physicsforums.com/threads/double-integrals-in-polar-coordinates.325701/ | # Double integrals in polar coordinates
1. Jul 18, 2009
### compliant
1. The problem statement, all variables and given/known data
Find
$$\int{\int_{D}x dA}$$
where D is the region in Q1 between the circles x2+y2=4 and x2+y2=2x using only polar coordinates.
3. The attempt at a solution
Well, the two circles give me r=2 and r=2 cos $$\theta$$, and the integrand is going to be r2cos $$\theta$$, but I have no idea how to determine the bounds of integration in this case.
2. Jul 18, 2009
### tiny-tim
Hi compliant!
(have a theta: θ and a pi: π )
Just integrate θ from 0 to 2π (or -π and π), and integrate r between whatever values it goes between for a fixed value of θ.
3. Jul 18, 2009
### HallsofIvy
Staff Emeritus
I would recommend first drawing a picture. $x^2+ y^2= 4$ is, of course, a circle with center at (0,0) and radius 2. $x^2+ y^2= 2x$, i.e. $x^2- 2x+ y^2= 0$ or $x^2- 2x+ 1+ y^2= (x- 1)^2+ y^2= 1$, is a circle with center at (1, 0) and radius 1: it is tangent to the y-axis at (0,0) and tangent to the first circle at (2, 0). Now think in terms of polar coordinates. Both equations become very simple in polar coordinates. What drawing the graph tells you is that you will want to handle the integration in three parts: $\theta= 0$ to $\pi/2$, $\theta= \pi/2$ to $3\pi/2$, and $\theta= 3\pi/2$ to $2\pi$.
As tiny-tim suggested, the outside radius (the upper limit of integration) is always 2, and the inner radius (the lower limit of integration), for $\theta= 0$ to $\pi/2$, is $r = 2\cos\theta$.
4. Jul 19, 2009
### compliant
tiny-tim, thanks for those. desperately needed.
hallsofivy, I did draw the diagram, and found that it was rather inconveniently symmetrical, which was why I got stumped. going by your suggestion, from θ = 0 to θ = π/2, I would be integrating along the right side of the curve, where the upper bound is r = 2, and the lower bound is r = 2 cos θ. I would then solve accordingly, with r2 cos θ as the integrand.
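Carrying that plan through (a sketch of the computation, assuming the Q1 region between the two curves for $0 \le \theta \le \pi/2$):

$$\int_0^{\pi/2}\!\int_{2\cos\theta}^{2} r^2\cos\theta\,dr\,d\theta = \frac{8}{3}\int_0^{\pi/2}\left(\cos\theta - \cos^4\theta\right)d\theta = \frac{8}{3}\left(1 - \frac{3\pi}{16}\right) = \frac{8}{3} - \frac{\pi}{2}.$$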
I'm just wondering though, how is the left side of the curve from θ = π/2 to θ = 3π/2 and not θ = π/2 to θ = π ? And as for the third part of the curve that goes from θ = 3π/2 to θ = 2π, that's...a straight line. =/
Argh.
5. Jul 21, 2009
### compliant
sorry to do this, but bump.
6. Jul 22, 2009
### tiny-tim
Hi compliant !
I'm confused
The area is between two circles, one touching both the edge and the centre of the other.
So there are two regions:
the "left" region, which is simply a semicircle, so you know the answer already, and you needn't integrate at all (though if you did, you would integrate a constant, over the whole angle π/2 to 3π/2)
and the "right" region, which is from -π/2 to π/2, which you seem to be ok with.
Similar Discussions: Double integrals in polar coordinates | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9839168787002563, "perplexity": 697.7129081886227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805023.14/warc/CC-MAIN-20171118190229-20171118210229-00565.warc.gz"} |
http://math.stackexchange.com/questions/712482/on-number-of-different-factorizations-over-integers-of-a-number-field | # On number of different factorizations over integers of a number field
Let $K$ be a finite field extension of the rational numbers and let $\mathcal{O}_K$ denote its ring of integers. If a rational integer $n$ factors in two distinct ways into irreducible elements in $\mathcal{O}_K$, that is,
$$n = \prod{a_j} = \prod{b_j},$$
where $a_j, b_j$ are irreducible and no $a_j$ is associate to any $b_j$, then $n^2 = \prod{a_j}^2 = \prod{b_j}^2 = \prod{a_j}\prod{b_j}$ has at least three distinct factorizations; by taking powers of $n$ one thus can see that the number of distinct factorizations of rational integers is unbounded. Is the number of distinct "primitive" factorizations of rational integers over $\mathcal{O}_K$ bounded, that is, factorizations that do not arise from a construction as above (that is, cannot be split into different "sub-factorizations")? If yes, can this bound be given explicitly in terms of the size of the class group and $[K : \mathbb{Q}]$?
You might be interested in this question: math.stackexchange.com/questions/538959/… – John M Mar 17 '14 at 14:38
The question has been answered here: mathoverflow.net/questions/162916/… – streetcar277 Jun 19 '14 at 11:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8877618908882141, "perplexity": 187.81773940824732}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121934081.85/warc/CC-MAIN-20150124175214-00016-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://ru.scribd.com/document/390591295/Chapter-1 | # Chapter One
## Stresses in Soils from Surface Loads
1. Introduction
Various types of loads are applied to soils. For example, an oil tank will impose a uniform circular,
vertical stress on the surface of the soil while an unsymmetrical building may impose a non-uniform
vertical stress. We would like to know how the surface stresses are distributed within the soil mass
and the resulting deformations.
The distribution of surface stresses within a soil is determined by assuming that the soil is a semi-
infinite, homogeneous, linear, isotropic, elastic material. A semi-infinite mass is bounded on one
side and extends infinitely in all other directions; this is also called an “elastic half space.” For soils,
the horizontal surface is the bounding side. Equations and charts for several types of surface loads based
on the above assumptions are presented in this chapter.
## 1.1 Definitions of Key Terms
Stress or intensity of loading is the load per unit area. The fundamental definition of a stress is the ratio
of the force ΔP acting on a plane to the area of the plane ΔS when ΔS tends to zero; Δ denotes a small
quantity.
Strain or intensity of deformation is the ratio of the change in dimension to the original dimension
or the ratio of change in length to the original length.
Sample Practical Situation Two storage tanks are to be founded on a deep layer of stiff saturated
clay. Your client and the mechanical engineer, who is designing the pipe works, need an estimate of
the settlement of the tanks when they are completely filled. Because of land restrictions, your client
desires that the tanks be as close as possible to each other. You should realize that if two separate
foundations are placed too close to each other, the stresses in the soil induced by each foundation
overlap and cause intolerable tilting of the structures and their foundations.
1.2. Stresses and Strains
## 1.2.1. Normal Stresses and Strains
Consider a cube of dimensions Δx = Δy = Δz that is subjected to forces Px, Py, Pz, normal to the three adjacent sides as shown in Fig. 1.1.
The normal stresses are:
$\sigma_x = \frac{P_x}{\Delta y\,\Delta z},\qquad \sigma_y = \frac{P_y}{\Delta x\,\Delta z},\qquad \sigma_z = \frac{P_z}{\Delta x\,\Delta y}$
Let us assume that under these forces the cube compresses by δx, δy, and δz in the X, Y, and Z directions. The strains in these directions, assuming they are small (infinitesimal), are:
$\varepsilon_x = \frac{\delta x}{\Delta x},\qquad \varepsilon_y = \frac{\delta y}{\Delta y},\qquad \varepsilon_z = \frac{\delta z}{\Delta z}$
## 1.2.2 Volumetric Strain
The volumetric strain, which is the sum of the strains in the X, Y, and Z directions, is expressed as follows:
$\varepsilon_p = \varepsilon_x + \varepsilon_y + \varepsilon_z$
## 1.2.3 Shear Stresses and Shear Strains
Let us consider, for simplicity, the XZ plane and apply a force F that causes the square to distort into a
parallelogram as shown in Fig. 1.2. The force F is a shearing force and the shear stress is expressed as
follows:
$\tau_{zx} = \frac{F}{A}$, where $A$ is the area of the face on which the force $F$ acts.
The simple shear strain, also called engineering shear strain, is a measure of the angular distortion of a body by shearing forces. If the horizontal displacement is Δx, the shear strain or simple shear strain, $\gamma_{zx}$, is expressed as:
$\tan\gamma_{zx} = \frac{\Delta x}{\Delta z}$
For small strains, $\tan\gamma_{zx} = \gamma_{zx}$ and therefore:
$\gamma_{zx} = \frac{\Delta x}{\Delta z}$
If the shear stress on a plane is zero, the normal stress on that plane is called a principal stress. In
geotechnical engineering, compressive stresses in soils are assumed to be positive. Soils cannot
sustain any appreciable tensile stresses and we normally assume that the tensile strength of soils is
negligible. Strains can be compressive or tensile.
1.3 Stresses in Soil from Surface Loads
## 1.3.1 Point Load
Boussinesq (1885) presented a solution for the distribution of stresses for a point load applied on the
soil surface. An example of a point load is the vertical load transferred to the soil from an electric
power line pole. The increases in stresses on a soil element located at point A (Fig. 1.3a) due to a point
load, Q, are:
$\Delta\sigma_z = \frac{3Q}{2\pi}\,\frac{z^3}{(r^2+z^2)^{5/2}} \qquad (1.6)$
Equations (1.10) to (1.13) represent the same stresses in terms of the angle and depth z,
where ν is Poisson's ratio. Most often, the increase in vertical stress is needed in practice. Equation
(1.6) can be written as:
where I is an influence factor, and
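In the standard form:

$$\Delta\sigma_z = \frac{Q}{z^2}\,I, \qquad I = \frac{3}{2\pi}\left[\frac{1}{1+(r/z)^2}\right]^{5/2}$$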
The distribution of the increase in vertical stress from Eq. (1.14) reveals that the increase in
vertical stress decreases with depth (Fig. 1.3b) and radial distance (Fig. 1.3c).
EXAMPLE 1.1
A pole carries a vertical load of 200 kN. Determine the vertical stress increase at a depth of 5 m (a)
directly below the pole and (b) at a radial distance of 2 m.
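A solution sketch using the influence factor quoted above (so these numbers follow from that standard formula): (a) directly below the pole r/z = 0, so I = 3/(2π) = 0.477 and Δσz = 0.477 × 200/5² ≈ 3.8 kPa; (b) at r/z = 2/5 = 0.4, I = 0.477/(1.16)^(5/2) ≈ 0.329 and Δσz ≈ 0.329 × 200/25 ≈ 2.6 kPa.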
EXAMPLE 1.2
Under a concentrated vertical load of 100 kN, determine:
a) The distribution of σz on horizontal planes at depths of 1 m, 2 m, and 3 m below the
ground surface - vary r/z from 0.0 to 3.0 in 0.25 increments,
b) The distribution of σz on the vertical planes at 1 m, 2 m, and 3 m from the applied
concentrated load and the position of the maximum vertical stress on these planes - vary z
in 0.5 m increments up to 6 m.
1.3.2 Line Load
With reference to Fig. 1.4a, the increase in stresses due to a line load, Q (force/length), are:
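In the standard (plane-strain Boussinesq) form, with x the horizontal distance from the line load:

$$\Delta\sigma_z = \frac{2Qz^3}{\pi(x^2+z^2)^2}, \qquad \Delta\sigma_x = \frac{2Qx^2z}{\pi(x^2+z^2)^2}, \qquad \Delta\tau_{zx} = \frac{2Qxz^2}{\pi(x^2+z^2)^2}$$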
A practical example of line load is the load from a long brick wall.
## 1.3.3 Line Load Near a Buried Earth Retaining Structure
The increase in lateral stress on a buried earth retaining structure (Fig. 1.4b) due to a line load of
intensity Q (force/length) is:
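A commonly quoted form (following Fig. 1.4b, with the line load acting at a distance aHo behind a wall of height Ho and the stress evaluated at depth bHo; the exact expression depends on the figure's notation) is:

$$\Delta\sigma_x = \frac{4Q}{\pi H_o}\cdot\frac{a^2 b}{(a^2+b^2)^2}$$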
1.3.4 Strip Load
A strip load is the load transmitted by a structure of finite width and infinite length on a soil surface. Two
types of strip loads are common in geotechnical engineering. One is a load that imposes a uniform stress on
the soil, for example, the middle section of a long embankment (Fig. 1.5a). The other is a load that induces a
triangular stress distribution over an area of width B (Fig. 1.5b). An example of a strip load with a triangular
stress distribution is the stress under the side of an embankment. The increases in stresses due to a
surface stress qs (force/area) are as follows:
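(a) Area transmitting uniform stress (Fig. 1.5a). In the standard notation, with α and β the angles defined in the figure (in radians), the classical results are:

$$\Delta\sigma_z = \frac{q_s}{\pi}\big[\alpha + \sin\alpha\cos(\alpha+2\beta)\big], \qquad \Delta\sigma_x = \frac{q_s}{\pi}\big[\alpha - \sin\alpha\cos(\alpha+2\beta)\big]$$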
Figure 1.5: Strip load imposing (a) a uniform stress and (b) a linearly varying stress. (c) Strip load
near a retaining wall and (d) lateral force near a retaining wall from a strip load.
(b) Area transmitting triangular stress (Fig. 1.5b)
(c) Strip load near a retaining wall (Fig. 1.5c, d)
The lateral force and its location were derived by Jarquio (1981) and are expressed as follows:
where
## 1.3.5 Uniformly Loaded Circular Area
An example of a circular area that transmits stresses to a soil mass is the circular foundation of an oil
or water tank. The increases in vertical and radial stresses under a uniformly loaded circular area of radius
ro are as follows:
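Under the center of the loaded area, the standard result for the vertical stress increase is:

$$\Delta\sigma_z = q_s\left[1-\left(\frac{1}{1+(r_o/z)^2}\right)^{3/2}\right]$$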
and the increase in radial stress is:
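Under the center, the standard elastic result (with ν Poisson's ratio) is:

$$\Delta\sigma_r = \Delta\sigma_\theta = \frac{q_s}{2}\left[(1+2\nu) - \frac{2(1+\nu)}{\left[1+(r_o/z)^2\right]^{1/2}} + \frac{1}{\left[1+(r_o/z)^2\right]^{3/2}}\right]$$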
The vertical elastic settlement at the surface due to a flexible loaded circular area is:
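For a flexible circular load of diameter D on an elastic half space with modulus E and Poisson's ratio ν, the standard center settlement is:

$$\rho_z = \frac{q_s D(1-\nu^2)}{E}$$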
## 1.3.6 Uniformly Loaded Rectangular Area
Many structural foundations are rectangular or approximately rectangular in shape. The increase
in stresses below the corner of a rectangular area of width B and length L are:
These equations can be written as follows:
where I denotes the influence factor. The influence factor for the vertical stress is
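so that Δσz = qs Iz, with the classical (Newmark) influence factor:

$$I_z = \frac{1}{4\pi}\left[\frac{2mn\sqrt{m^2+n^2+1}}{m^2+n^2+1+m^2n^2}\cdot\frac{m^2+n^2+2}{m^2+n^2+1} + \tan^{-1}\!\left(\frac{2mn\sqrt{m^2+n^2+1}}{m^2+n^2+1-m^2n^2}\right)\right]$$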
where m = B/z and n = L/z. You can program your calculator or use a spreadsheet to find Iz.
You must be careful with the last term (tan⁻¹) in programming: if m² + n² + 1 < m²n², then you
have to add π to the quantity in the last term. In general, the vertical stress increase is less than
10% of the surface stress when z > 3B. The vertical elastic settlement at the ground surface under
a rectangular surface load is:
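In the standard elastic form (E is the soil modulus, ν its Poisson's ratio, and the settlement is taken at the center of a flexible load):

$$\rho_z = \frac{q_s B(1-\nu^2)}{E}\, I_s$$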
where Is is a settlement influence factor that is a function of the L/B ratio (L is length and B is
width). Setting ξs = L/B, the equations for Is are:
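A classical (Schleicher-type) form for the center of a flexible rectangle is:

$$I_s = \frac{2}{\pi}\left[\ln\!\left(\xi_s+\sqrt{1+\xi_s^2}\right)+\xi_s\,\ln\!\left(\frac{1+\sqrt{1+\xi_s^2}}{\xi_s}\right)\right]$$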
The above equations can be simplified to the following for ξs > 1:
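A frequently quoted approximation of the factor above is:

$$I_s \approx 0.62\,\ln(\xi_s) + 1.12$$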
## 1.3.7 Approximate Method for Rectangular Loads
In preliminary analyses of vertical stress increases under the center of rectangular loads,
geotechnical engineers often use an approximate method (sometimes called the 2:1 method). The
surface load on an area B × L is dispersed at a depth z over an area (B + z) × (L + z) as illustrated
in Fig. 1.6.
The vertical stress increase under the center of the rectangle is:
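That is:

$$\Delta\sigma_z = \frac{q_s\,B\,L}{(B+z)(L+z)}$$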
EXAMPLE 1.3
A rectangular concrete slab, 3 m×4.5 m, rests on the surface of a soil mass (Fig. E1.3). The load
on the slab is 2025 kN. Determine the vertical stress increase at a depth of 3 m (a) under the
center of the slab, point A, (b) under point B, and (c) at a distance of 1.5 m from a corner, point C.
Strategy: The slab is rectangular and the equations for a uniformly loaded rectangular area are
for the corner of the area. You should divide the area so that the point of interest is the corner of
a rectangle(s). You may have to extend the loaded area if the point of interest is outside it.
The extension is fictitious, so you have to subtract the fictitious increase in stress
for the extended area.
Figure E1.3
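A rough check on part (a), using the formulas above rather than the original worked solution: the applied stress is qs = 2025/(3 × 4.5) = 150 kPa. Dividing the slab into four 1.5 m × 2.25 m rectangles with their common corner at A gives m = 1.5/3 = 0.5 and n = 2.25/3 = 0.75, so Iz ≈ 0.107 per rectangle and Δσz ≈ 4 × 0.107 × 150 ≈ 64 kPa. The cruder 2:1 method gives Δσz = 2025/[(3+3)(4.5+3)] = 45 kPa, illustrating how approximate that method can be at shallow depth ratios.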
## 1.3.8 Vertical Stress below Arbitrarily Shaped Area
Figure 1.7: Newmark's chart for increase in vertical stress.
Newmark (1942) developed a chart to determine the increase in vertical stress due to a uniformly
loaded area of any shape. The chart consists of concentric circles divided by radial lines (Fig.1.7).
The area of each segment represents an equal proportion of the applied surface stress at depth z
below the surface. If there are 10 concentric circles (only 9 are shown because the 10th extends
to infinity) and 20 radial lines, the stress on each circle is qs/10 and on each segment is qs/(10×20).
The radius to depth ratio of the first (inner) circle is found by setting Δσz = 0.1qs in Eq. (1.30),
that is,
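Assuming Eq. (1.30) is the circular-load expression from Section 1.3.5, this reads:

$$0.1\,q_s = q_s\left[1-\left(\frac{1}{1+(r/z)^2}\right)^{3/2}\right]$$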
from which r/z = 0.27. For the other circles, substitute the appropriate value for Δσz; for example,
for the second circle, Δσz = 0.2qs, and find r/z. The chart is normalized to the depth; that is, all
dimensions are scaled by a factor initially determined for the depth. Every chart should show a
scale and an influence factor IN, which for our case is 1/(10×20) = 0.005.
The procedure for using Newmark’s chart is as follows:
1. Set the scale, shown on the chart, equal to the depth at which the increase in vertical stress is required.
We will call this the depth scale.
2. Identify the point on the loaded area below which the stress is required. Let us say this point is point A.
3. Plot the loaded area using the depth scale with point A at the center of the chart.
4. Count the number of segments (Ns) covered by the scaled loaded area. If certain segments are not fully
covered, you can estimate what fraction is covered.
5. Calculate the increase in vertical stress as Δσz = qs IN Ns.
EXAMPLE 1.4
The plan of a foundation of uniform thickness for a building is shown in Fig. E1.4a. Determine
the vertical stress increase at a depth of 4 m below the centroid. The foundation applies a vertical
stress of 200 kPa on the soil surface.
Figure E1.4 a, b
Strategy: You need to locate the centroid of the foundation, which you can find using the given
dimensions. The shape of the foundation does not fit neatly into one of the standard shapes (e.g.,
rectangles or circles) discussed. The convenient method to use for this (odd) shaped foundation is
Newmark's chart.
Individual Assignment to be submitted
1. A shallow foundation 25 m × 18 m carries a uniform pressure of 175 kN/m². Determine the
vertical stress increase at a point 12 m below the midpoint of one of the longer sides:
a. Using influence factors
b. By means of Newmark's chart
2. A line load of 150 kN/m acts 2 m behind the vertical face of an earth-retaining structure
4 m high. Calculate the total thrust due to the line load and plot the distribution of the
pressure on the structure.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8411005735397339, "perplexity": 3070.6346485036674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039476006.77/warc/CC-MAIN-20210420152755-20210420182755-00241.warc.gz"}
http://mathoverflow.net/questions/112618/leray-spectral-sequence?sort=newest | # Leray Spectral Sequence
Let $f:X\to Y$ be a smooth map between paracompact differential manifolds $X$ and $Y$. Let $U$ be an open and dense subset of $Y$. For any $y\in U$, let $f^{-1}(y)=F$ be a generic fiber, which is a submanifold of $X$.
Assume the singular fibers are $F/\Gamma_t$, where for each $t\in Y\setminus U$, $\Gamma_t$ is a finite subgroup (depending on $t$) of the automorphism group of $F$ acting properly discontinuously on $F$, so that $F/\Gamma_t$ is also a smooth manifold.
If $\Gamma_t$ is the identity for all $t$, and $f$ is a fibration, then there is a Leray spectral sequence relating the homology of $X$ to that of $F$ and $Y$. Is there some spectral sequence for the case when $\Gamma_t$ is not always the identity, and if so what? A reference for this would be appreciated too.
As Algori mentions, the answer is yes, and it's also the Leray spectral sequence. What reference are you using? – Ryan Budney Nov 16 '12 at 23:37
Ru -- the Leray spectral sequence exists for any map $f:X\to Y$ of arbitrary topological spaces and any sheaf $F$ on $X$ and its second term is $$E_2^{p,q}=H^p(Y,R^q f_*F)$$ where $R^q f_*F$ are the sheaves on $Y$ that are obtained by sheafifying the presheaves $U\mapsto H^q(f^{-1}(U),F)$. Here are some remarks that might help:
1. If $f$ is a locally trivial fibration and $F$ is constant then all $R^q f_*F$ are locally constant; if in addition $Y$ is simply-connected then the sheaves are constant and we can express $E_2$ in terms of the cohomology of $Y$ with constant coefficients.
2. It may happen that all fibers $f^{-1}(y),y\in Y$ are homeomorphic but some or all $R^q f_*F$ are non-constant; take e.g. $X=(\mathbb{R}\setminus \{ 0\})\sqcup \{ 0\}, Y=\mathbb{R},f$ the identity map.
3. Nevertheless, if $f:X\to X/G$ where $G$ is a connected Lie group that acts nicely on $X$ (say so that the quotient is Hausdorff) with finite stabilizers, and $F$ is a constant sheaf with stalk $\mathbb{Q}$ (or $\mathbb{R}$ or $\mathbb{C}$) then any sheaf $R^q f_*F$ is constant with stalk $H^q(G,\mathbb{Q})$ (resp., $H^q(G,\mathbb{R})$ and $H^q(G,\mathbb{C})$).
Two possible references (which means, to be honest, that there may be better references but that's where I first learned this from) are Godement, Topologie algébrique et théorie des faisceaux, the very end of chapter 4, and Griffiths-Harris, the very end of vol. 1
Thanks Algori, As you mentioned spectral sequence argument seems to work for cohomology - how about homology? – user13559 Nov 18 '12 at 20:48
Ru -- welcome. Re how about homology: it depends: for locally trivial fibrations everything works fine in a similar way; for more general maps the homological version exists but is quite a bit more complicated (the strategy basically consists in reducing everything to the cohomological case via Verdier duality). – algori Nov 18 '12 at 22:49
Thanks again. Do you know of any reference about homological case? – user13559 Nov 19 '12 at 18:34
Ru -- I've never seen it done in detail in the homological case but if I had to guess I would define, following Borel-Moore, Homology theory for locally compact spaces, Michigan Math. J. 7, 2, 1960, thm 3.8 and \S 5, $H_i(X,F)=\mathbb{H}_c^{-i}(X,DF)$ where $\mathbb{H}_c$ stands for compactly supported hypercohomology and $D$ for the Verdier dual, and then see how it goes. For a careful introduction to constructible sheaves and related things see e.g. Borel, Intersection cohomology. Also, there was an Asterisque volume called "Etale homology" by Deligne et al, which may also be relevant. – algori Nov 20 '12 at 2:12
Thanks Algori! Have a good day. – user13559 Nov 23 '12 at 0:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571852087974548, "perplexity": 236.41491821079646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://www.investopedia.com/articles/06/probabilitydistribution.asp | Almost regardless of your view about the predictability or efficiency of markets, you'll probably agree that asset returns are uncertain or risky. This is with rare exception. If we ignore the math that underlies probability distributions, we can see they are pictures that describe a particular view of uncertainty.
Uncertainty refers to randomness and is different from a lack of predictability, or market inefficiency. An emergent research view holds that financial markets are both uncertain and predictable. Also, markets can be efficient but also uncertain. In finance, we use probability distributions to draw pictures that illustrate our view of an asset return's sensitivity when we think the asset return can be considered a random variable. In this article, we'll go over a few of the most popular probability distributions and show you how to calculate them.
What Are They?
There are two ways of categorizing distributions: by whether they are discrete or continuous, and by whether they are probability density functions (PDF) or cumulative distributions.
Discrete refers to a random variable drawn from a finite set of possible outcomes. A six-sided die, for example, has six discrete outcomes. A continuous distribution refers to a random variable drawn from an infinite set. Examples of continuous random variables include speed, distance and some asset returns. A discrete random variable is illustrated typically with dots or dashes, while a continuous variable is illustrated with a solid line. Figure 1 shows discrete and continuous distributions for a normal distribution with mean (expected value) of 50 and standard deviation of 10:
Figure 1
The distribution is an attempt to chart uncertainty. In this case, an outcome of 50 is the most likely but only will happen about 4% of the time; an outcome of 40 is one standard deviation below the mean and it will occur just under 2.5% of the time.
The other distinction is between the probability density function and the cumulative distribution function.
The PDF is the probability that our random variable reaches a specific value (or, in the case of a continuous variable, falls within an interval). We show that by indicating the probability that a random variable 'X' will equal an actual value 'x':
P[X = x]
The cumulative distribution is the probability that random variable 'X' will be less than or equal to actual value 'x':
P[X <= x]
For example, if your height is a random variable with an expected value of 5'10" inches (your parents' average height), then the PDF question is, "What's the probability that you will reach a height of 5'4"?" The corresponding cumulative distribution function question is, "What's the probability you'll be shorter than 5'4"?"
Figure 1 showed two normal distributions. You can now see these are probability density function (PDF) plots. If we re-plot the exact same distribution as a cumulative distribution, we'll get the following:
Figure 2
The cumulative distribution must eventually reach 1.0 or 100% on the y-axis. If we raise the bar high enough, then at some point, virtually all outcomes will fall under that bar (we could say the distribution is typically asymptotic to 1.0).
Finance, as a social science, is not as clean as physical sciences. Gravity, for example, has an elegant formula that we can depend on, time and again. Financial asset returns, on the other hand cannot be replicated so consistently. A staggering amount of money has been lost over the years by clever people who confused the accurate distributions (i.e., as if derived from physical sciences) with the messy, unreliable approximations that try to depict financial returns. In finance, probability distributions are little more than crude pictorial representations.
Uniform
The simplest and most popular distribution is the uniform distribution in which all outcomes have an equal chance of occurring. A six-sided die has a uniform distribution. Each outcome has a probability of about 16.67% (1/6). Our plot below shows the solid line (so you can see it better), but keep in mind that this is a discrete distribution - you can't roll 2.5 or 2.11:
Figure 3
Now roll two dice together, as shown in Figure 4, and the distribution is no longer uniform. It peaks at seven, which happens to have a 16.67% chance. In this case, all the other outcomes are less likely:
Figure 4
Now roll three dice together, as shown in Figure 5. We start to see the effects of a most amazing theorem: the central limit theorem. The central limit theorem boldly promises that the sum or average of a series of independent variables will tend to become normally distributed, regardless of their own distribution. Our dice are individually uniform but combine them and - as we add more dice - almost magically their sum will tend toward the familiar normal distribution!
Figure 5
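A quick simulation makes the point concrete; this snippet is illustrative only (plain standard-library Python, not from the original article):

```python
import random
from collections import Counter

# Sum of three fair dice, simulated 10,000 times: the histogram of sums
# already shows the bell shape promised by the central limit theorem.
rolls = [sum(random.randint(1, 6) for _ in range(3)) for _ in range(10_000)]
for total, count in sorted(Counter(rolls).items()):
    print(f"{total:2d} {'#' * (count // 50)}")
```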
Binomial
The binomial distribution reflects a series of "either/or" trials, such as a series of coin tosses. These are called Bernoulli trials but you don't need even (50/50) odds. A Bernoulli trial refers to events that have only two outcomes. The binomial distribution below plots a series of 10 coin tosses where the probability of heads is 50% (p = 0.5). You can see in Figure 6 that the chance of flipping exactly five heads and five tails (order doesn't matter) is just shy of 25%:
Figure 6
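As a sanity check on that number (an illustrative snippet, not part of the original article):

```python
from math import comb

# P(exactly 5 heads in 10 tosses of a fair coin)
p = comb(10, 5) * 0.5**10
print(p)  # 0.24609375 -- "just shy of 25%"
```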
If the binomial distribution looks normal to you, you are correct about that. As the number of trials increase, the binomial tends toward the normal distribution.
Lognormal
The lognormal distribution is very important in finance because many of the most popular models assume that stock prices are distributed lognormally. It is easy to confuse asset returns with price levels:
Asset returns are often treated as normal - a stock can go up 10% or down 10%. Price levels are often treated as lognormal - a \$10 stock can go up to \$30 but it can't go down to -\$10. The lognormal distribution is non-negative and skewed to the right (again, a stock can't fall below zero but it has no theoretical upside limit):
Figure 7
Poisson
The Poisson distribution is used to describe the odds of a certain event (e.g., a daily portfolio loss below 5%) occurring over a time interval. So, in the example below, we assume that some operational process has an error rate of 3%. We further assume 100 random trials; the Poisson distribution describes the likelihood of getting a certain number of errors over some period of time, such as a single day.
Figure 8
Student's T
The student's T distribution is also very popular because it has a slightly "fatter tail" than the normal distribution. The student's T is used typically when our sample size is small (i.e. less than 30). In finance, the left tail represents the losses. Therefore, if the sample size is small, we risk underestimating the odds of a big loss. The fatter tail on the student's T will help us out here. Even so, it happens that this distribution's fat tail is often not fat enough. Financial returns tend to exhibit, on rare catastrophic occasions, really fat-tail losses (i.e. fatter than predicted by the distributions). Large sums of money have been lost making this point.
Figure 9
Beta Distribution
Finally, the beta distribution (not to be confused with the beta parameter in the capital asset pricing model) is popular with models that estimate the recovery rates on bond portfolios. The beta distribution is the utility player of distributions. Like the normal, it needs only two parameters (alpha and beta), but they can be combined for remarkable flexibility. Four possible beta distributions are illustrated in Figure 10 below:
Figure 10
The Bottom Line
Like so many shoes in our statistical shoe closet, we try to choose the best fit for the occasion, but we don't really know what the weather holds for us. We may choose a normal distribution then find out it underestimated left-tail losses; so we switch to a skewed distribution, only to find the data looks more "normal" in the next period. The elegant math underneath may seduce you into thinking these distributions reveal a deeper truth, but it is more likely that they are mere human artifacts. For example, all of the distributions we reviewed are quite smooth, but some asset returns jump discontinuously.
The normal distribution is omnipresent and elegant and it only requires two parameters (mean and standard deviation). Many other distributions converge toward the normal (e.g., binomial and Poisson). However, many situations, such as hedge fund returns, credit portfolios and severe loss events, don't deserve the normal distributions. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.828836977481842, "perplexity": 548.3631924662617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270577.0/warc/CC-MAIN-20140728011750-00096-ip-10-146-231-18.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/223897/if-b-is-a-continuous-bilinear-function-such-that-bh-k-o-lverth-k-rvert | # If $B$ is a continuous bilinear function such that $B(h,k) = o(\lVert(h,k)\rVert^2)$, then $B=0$.
Suppose that $B: H \times K \to F$ is a continuous bilinear function, where $H,K$ and $F$ are real normed spaces.
I have to prove (not as homework) that if $B(h,k) = o(\lVert(h,k)\rVert^2)$, then $B=0$.
Since $B$ is bilinear and continuous, we have that $\lVert B(h,k)\rVert \leq \lVert B\rVert \lVert h \rVert \lVert k\rVert \leq \lVert B\rVert \lVert(h,k)\rVert^2$, where $\lVert B\rVert$ is the operator norm.
Hence we have $$0=\lim_{(h,k)\rightarrow 0} \frac{\lVert B(h,k)\rVert}{\lVert (h,k) \rVert^2} \leq \lim_{(h,k)\rightarrow 0} \frac{\lVert B\rVert \lVert(h,k)\rVert^2}{\lVert (h,k) \rVert^2}= \lVert B\rVert.$$
If there is some way for me to get that the last limit is also $0$, I have what has to be proven. But I don't see a way to this. Could anyone provide me with a tiny hint? (No full answers please)
Could you please edit in some examples of the kind of $B$ you are talking about? I am unable to combine things in the way you indicate. Mostly I have no clue what $\parallel B \parallel$ should mean here. Anyway, please type in a few actual $B(h,k),$ which would appear to be a function taking real values. Plus, if this is what I think, the word continuous is superfluous. – Will Jagy Oct 30 '12 at 0:29
@WillJagy, Is this ok? – sxd Oct 30 '12 at 0:36
Let $f, g$ be arbitrary and take $\epsilon > 0$ some parameter. Then look at the behaviour $\epsilon^{-2} B(\epsilon f, \epsilon g)$ as $\epsilon \to 0$.
This is a hint how you can prove your proposition, i.e. that $B = 0$ if $B$ is bilinear and satisfies $B(h,k) = o(\| (h,k) \|^2 )$. I'm afraid your attempt just shows $\| B \| \geq 0$ which you know anyway. To get $\| B \| = 0$ try what I suggested. – DanielM Oct 31 '12 at 20:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9289665818214417, "perplexity": 168.03415152436858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737904854.54/warc/CC-MAIN-20151001221824-00044-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://askdev.io/questions/995543/dealing-with-tychonoffs-theorem | # Dealing with Tychonoff's Theorem.
Here are my few questions that I encountered while going through Tychonoff's theorem.
a) First of all, so far I was thinking that the Heine-Borel definition of compactness implies sequential compactness but not the other way around (although I am failing to find some examples to appreciate it). But what Wikipedia says is that NEITHER implies the other in a general topological space. What am I missing here?
b) It is easy to see that a finite product (a countable product is not true, right?) of sequentially compact spaces is sequentially compact, which we can see using a diagonalization argument. And it discusses embedding X (a completely regular Hausdorff space) into $[0,1]^{C(X,[0,1])}$ (what does $[0,1]^{C(X,[0,1])}$ mean? I am not able to make any sense of it), where $C(X,[0,1])$ is the set of continuous maps from $X$ to $[0,1]$. I would appreciate your help.
Thanks!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8260747194290161, "perplexity": 218.31769397212085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00197.warc.gz"}
https://www.kidbrooke.com/knowledge-base/part-ii-portfolio-construction-sampling-optimisation/ | # Part II - Portfolio Construction - Sampling & Optimisation
The main findings of the first part of the "Portfolio Construction" series suggest that parameter uncertainty has a significant impact on the optimal portfolio allocations. Therefore, a Bayesian sampling method is proposed to introduce the parameter uncertainty into the model. This section presents and compares the results from the Bayesian simulation model to those obtained using a regular multivariate normal model.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8791841864585876, "perplexity": 1030.004367781013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662625600.87/warc/CC-MAIN-20220526193923-20220526223923-00381.warc.gz"}
https://gitlab.lrde.epita.fr/spot/spot/-/commit/54b25b8c8ec03c0b9210cd57cd54410abf1423b6 | Commit 54b25b8c by Alexandre Duret-Lutz
### ltlcross: more documentation
* doc/org/ltlcross.org: Describe statistics, and mention --products=N.
parent 9b82d755
... ... @@ -31,7 +31,8 @@
The core of =ltlcross= is a loop that does the following steps: If there are 3 translators, the positive and negative translations will be denoted =P0=, =N0=, =P1=, =N1=, =P2=, =N2=.
- Build the products of these automata with a random state-space (the same state-space for all translations). state-space for all translations). (If the =--products=N= option is given, =N= products are performed instead.)
- Perform sanity checks between all these automata to detect any problem.
- Gather statistics if requested.
... ... @@ -109,6 +110,8 @@
Detailed statistics about the result of each translation, and the product of that resulting automaton with the random state-space, can be obtained using the =--csv=FILE= or =--json=FILE= option.
** CSV or JSON output (or both!)
The following compare =ltl2tgba=, =spin=, and =lbt= on three random formula (where =W= and =M= operators have been rewritten away because they are not supported by =spin= and =lbt=).
... ... @@ -179,9 +182,9 @@
This can be loaded in any spreadsheet application. Although we only supplied 2 random generated formulas, the output contains 4 formulas because =ltlcross= had to translate the positive and negative version of each. If we had used the option =--json=results.json= instead of =--cvs=results.csv=, the file =results.json= would have contained the following [[http://www.json.org/][JSON]] output. If we had used the option =--json=results.json= instead of (or in addition to) =--cvs=results.csv=, the file =results.json= would have contained the following [[http://www.json.org/][JSON]] output.
#+BEGIN_SRC sh :results verbatim :exports results
cat results.json
... ... @@ -271,6 +274,112 @@
bogus automata are still included: as shown below =ltlcross= will report inconsistencies between automata as errors, but it does not try to guess who is incorrect.
** Description of the columns
=formula= and =tool= contain the formula translated and the command run to translate it. In the CSV, these columns contain the actual text. In the JSON output, these columns contain an index into the =formula= and =tool= table declared separately.
=states=, =edged=, =transitions=, =acc= are size measures for the automaton that was translated. =acc= counts the number of acceptance sets. When building (degeneralized) Büchi automata, it will always be =1=, so its value is meaningful only when evaluating translations to generalized Büchi automata. =edges= counts the actual number of edges in the graph supporting the automaton; an edge (labeled by a Boolean formula) might actually represent several transitions (each labeled by assignment of all atomic propositions). For instance in an automaton where the atomic propositions are $a$ and $b$, one edge labeled by $a\lor b$ actually represents three transitions $a b$, $a\bar b$, and $\bar a b$. The following picture displays two automata for the LTL formula =a U b=. They both have 2 states and 3 edges, however they differ in the number of transitions (7 versus 8), because the initial self-loop is more constrained in the first automaton. A smaller number of transitions is therefore an indication of a more constrained automaton.
#+BEGIN_SRC dot :file edges.png :cmdline -Tpng :exports results
digraph G { 0 [label="", style=invis, height=0] 0 -> 1 1 [label="A1"] 1 -> 2 [label="b\n"] 1 -> 1 [label="a & !b\n"] 2 [label="B1", peripheries=2] 2 -> 2 [label="1"] 3 [label="", style=invis, height=0] 3 -> 4 4 [label="A2"] 4 -> 5 [label="b\n"] 4 -> 4 [label="a\n"] 5 [label="B2", peripheries=2] 5 -> 5 [label="1"] }
#+END_SRC
#+RESULTS: [[file:edges.png]]
=scc= counts the number of strongly-connected components in the automaton. These SCCs are also partitioned on four sets based on their strengths:
- =nonacc_scc= for non-accepting SCCs (such as states A1 and A2 in the previous picture)
- =terminal_scc= for SCCs that consist of a single state with an accepting self-loop labeled by true (such as states B1 and B2 in the previous picture)
- =weak_scc= for non-terminal SCCs in which all cycles are accepting
- and =strong_scc= for accepting SCCs in which some cycles are not accepting.
These SCC strengths can be used to compute the strength of the automaton as a whole:
- an automaton is terminal if it contains only non-accepting or terminal SCCs,
- an automaton is weak if it contains only non-accepting, terminal, or weak SCCs,
- an automaton is strong if it contains at least one strong SCC.
This classification is used to fill the =terminal_aut=, =weak_aut=, =strong_aut= columns with Boolean values. Only one of these should contain =1=. We usually prefer terminal automata over weak automata, and weak automata over strong automata, because the emptiness check of terminal (and weak) automata is easier.
=nondetstates= counts the number of non-deterministic states in the automaton. =nondeterministic= is a Boolean value indicating if the automaton is not deterministic. For instance in the previous picture showing two automata for =a U b=, the first automaton is deterministic (these two fields will contain 0), while the second automaton contains a nondeterministic state (state A2 has two possible successors for the assignment $ab$) and is therefore not deterministic.
=time= obviously contains the time used by the translation. Time is measured with some high-resolution clock when available (that's nanosecond accuracy under Linux), but because translator commands are executed through a shell, it also includes the time to start a shell. (This extra cost applies identically to all translators, so it is not unfair.)
Finally, =product_states=, =product_transitions=, and =product_scc= count the number of states, transitions and strongly-connected components in the product that has been built between the translated automaton and a random model. For a given formula, the same random model is of course used against the automata translated by all tools. Comparing the size of these products might give another indication of the "conciseness" of a translated automaton. There is of course a certain "luck factor" in the size of the product. Maybe some translator built a very dumb automaton, with many useless states, in which just a very tiny part is translated concisely. By luck, the random model generated might synchronize with this tiny part only, and ignore the part with all the useless states. A way to lessen this luck factor is to increase the number of products performed against the translated automaton. If option =--products=N= is used, =N= products are built instead of one, and the fields =product_states=, =product_transitions=, and =product_scc= contain average values.
* Detecting problems
If a translator exits with a non-zero status code, or fails to output
... ... @@ -318,6 +427,11 @@
positive and negative formulas by the ith translator).
: error: {P0,P2,P3,P4,P5,P6,P7,P8,P9} disagree with {P1} when evaluating the state-space
If =--products=N= is used with =N= greater than one, the number of the state-space is also printed. This number is of no use by itself, except to explain why you may get multiple disagreement between the same sets of automata.
- Consistency check: For each $i$, the products $P_i\otimes S$ and $N_i\otimes S$
... ... @@ -329,7 +443,11 @@
positive and negative formulas by the ith translator).
: error: inconsistency between P1 and N1
The above checks are the same that are performed by [[http://www.tcs.hut.fi/Software/lbtt/][LBTT]]. If =--products=N= is used with =N= greater than one, the number of the state-space in which the inconsistency was detected is also printed. The above checks are similar to those that are performed by [[http://www.tcs.hut.fi/Software/lbtt/][LBTT]]. If any problem was reported during the translation of one of the formulas, =ltlcheck= will exit with an exit status of =1=.
Statistics ... ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8145705461502075, "perplexity": 2820.0941591735814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363149.85/warc/CC-MAIN-20211205065810-20211205095810-00354.warc.gz"}
http://imi.cas.sc.edu/events/720/ | IMI Interdisciplinary Mathematics InstituteCollege of Arts and Sciences
## Optimal sampling in weighted least-squares methods- Application to high-dimensional approximation
• Nov. 21, 2017
• 4:15 p.m.
• LeConte 312
## Abstract
Least squares methods are of common use when one needs to approximate a function, based on its noiseless or noisy observation at n scattered points, by a simpler function chosen in an m-dimensional space with m less than n. Depending on the context, these points may be randomly drawn according to some distribution, or deterministically selected by the user.
This talk will survey some recent results on the stability and approximation properties of weighted least squares methods, in relation with the spatial distribution of the sampling. One of our main findings is that an optimal random sampling strategy can be defined by means of the Christoffel function associated with the space where the approximation is searched. Here, optimal means that near-best approximation properties are achieved at the price of a number of samples n larger than m by only a logarithmic factor.
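For orientation, a rough sketch of the sampling rule in question, following the Cohen-Migliorati line of work (the normalization here is indicative, not quoted from the abstract): if L_1, ..., L_m is an orthonormal basis of the approximation space in L²(μ), one draws the n points from the re-weighted measure and compensates in the least-squares functional with the weight w,

$$d\tilde\mu(y) = \frac{1}{m}\sum_{j=1}^{m}|L_j(y)|^2\,d\mu(y), \qquad w(y) = \Big(\frac{1}{m}\sum_{j=1}^{m}|L_j(y)|^2\Big)^{-1}.$$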
One principal motivation is the application to high-dimensional approximation problems, such as those coming from PDEs with random input parameters. Motivated by this setting, we also discuss how the optimal sampling can be practically generated and incrementally updated when the approximation spaces are adaptively selected.
This is a joint work with Giovanni Migliorati.
Brief Bio:
Albert Cohen earned his PhD in 1990 under the supervision of Yves Meyer. Since 1995 he has been a full professor at Laboratoire Jacques-Louis Lions, Sorbonne Université, Paris. He is the author of over 100 papers in journals and 3 books. He was an invited speaker at ICM 2002 and a plenary speaker at ICIAM 2006. He received the Vasil Popov, Jacques Herbrand, and Blaise Pascal prizes. He is currently a member of the Institut Universitaire de France. His interests include approximation theory, statistics, signal-image-data processing, and numerical analysis.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8388592600822449, "perplexity": 893.2381694518238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815318.53/warc/CC-MAIN-20180224033332-20180224053332-00204.warc.gz"}
https://euro-math-soc.eu/review/linear-canonical-transforms | # Linear Canonical Transforms
The Fourier transform rotates the time-frequency content of a signal over 90 degrees from the time axis to the frequency axis. Already in the 1970s it was observed that certain optical systems rotated the signal over an arbitrary angle, which became known as the fractional Fourier transform since it acts like a fractional (i.e., a real) power of the Fourier operator. The fractional Fourier transform thus depends on this rotation angle and forms a one-parameter family of transforms.
Around the same time, quantum physicists realized that certain systems could transform the position and momentum as a vector into their new values leaving quantum mechanics invariant. All such transforms were obtained by multiplying the position-momentum vector with a unimodular matrix. In the scalar case such a matrix depends on 3 free parameters. In a general N-dimensional space we have 2 by 2 block matrices that form the real symplectic group Sp(2N). These transforms are the essence of the family of linear canonical transforms (LCT).
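For orientation, one common convention for the one-dimensional LCT (conventions differ between authors, so this is only indicative): with parameters (a, b; c, d) satisfying ad - bc = 1 and b ≠ 0,

$$\big(L_{(a,b,c,d)}f\big)(x) = \frac{1}{\sqrt{ib}}\int_{-\infty}^{\infty}\exp\!\Big(\frac{i\pi\,(a\,t^{2}-2\,t\,x+d\,x^{2})}{b}\Big)\,f(t)\,dt.$$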
When in 2D optics the vector of position and momentum of the optical ray is considered, the same kind of transformation can be used, and the fractional Fourier transform, like many other (fractional) transforms, can be seen as a special case of the LCT. The reader less familiar with the subject should be warned that these fractional transforms are not directly related to the equally important and equally flourishing domain known as fractional calculus, which studies fractional derivatives and fractional integrals.
Since the early days many papers appeared on all kinds of fractional transforms, and even several books, among which a basic one on The Fractional Fourier Transform by Haldun M. Ozaktas, Zeev Zalevsky, and M. Alper Kutay in 2001. There the emphasis was on definitions, mathematical properties and computation and their applications in optics and signal processing. The LCT is already there but it is not the main focus. Two of the authors of that book are now also editor of the present one. One could think of it as the LCT analog of their fractional Fourier transform book, but less extensive, and it is not a monograph. Several experts are contributing to the present book. It gives an up-to-date overview of the many aspects of the LCT. As it appears as a volume of the Springer Series in Optical Sciences, there is an understandable bias towards the optical viewpoint and applications, with less emphasis on the quantum physics.
The fifteen chapters are subdivided into three parts: (1) Fundamentals, (2) Discretization and computation, (3) Applications. All the aspects are covered: operator theory, theoretical physics, analysis, and group theory in the early chapters, discrete approximations and digital implementation in the second part, and it ends with some applications in the last part.
In the first part one gets the basics in some 100 pages: some history and of course the definition and properties, the kernels when written as an integral transform, all the types and special cases, and the effects of the transform in phase space. The eigenmodes are as important here as the Gauss-Hermite eigenfunctions are for the Fourier transform, so there is a separate chapter dealing with the eigenfunctions. Also uncertainty principles play an important role when it comes to sampling theory for these transforms. The first part is completed with two chapters covering extensively the optical aspects of the LCT.
The second part deals with the computational aspects. While the fractional Fourier transform resulted in a rotation in the time-frequency plane, the LCT will result in oblique transforms. It is then important to obtain some analogs of bandwidth, sampling theorems, and degrees of freedom in the signal when one wants to come to an implementation of a fast discrete LCT transform. Analyzing these effects is related to a decomposition of the LCT in a sequence of elementary transforms, which boils down to a sequence of chirp multiplications and Fourier transforms. Several possibilities are proposed to come to a fast digital LCT implementation, although no software is provided. The approach taken is from a signal processing viewpoint. One chapter gives an alternative that is based on optical interpretation. That alternative approach relies on coherent self-imaging, known as the Talbot effect, which describes Fresnel diffraction of a strictly periodic wavefront, Fresnel diffraction being itself a special case of the LCT. Therefore various generalizations of self-imaging in the wider LCT context can also lead to a practical implementation of a discrete LCT.
The application part illustrates how LCT can be used to solve certain problems like deterministic phase retrieval, analyzing holographic systems, double random phase encoding, speckle metrology and quantum states of light.
This book is a most welcome addition to the literature. The subjects discussed appear in quite diverse contexts and it is therefore difficult to get the same overview as it is presented here. The general approach is the same as was used in the fractional Fourier book that I mentioned above, but it is less thorough. Moreover, since the different chapters are written by different authors, and because they highlight different aspects, the notation is not always strictly uniform, but that should not hinder too much. The part on the algorithmic implementation is rather detailed but no software is provided, so it is a challenge for computer scientists to design an optimal implementation so that it can become standard software in signal processing packages.
Reviewer:
Book details
The linear canonical transform is a phase space transform with roots in optics and quantum mechanics. This is a collection of survey papers written by renowned specialists that give a state-of-the-art of the many aspects of the linear canonical transform with an emphasis on the optical interpretation and applications. The quantum mechanical aspects are less present. It covers the basic mathematical and optical aspects, the possible implementation of a fast discrete algorithm, and several of the possible optical and signal processing applications.
Author: Publisher:
Published:
2015
ISBN:
978-1-4939-3027-2 (hbk)
Price:
137,79 € (hbk)
Pages:
264
Categorisation | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8644567131996155, "perplexity": 491.61482377913273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145534.11/warc/CC-MAIN-20200221172509-20200221202509-00516.warc.gz"} |
https://sbainvent.com/fluid-mechanics/compressibility-of-a-fluid/ | # Bulk Modulus
When analyzing a fluid it is important to consider how easily the fluid's volume will change due to a pressure change. The reason is that as the fluid's volume changes, its density will change. To determine the compressibility of a fluid you will need to determine the bulk modulus. The bulk modulus is found by taking the ratio of the change in pressure to the relative change in volume.
(Eq 1) $E_v=-\frac{dP}{dV/V}$
Also, since the density of a fluid will change if its volume changes, the density differential can also be used to determine the bulk modulus. The minus sign drops in this form because a pressure increase that decreases the volume increases the density (dρ/ρ = -dV/V).
(Eq 2) $E_v=\frac{dP}{dρ/ρ}$
The resulting dimensions of the bulk modulus will be $FL^{-2}$. For BG units this will be $lb/in^2$ (psi). While for SI units it will be $N/m^2$ (Pa). Notice that these are the same units that are used for the modulus of elasticity for a solid.
When the bulk modulus has a large value you can normally consider the fluid to be incompressible. This is because it will take a large pressure differential to cause a relatively small change in fluid volume or density. Liquids are almost always considered incompressible due to their large bulk modulus. The table below shows the bulk modulus of some select liquids. From this table you can see that a large pressure change will be required to create a volume or density change in these liquids.
| Liquid | Temperature (°F) | Temperature (°C) | Bulk Modulus (psi) | Bulk Modulus (Pa) |
| --- | --- | --- | --- | --- |
| Ethyl Alcohol | 68 | 20 | 1.54e5 | 1.06e9 |
| Gasoline | 60 | 15.6 | 1.60e5 | 1.30e9 |
| Mercury | 68 | 20 | 4.14e6 | 2.85e10 |
| SAE 30 oil | 60 | 15.6 | 2.20e5 | 1.50e9 |
| Sea Water | 60 | 15.6 | 3.39e5 | 2.34e9 |
| Water | 60 | 15.6 | 3.12e5 | 2.15e9 |
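As a rough worked example based on the water row of this table (illustrative arithmetic only): compressing water by 1% of its volume requires

$$dP \approx E_v\left(-\frac{dV}{V}\right) = (2.15\times10^{9}~\text{Pa})(0.01) = 2.15\times10^{7}~\text{Pa} \approx 3120~\text{psi}$$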
### Compression and Expansion of a Gas
The relationship between a gas’s pressure and density when the gas expands or contracts is dependent on the type of process.
##### Isothermal Process
The first process that i’m going to talk about is the isothermal process. For this process to take place the temperature must remain constant. This will result in the following pressure and density relationship.
(Eq 3) $\frac{P}{ρ}=constant$
##### Isentropic Process
The next process that I would like to mention is the isentropic process. For the isentropic process to occur heat cannot be exchanged with the surroundings. The pressure and density relationship would be almost the same as the isothermal process, except you will need to consider the ratio of the specific heat at constant pressure $c_p$ and the specific heat at constant volume $c_V$.
(Eq 4) $k=\frac{c_p}{c_V}$
Note the two specific heat values can be used to find the ideal gas constant of the gas.
(Eq 5) $R= c_p-c_V$
Finally, once the ratio between the two specific heat values is determined you can now calculate the relationship between the pressure and density.
(Eq 6) $\frac{P}{ρ^k}=constant$
##### Bulk Modulus of a Gas
Unlike a liquid, a gas is considered compressible, since a relatively low pressure change will noticeably change the volume that a gas occupies. Due to this fact the bulk modulus will change directly with pressure. The bulk modulus can be found from Eq 2 as $E_v=ρ\frac{dP}{dρ}$.
The resulting bulk modulus for the isothermal process will be as following.
(Eq 7) $E_v=P$
The resulting bulk modulus for the isentropic process will be as follows.
(Eq 8) $E_v=kP$
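Both results follow from writing Eq 2 as $E_v=ρ\frac{dP}{dρ}$; a short derivation sketch:

$$P = cρ \;\Rightarrow\; E_v = ρ\frac{dP}{dρ} = ρ\,c = P, \qquad P = cρ^k \;\Rightarrow\; E_v = ρ\,c\,kρ^{k-1} = kP$$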
### Example
If a gas was being compressed from one state to another, determine the following equations. Write an equation that can be used to find the final pressure of the gas if it were being compressed isothermally. Write an equation that will determine the final pressure of the gas if it were compressed isentropically.
### Solution
Isothermal Process
$\frac{p_i}{ρ_i}=\frac{p_f}{ρ_f}$
$p_f = p_i\left(\frac{ρ_f}{ρ_i}\right)$
Isentropic Process
$\frac{p_i}{{ρ_i}^k}=\frac{p_f}{{ρ_f}^k}$
$p_f = p_i\left(\frac{ρ_f}{ρ_i}\right)^k$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9188816547393799, "perplexity": 686.834123480453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887319.41/warc/CC-MAIN-20200705090648-20200705120648-00263.warc.gz"} |
http://jiangtanghu.com/blog/2012/09/16/statistical-notes-4-dragonrsquos-teeth-and-fleas-hypothesis-testing-in-plain-english/ | # Statistical Notes (4): Dragon’s Teeth and Fleas: Hypothesis Testing in Plain English
I was asked in several different occasions to explain hypothesis testing to non-technical people in plain English. Now I think I got a pretty neat one while honors belong to three great Germans, Friedrich Engels, Karl Marx, and Heinrich Heine. Engels wrote that Marx once quoted a saying from Heine that “I have sown dragon’s teeth and harvested fleas” (but I didn’t find the original source):
In terms of hypothesis testing, say, if you plant dragon’s teeth, you are not supposed to harvest the fleas; but if you do get the fleas, then most likely they are not dragon’s teeth at all!
To make the story as simple as possible while keeping most of the key messages, here I take a one-sample T-test example from the SAS TTEST Procedure User Guide, where the investigated data is the Degree of Reading Power (DRP, a measurement of children's reading skill) from 44 third-grade children. The goal is to test if the mean score of DRP is equal to 30:
Null Hypothesis (H0): μ = 30
Alternative Hypothesis (H1): μ ≠ 30
For the following discussion, three statistical measures are needed (number of cases, mean, and standard error):
proc means data=read maxdec=4 n mean stderr;  * n, mean, and standard error, 4 decimals;
   var score;
   freq count;  * cell-count data: COUNT holds the frequency of each SCORE value;
run;
and the output gives N = 44, mean = 34.8636, and standard error = 1.6930.
Here we go:
1. Suppose H0 is true that the mean score is equal to 30 (population mean μ = 30, we assume they are dragon’s teeth!).
2. We know the sample mean is the best estimate of the population mean; here it is 34.8636.
3. This sample mean is derived from one sample (namely, the 44 children). We'd like to know whether this sample (with mean 34.8636) really comes from the population (with mean 30).
4. To find out, we could repeat the survey (or trial, or experiment): for example, take another 99 samples (each also with 44 children) and calculate their sample means (we would then have 100 different means from the 100 samples).
5. It can be shown that these sample means, after a standard transformation, follow a t-distribution with 43 degrees of freedom (df = 44 - 1): $t = \frac{\bar{x} - \mu}{s/\sqrt{n}}$
The denominator as a whole, $s/\sqrt{n}$, is the standard error.
6. Now we can locate our sample of interest (mean 34.8636) in this t-distribution. Its corresponding t-value is (34.8636 - 30) / 1.6930 = 2.8728.
7. In this distribution, only 0.63% (0.0063) of t-values lie either above 2.8728 or below -2.8728 (by the symmetry of the t-distribution). What does this mean? Our sample of interest (with mean 34.8636 and t-value 2.8728) is clearly not in the mainstream of the distribution. We can even consider it very "extreme," because only 0.63% of the elements in the whole distribution are at least as extreme as it is.
8. We call this number, 0.63% (0.0063), the P-value. According to Wikipedia:
the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.
Here the test statistic actually observed is 2.8728, the t-value from our sample of interest. And we get from this t-distribution that P{|t| > 2.8728} = 0.0063.
9. A question remains: how extreme is extreme? There is no absolute answer, but there is a convention. The threshold is the significance level α (usually 0.05). As a rule of thumb, if the p-value is less than the predefined significance level α, we consider the event associated with that p-value extreme.
10. Retell our story: first we assumed the population mean is 30, just as the null hypothesis claims (we assumed they are dragon's teeth), and under this assumption we built a t-distribution; we then found that our sample is an extreme case (it is a flea!). It is so extreme (0.0063, much less than 0.05) that we have good reason to think our assumption (the null hypothesis) is far from true (most likely they are not dragon's teeth at all!). We reject the null hypothesis and accept the alternative that the mean score is different from 30.
11. Formally, this follows a simple logic, modus tollens (taken from Glenn Walker and Jack Shostak, p. 18): if P then Q; not Q; therefore, not P.
In our story, P is the null hypothesis, while Q is "no extreme case should be observed": if H0 is true, such an extreme case should most likely not happen; now that we have observed the extreme case (a t-value of 2.8728 in a t-distribution with 43 degrees of freedom), it is reasonable to question the null hypothesis itself.
12. To end this analysis, we can take a look at the output of SAS PROC TTEST:
proc ttest data=read h0=30;  * H0= specifies the null hypothesis mean;
   var score;
   freq count;
run;
The same result: P-value 0.0063 < 0.05, so we reject H0.
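For readers without SAS, the walkthrough's numbers can be reproduced from the summary statistics alone. Here is a small Python sketch of mine (not from the original post); it assumes scipy is installed:

```python
from scipy import stats

# Summary statistics from PROC MEANS above (44 third-grade children)
n, mean, stderr = 44, 34.8636, 1.6930
mu0 = 30  # hypothesized population mean (H0)

t = (mean - mu0) / stderr            # step 6: t = 2.8728
p = 2 * stats.t.sf(abs(t), n - 1)    # steps 7-8: two-sided p-value

print(f"t = {t:.4f}, p = {p:.4f}")   # t = 2.8728, p = 0.0063
```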
# SAS Notes
1. In this example, the input data take the form of cell-count data rather than raw case-record data, so a FREQ statement is added in both PROC MEANS and PROC TTEST.
2. You can compute the P-value from the t-value in step 6 with SAS using the PROBT function:
data _null_;  * _NULL_ avoids creating an output data set;
   t = 2.8727938571;
   df = 43;
   tail = 2;
   P = tail*(1 - probt(t, df));  * PROBT returns the left-tail probability;
   put P=;
run;
According to the symmetry of the t-distribution, you multiply by 2 to get the two-sided probability (here P = 0.0063).
3. Historically, a so-called "critical value" was calculated to support the decision. Here is the calculation using the TINV function (see the Python cross-check after these notes):
data _null_;
   alpha = 0.05;
   tail = 2;
   df = 43;
   tCritic = tinv(1 - alpha/tail, df);  * TINV is the t quantile function;
   put tCritic=;
run;
We get a critical value of 2.0166921992.
The two tails beyond ±2.0167 are called "rejection zones." The rule is: if the observed t-value lies within a rejection zone, we reject the null hypothesis and accept the alternative. In step 6, the t-value we calculated is 2.8728, which is larger than this critical value, so we reject H0.
The critical value approach is equivalent to the p-value approach discussed above, just slightly old-fashioned. The p-value is intuitive and easily computed by software. In the days when computing power was not widely available, people looked up critical values in a t-distribution table.
Most statistical packages, including SAS, don't report the critical value (though it is still alive in statistics textbooks).
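As a cross-check on notes 2 and 3, here are the same two quantities in Python (again my sketch, assuming scipy); the numbers should match PROBT and TINV above:

```python
from scipy import stats

t, df, alpha = 2.8727938571, 43, 0.05

p_two_sided = 2 * stats.t.sf(t, df)           # matches PROBT: ~0.0063
t_critical = stats.t.ppf(1 - alpha / 2, df)   # matches TINV: ~2.0167

print(f"p = {p_two_sided:.4f}, critical value = {t_critical:.4f}")
```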
# References
_Common Statistical Methods for Clinical Research with SAS Examples_ by Glenn Walker and Jack Shostak
_Hypothesis Testing: A Primer_ (in Chinese)
Two-sided t-distribution calculator
One-sided t-distribution calculator
Online LaTeX Equation Editor
https://immunologyphd.hms.harvard.edu/browse/people?f%5B0%5D=sm_og_vocabulary%3Ataxonomy_term%3A121840&f%5B1%5D=sm_og_vocabulary%3Ataxonomy_term%3A121852&f%5B2%5D=sm_og_vocabulary%3Ataxonomy_term%3A121881&f%5B3%5D=sm_og_vocabulary%3Ataxonomy_term%3A121859&f%5B4%5D=sm_og_vocabulary%3Ataxonomy_term%3A121829 | # Jack L. Strominger
Higgins Professor of Biochemistry
The study of histocompatibility in man and in other vertebrates led to the understanding of the mechanisms of immune recognition and to the discovery of... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8771323561668396, "perplexity": 4434.130812557658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00104.warc.gz"} |
http://tex.stackexchange.com/questions/4152/how-do-i-prevent-widow-orphan-lines?answertab=active | How do I prevent widow/orphan lines?
How do I prevent a line from appearing by itself:
Orphan: at the bottom of the page, or
Widow: at the top of the page?
As Brent points out, you cannot always do this. The best you can do is to tell TeX that it's infinitely bad for these to appear:
\widowpenalty10000 % widows: a lone last line of a paragraph at the top of a page
\clubpenalty10000  % orphans ("club" lines): a lone first line at the bottom of a page
One thing to keep in mind is that when presented with multiple infinitely bad options, TeX just picks one of them, so you can still get widows or orphans.
I looked up somewhere else and they had \widowpenalty=10000. It didn't work then. Thanks! :) – Kit Oct 16 '10 at 1:30
@Kit (it's a late comment, but anyway) You don't need the = in the assignment, but it doesn't hurt. \widowpenalty=10000 and \widowpenalty 10000 are the same. – topskip Oct 4 '11 at 8:38
You can now use the nowidow package to make this task easier:
\usepackage[all]{nowidow}
Where do I find nowidow.sty? I don't seem to have it in my standard (Mac) TeX install... Thanks – Emit Taste Oct 25 at 9:03
The Memoir manual, in section 3.5 "Sloppybottom", discusses this in some detail, which I won't reproduce here.
Be prepared even to re-word in the most intractable cases.
Update:
I think the specific commands like \enlargethispage and \sloppybottom are exclusively for the memoir package, but here's a snippet extracted from the aforementioned that you may care to adjust (you can see the extensive comments in the original):
\clubpenalty=9996
\widowpenalty=9999
\brokenpenalty=4991
\predisplaypenalty=10000
\postdisplaypenalty=1549
\displaywidowpenalty=1602
Personally, I tend to avoid this TinXering with Plain TeX internals; although I don't know how to do it specifically for newlfm, I'd probably opt for adjusting the textheight on a case-by-case basis, as a final tidy-up before publishing.
I'm using newlfm. Is this applicable, too? – Kit Oct 16 '10 at 1:18
while \sloppybottom is indeed memoir-specific (\raggedbottom is the comparable "plain" command), \enlargethispage is defined in base latex, so should be usable with any document class. – barbara beeton Sep 27 '11 at 13:27
This FAQ answer gives some tips, including enlarging/reducing the (double-)page, setting the paragraph tighter, using \raggedbottom (for which, see also this FAQ answer which discusses putting some stretch in the \topskip). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9577876329421997, "perplexity": 2708.8040008701546}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345758566/warc/CC-MAIN-20131218054918-00038-ip-10-33-133-15.ec2.internal.warc.gz"} |
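Pulling the advice above together, here is a minimal preamble sketch; it is my consolidation of the answers, not taken from any single one, and the 10000 values are the "infinite" penalties discussed above:

```latex
\documentclass{article}

% Make widows and orphans as bad as possible so TeX avoids them when it can.
\widowpenalty=10000  % no lone lines at the top of a page
\clubpenalty=10000   % no lone lines at the bottom of a page

% Optional: also discourage page breaks right after a hyphenated line.
\brokenpenalty=10000

\begin{document}
% ... document body ...
\end{document}
```

Remember that with several infinitely bad options TeX still has to pick one, so re-wording or \enlargethispage may be needed in stubborn cases.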