http://math.stackexchange.com/questions/122080/grossone-the-arxiv-blogs-pick-of-the-day/122115

# GrossOne? The arXiv blog's pick of the day.
The arXiv blog having chosen GrossOne for its daily pick of today, I read the arXiv paper concerned, and posted a comment there. The arXiv blog used to be quite high profile as these things go.
Is there an existing well-known (but not to me) mathematical construction that does what Sergeyev proposes?
Looks like El Naschie all over again... – Rasmus Mar 19 '12 at 13:42
All the sources in the article are either El Naschie or Sergeyev. That's not a good (IMO) basis for a paper. A certain quote of Pauli, about student's research papers, comes to mind... To answer your question, I would say no. There is no mathematical construction that does what Sergeyev proposes. – mixedmath♦ Mar 19 '12 at 15:18
Thanks. I worried that I would get downvotes by association for posting this, but I wanted to check that it was bad, bad, bad, and to bring it to MathSE's attention if it was. The arXiv blog's daily choices have been getting steadily less interesting by my estimation. – Peter Morgan Mar 19 '12 at 16:54
Why are you always the subject of interest outside the numeral? Do you have something to say about grossone as a numeral? – user30444 May 2 '12 at 13:14
## 2 Answers
Copying over here what I posted there:
I haven't read the references the author cites, but I don't see a lot of merit in this paper. The final answer seems to be simply taking the formula for the area of the Sierpinski carpet (respectively Menger sponge) after n steps and replacing the symbol n by the GrossOne symbol.
Note that this is not assigning an actual area to the Sierpinski carpet! If we defined the Sierpinski carpet by a different procedure, we would get a different answer. For example, if we lumped together every two steps, we would get the answer that the area of the Sierpinski carpet after GrossOne steps is (8/9)^{2*GrossOne}. The point of measure theory (which assigns the Sierpinski carpet measure 0) or more generally Hausdorff measure (which tells us that the Sierpinski carpet has (log 8/log 3)-dimensional area equal to 1) is to assign areas to subsets of the plane no matter how they are defined.
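For concreteness: each step of the construction keeps 8 of the 9 subsquares, so after $n$ steps the remaining area is
$$A_n=\left(\tfrac{8}{9}\right)^{n},\qquad A_{2m}=\left(\tfrac{8}{9}\right)^{2m}=\left(\tfrac{64}{81}\right)^{m},$$
so the same set is reached through two step-counting schemes that disagree as soon as a symbol is substituted for the step count.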
I also strongly disagree that mathematicians do not have good tools to describe behavior of functions close to infinity. For this sort of simple exponential decay, the ordinary language of asymptotic expansions does excellently. (See, for example, de Bruijn's Asymptotic Methods in Analysis.) For more complicated decay, the theory of transseries is excellent (see Edgar, Transseries for Beginners, Real Anal. Exchange 35 (2010), also available at http://www.math.osu.edu/~edgar.2/preprints/trans_begin/beginners.pdf, for a good introduction). And, for these sorts of specific fractal examples, this is what Hausdorff dimension and Hausdorff measure were invented for! See any textbook on Fractal Geometry.
I would not expect this paper to appear in a quality journal.
I usually wouldn't write something like this in public. This is what I'd normally send to an editor who contacted me to referee something like this. (Also, I am not an expert on fractals or measure theory, so I am an unlikely choice of referee.) But I worry that the arXiv blog has as much visibility as all but the best journals, so it is a major problem if it singles out something like this. For example, your software tells me that 111 people are reading this right now. How many people are reading the Bulletin of the AMS right now?
May I ask what procedure the arXiv blog uses to vet the papers it recommends? I would hope that they are sent out to experts for quick opinions as to general merit (similar to the ones JAMS and other top math journals solicit before beginning the reviewing process).
Killing. Thanks. – Peter Morgan Mar 19 '12 at 16:34
This is a comment, but it is too long and complex to be added in the usual manner.
The proposed mathematical structure is not in the paper above, which references three other sources by the same author. I was able to find one online:
A new applied approach for executing computations with infinite and infinitesimal quantities (arXiv:1203.3132)
This paper apparently introduces the structure, but does not define it very formally. From the paper:
[The Infinite Unit Axiom] is added to axioms for real numbers (remind that we consider axioms in sense of Postulate 2). Thus, it is postulated that associative and commutative properties of multiplication and addition, distributive property of multiplication over addition, existence of inverse elements with respect to addition and multiplication hold for grossone as for finite numbers
leaving it to the reader to decide exactly what those are and how to extend them to include this new element. It seems that we start with a complete ordered Archimedean field and adjoin a new element to result in, at least, a field. (I intentionally omit completeness and order, q.v. below.)
The Infinite Unit Axiom has three parts:
Infinity. Any finite natural number n is less than grossone
This limits the order on the new structure. By the Archimedean property, any real x is less than grossone.
Identity. The following relations link grossone to identity elements 0 and 1 [six formulas: multiplication by 0, division by itself, and exponents work for grossone as for real numbers]
This supports the quote about grossone acting like finite numbers.
Divisibility. For any finite natural number $n$ sets $\mathbb{N}_{k,n}$, $1\le k\le n$, being the $n$th parts of the set, $\mathbb{N}$, of natural numbers have the same number of elements indicated by the numeral grossone/n where $$\mathbb{N}_{k,n}=\{k,k+n,k+2n,\ldots\},\ 1\le k\le n,\ \bigcup_{k=1}^n\mathbb{N}_{k,n}=\mathbb{N}.$$
This gives some kind of definition of what cardinalities look like in this system.
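For instance, with $n=2$ the axiom asserts that $\mathbb{N}_{1,2}=\{1,3,5,\ldots\}$ and $\mathbb{N}_{2,2}=\{2,4,6,\ldots\}$ each have grossone/2 elements (and, taking $n=1$, that $\mathbb{N}$ itself has grossone elements).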
Normally given a system like this I'd look for contradictions, but the definitions are too wishy-washy to nail that down easily. My assumption that the "axioms for real numbers" were the ordered field axioms plus completeness is inadequate, for example, since apparently exponentiation is included as well. Perhaps someone will find these notes useful, though, so I leave them here.
http://mathhelpforum.com/differential-geometry/126126-lebesgue-integral.html

# Thread:
1. ## Lebesgue Integral
Let $f \in L_p(X)$,1 $\le p < \infty$ and let $\epsilon > 0$. Show that there exists a set $E_\epsilon \subseteq X$ with $m(E_\epsilon) < \infty$ such that if $F \subseteq X$ and $F \cap E_\epsilon = \phi$,then $\int_F |f|^p dm < \epsilon^p$
I let $E_\epsilon = \{ x \in X : |f(x)| \ge \delta_\epsilon \}$,where $\delta_\epsilon > 0$ and $F = \{ x \in X : |f(x)| < \delta_\epsilon \}$.
$\int_F |f|^p dm < \int_F (\delta_\epsilon)^p dm = (\delta_\epsilon)^p m(F)$
I am not sure whether my construction of the sets are correct because I can not make any conclusion regarding $m(F)$.
Can anyone comment on this?
2. Originally Posted by problem
Let $f \in L_p(X)$,1 $\le p < \infty$ and let $\epsilon > 0$. Show that there exists a set $E_\epsilon \subseteq X$ with $m(E_\epsilon) < \infty$ such that if $F \subseteq X$ and $F \cap E_\epsilon = \phi$,then $\int_F |f|^p dm < \epsilon^p$
I let $E_\epsilon = \{ x \in X : |f(x)| \ge \delta_\epsilon \}$,where $\delta_\epsilon > 0$ and $F = \{ x \in X : |f(x)| < \delta_\epsilon \}$.
$\int_F |f|^p dm < \int_F (\delta_\epsilon)^p dm = (\delta_\epsilon)^p m(F)$
I am not sure whether my construction of the sets are correct because I can not make any conclusion regarding $m(F)$.
Can anyone comment on this?
I think you have to go right back to the definition of the Lebesgue integral to do this properly. For a positive function such as $|f|^p$, the usual way to define its integral (as described here, for example) is that it is the supremum of the integrals of nonnegative simple functions majorised by $|f|^p$. If the integral $\int_X|f|^pdm$ is finite (as it is if $f \in L_p(X)$) then each of these simple functions must have finite integral and therefore finite support. We can find a simple function s such that $s\leqslant|f|^p$ and $\int_Xs\,dm>\int_X|f|^pdm - \varepsilon^p$. Now take $E_\varepsilon$ to be the support of s. You should find that this does the required job.
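To spell out the last step (a sketch of the estimate): if $F \cap E_\varepsilon = \emptyset$ then $s$ vanishes on $F$, and since $0\leqslant s\leqslant |f|^p$ everywhere, $\int_F|f|^p\,dm = \int_F(|f|^p - s)\,dm \leqslant \int_X(|f|^p - s)\,dm = \int_X|f|^p\,dm - \int_Xs\,dm < \varepsilon^p$. Moreover $m(E_\varepsilon)<\infty$, because $s$ takes only finitely many values, so the set where $s>0$ has finite measure whenever $\int_X s\,dm<\infty$.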
http://mathhelpforum.com/differential-equations/173762-tricky-equation-first-order.html

# Thread:
1. ## A tricky equation of the first order
My teacher gave me a "bonus" assignment the other day, and I've been racking my brain trying to solve it. Here it is:
$xy'(ln(\frac{x}{y}) +1) = y(ln(\frac{x}{y}) - 1)$
So far I've figured that I should substitute $ln(\frac{x}{y})$ for $u(x)$, thus giving me $y = \frac{x}{e^{u(x)}}$, but from there I'm kind of lost. Any ideas?
2. Originally Posted by getsallad
My teacher gave me a "bonus" assignment the other day, and I've been racking my brain trying to solve it. Here it is:
$xy'(ln(\frac{x}{y}) +1) = y(ln(\frac{x}{y}) - 1)$
So far I've figured that I should substitute $ln(\frac{x}{y})$ for $u(x)$, thus giving me $y = \frac{x}{e^{u(x)}}$, but from there I'm kind of lost. Any ideas?
I haven't done anything with it, but note that if you divide both sides by x you have an equation in y/x...
-Dan
3. I'm sorry, my knowledge of differential equations in itself is quite shaky, but how would that help me? If I get what you're saying, is it that I get
$y'(ln(\frac{x}{y}) + 1) = \frac{x}{y}(ln(\frac{x}{y}) - 1)$?
If that's the case, are you suggesting I could substitute u(x) for $\frac{x}{y}$ instead?
4. The substitution $u = x/y$ does render the equation separable (the original DE is homogeneous). The problem is, the resulting integral in $u$ is horrendous. I suppose you could say, at that point, that you've "reduced the DE to quadratures", but if your professor is looking for a closed-form solution, that won't do.
The substitution $u=\ln(x/y)$ is much better. You have to translate the DE over to the $u$ domain:
$u'=\dfrac{y}{x}\cdot\dfrac{y-xy'}{y^{2}}=\dfrac{y-xy'}{xy}.$
Solve this for $y',$ and plug everything into the DE. A few things should cancel, leaving you with a much nicer separable equation.
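Spelling out that step (a sketch): from $u' = \frac{y-xy'}{xy}$ one gets $y' = y(\frac{1}{x} - u')$, and substituting into the DE, $xy(\frac{1}{x} - u')(u + 1) = y(u - 1)$ becomes $(1 - xu')(u+1) = u - 1$, i.e. $xu'(u+1) = 2$, which separates as $(u+1)\,du = \frac{2}{x}\,dx$.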
5. Very cute problem, by the way! Thanks for posting!
6. Thanks for the help. I'm trying it out right now, doing all the steps myself to see if I get it.
7. I just did the integral using Akbeet's method as well as using the homogeneous method and I'd rate them to be about the same level of difficulty.
For the "connoisseurs" out there, the homogeneous solution includes a nice little integral:
$\displaystyle \int \frac{ln(u)~du}{u}$ which I haven't seen done in a long time. (it's not particularly hard to do, just a nice little piece of work.)
-Dan
8. OK, so I've gotten somewhere. Now I'm stuck on the actual separable differential equation.
I have
$\frac{2}{u + 1} = xu'$
which leads me to
$\frac{2}{u + 1} du = x dx$
but that approach seems to give me a completely different solution from what Wolfram Alpha gives me when I feed it the equation. So I figure I must be doing something wrong, since Wolfram Alpha's solution also seems more in tune with the final solution of the problem.
10. OK, so I've hit one last roadblock and it's mighty frustrating. In the end I get
$\frac{u^2}{2} + u = 2ln(x) + c$
and Wolfram Alpha gives the solution
$\frac{1}{2}*ln^2(\frac{x}{y}) - ln(\frac{x}{y}) = 2ln(x) + c$
Which is basically the same thing with - instead of +. Should I show you step by step what I am doing? This is driving me nuts.
11. No, WolframAlpha is not giving you that solution, it's giving you this solution. Notice that the x and y in the WolframAlpha solution are flipped from what you have. Because the left-most logarithm is squared, the change in sign goes unnoticed. That, of course, doesn't happen with the second term.
I'm saying your solution is correct.
12. Incidentally, you could, if you wanted to, use the quadratic formula on log(x/y) and then solve for y. You get a multi-valued "function" then, but it is an explicit formula for y, which is nice.
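Spelled out (a sketch, starting from the solution above): with $u = ln(\frac{x}{y})$, the relation $\frac{u^2}{2} + u = 2ln(x) + c$ gives $u = -1 \pm \sqrt{1 + 4ln(x) + 2c}$, hence $y = \frac{x}{e^u} = x\,e^{1 \mp \sqrt{1 + 4ln(x) + 2c}}$ -- the multi-valued explicit formula mentioned above.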
13. Ah, can't believe I missed that. Thanks a million for the help on this one! I'll be sure to include the explicit formula for y just to be on the safe side. You've been most helpful! =)
14. You're very welcome! Have a good one!
http://mathoverflow.net/questions/72546/is-the-l-function-of-the-complex-cohomology-of-a-motive-equal-to-the-l-function

## Is the “L-function of the complex cohomology” of a motive equal to the L-function of its l-adic realization?
Let's say I have a motive in $\mathcal{M}_{num}(K)$ ($K$ a number field). For each prime $l$ there is a realization of this motive in terms of etale cohomology with coefficients in $\mathbb{Q}_l$. This has more structure than being a mere vector space: it is a representation of $Gal(K)$! This representation has an $L$-function. As I understand it, the $L$ function doesn't depend on $l$.
What I wonder is whether there is something special about etale cohomology with coefficients in an $l$-adic field, or whether every Weil cohomology has an $L$-function attached to it that would equal the $L$-function of any other realization.
The usual (singular) cohomology with the complex topology, is also a representation of $Gal(K)$ (factoring through the motivic Galois group, if this means anything to you). Is it true, then, that the $L$-function attached to this representation is the same as the $L$-function of the realizations via etale cohomology with coefficients in $\mathbb{Q}_l$?
How does one go about proving equalities between $L$-functions of different realizations of the same motive?
Although I'm not a number theorist, I'm a bit skeptical as to whether $Gal(K)$ acts on singular cohomology with $\mathbb{Z}$ or $\mathbb{Q}$-coefficients in an interesting way. Or do I misunderstand the set up? Perhaps the work of Denef, Loeser and Kapranov on motivic zeta functions, might be what you are looking for. – Donu Arapura Aug 10 2011 at 10:40
I think that the problem is that maps are going the wrong way: the (conjectural) motivic Galois group should map to $Gal(K)$. – Donu Arapura Aug 10 2011 at 10:59
How would you propose to construct such an $L$-function for, e.g., an elliptic curve? The Betti realization seems to be insensitive to moduli, while the usual $L$ function is very sensitive. – S. Carnahan♦ Aug 11 2011 at 3:55
## 1 Answer
As far as I can tell, the Galois action that you assert exists doesn't in fact exist.
First of all, to define the motivic Galois group, you have to choose a realization; it seems that you are choosing the Betti realization, so the motivic Galois group is a pro-algebraic group $G$ over $\mathbb Q$ which acts on $H^i(X(\mathbb C),\mathbb Q)$ for every smooth projective variety $X$ over $\mathbb K$.
If we consider the $\mathbb Q_{\ell}$ points of $G$, then we get a map $\rho_{\ell}:Gal(\overline{K}/K) \to G(\mathbb Q_{\ell})$, corresponding precisely to the Galois action on $\ell$-adic cohomology (and using the canonical isomorphism of $\ell$-adic cohomology with $H^i(X(\mathbb C),\mathbb Q_{\ell})$). This just encodes Galois actions on $\ell$-adic cohomology, by its construction.
There is also a map of pro-algebraic groups $G \to Gal(\overline{K}/K)$, given by the action of $G$ on $H^0$s. (This is the map that Donu refers to in his comment.) The maps $\rho_{\ell}$ are sections to this map (after passing to $\mathbb Q_{\ell}$-valued points), again by construction.
There is nothing in this formalism (and it is just formalism) which suggests a natural action of the Galois group on $H^i(X(\mathbb C),\mathbb Q)$ for $i > 0$.
As for proving the equality of $L$-functions for different realizations (say different $\ell$-adic realizations), this is a very difficult problem which remains open in general, as far as I know. (If one wants to compare the $L$-functions built from the $\ell$-adic and $\ell'$-adic realizations, then the equality of Euler factors at primes of good reduction different from $\ell$ and $\ell'$ is due to Deligne as part of his proof of Weil's Riemann hypothesis. If $\ell$ or $\ell'$ is a prime of good reduction, then the equality of Euler factors at $\ell$ or $\ell'$ is known but is much more involved (since even defining the Euler factors in this case is much harder). At primes of bad reduction, my understanding is that equality is not know in general.)
So as things stand, there is no way to define the L-function of a motive without the notion of l-adic cohomology? – James D. Taylor Aug 11 2011 at 16:20
Well, you could try to directly "count points" on the reduction mod $p$, but you have the problem of non-uniqueness of models, especially at primes of bad reduction, and this is related to the whole difficulty of working at those points. – Emerton Aug 11 2011 at 18:15
http://katlas.math.toronto.edu/wiki/The_Jones_Polynomial

# The Jones Polynomial
### From Knot Atlas
(For In[1] see Setup)
`In[2]:=` `?Jones`
Jones[L][q] computes the Jones polynomial of a knot or link L as a function of the variable q.
In Naming and Enumeration we checked that the knots 6_1 and 9_46 have the same Alexander polynomial. Their Jones polynomials are different, though:
`In[3]:=` `Jones[Knot[6, 1]][q]`
`Out[3]=` `2 + q^(-4) - q^(-3) + q^(-2) - 2/q - q + q^2`
`In[4]:=` `Jones[Knot[9, 46]][q]`
`Out[4]=` `2 + q^(-6) - q^(-5) + q^(-4) - 2/q^3 + q^(-2) - 1/q`
On links with an even number of components the Jones polynomial is a function of $\sqrt{q}$, and hence it is often more convenient to view it as a function of t, where $t^2 = q$:
`In[5]:=` `Jones[Link[8, Alternating, 6]][q]`
`Out[5]=` `-q^(-9/2) + q^(-7/2) - 3/q^(5/2) + 3/q^(3/2) - 4/Sqrt[q] + 3 Sqrt[q] - 2 q^(3/2) + 2 q^(5/2) - q^(7/2)`
`In[6]:=` `PowerExpand[Jones[Link[8, Alternating, 6]][t^2]]`
`Out[6]=` `-t^(-9) + t^(-7) - 3/t^5 + 3/t^3 - 4/t + 3 t - 2 t^3 + 2 t^5 - t^7`
The Jones polynomial attains 2110 values on the 2226 knots and links known to `KnotTheory``:
`In[7]:=` `all = Join[AllKnots[], AllLinks[]];`
`In[8]:=` `Length /@ {Union[Jones[#][q]& /@ all], all}`
`Out[8]=` `{2110, 2226}`
#### How is the Jones polynomial computed?
(See also: The Kauffman Bracket using Haskell)
The Jones polynomial is so simple to compute using Mathematica that it's worthwhile to pause and see how this is done, even for readers with limited prior programming experience. First, recall (say from [Kauffman]) the definition of the Jones polynomial using the Kauffman bracket $\langle\cdot\rangle$:
[KBDef]
$\langle\emptyset\rangle=1; \qquad \langle\bigcirc L\rangle = (-A^2-B^2)\langle L\rangle; \qquad \langle\slashoverback\rangle = A\langle\hsmoothing\rangle + B\langle\smoothing\rangle;$
$J(L) = \left.(-A^3)^{w(L)}\frac{\langle L\rangle}{\langle\bigcirc\rangle}\right|_{A\to q^{1/4}},$
here A is a commutative variable, $B = A^{-1}$, and w(L) is the writhe of L, the difference $n_+ - n_-$ where $n_+$ and $n_-$ count the positive $(\overcrossing)$ and negative $(\undercrossing)$ crossings of L respectively.
PD[X[1,4,2,5], X[3,6,4,1], X[5,2,6,3]] and P[1,4] P[1,5] P[2,4] P[2,6] P[3,5] P[3,6]
Just for concreteness, let us start by fixing L to be the trefoil knot shown above:
`In[9]:=` `L = PD[Knot[3, 1]]`
`Out[9]=` `PD[X[1, 4, 2, 5], X[3, 6, 4, 1], X[5, 2, 6, 3]]`
Our first task is to perform the replacement $\langle\slashoverback\rangle\to A\langle\hsmoothing\rangle + B\langle\smoothing\rangle$ on all crossings of L. By our conventions (see Planar Diagrams) the edges around a crossing $X_{abcd}$ are labeled a, b, c and d: ${}^c_d\slashoverback{}_a^b$. Labeling the smoothings $(\hsmoothing, \ \smoothing)$ in the same way, ${}^c_d\hsmoothing{}_a^b$ and ${}^c_d\smoothing{}_a^b$, we are led to the symbolic replacement rule $X_{abcd}\to AP_{ad}P_{bc}+BP_{ab}P_{cd}$. Let us apply this rule to L, switch to a multiplicative notation and expand:
`In[10]:=` `t1 = L /. X[a_,b_,c_,d_] :> A P[a,d] P[b,c] + B P[a,b] P[c,d]`
`Out[10]=` ```PD[A P[1, 5] P[2, 4] + B P[1, 4] P[2, 5],
B P[1, 4] P[3, 6] + A P[1, 3] P[4, 6],
A P[2, 6] P[3, 5] + B P[2, 5] P[3, 6]]```
`In[11]:=` `t2 = Expand[Times @@ t1]`
`Out[11]=` ```A^2 B P[1, 4] P[1, 5] P[2, 4] P[2, 6] P[3, 5] P[3, 6] +
 A B^2 P[1, 4]^2 P[2, 5] P[2, 6] P[3, 5] P[3, 6] +
 A B^2 P[1, 4] P[1, 5] P[2, 4] P[2, 5] P[3, 6]^2 +
 B^3 P[1, 4]^2 P[2, 5]^2 P[3, 6]^2 +
 A^3 P[1, 3] P[1, 5] P[2, 4] P[2, 6] P[3, 5] P[4, 6] +
 A^2 B P[1, 3] P[1, 4] P[2, 5] P[2, 6] P[3, 5] P[4, 6] +
 A^2 B P[1, 3] P[1, 5] P[2, 4] P[2, 5] P[3, 6] P[4, 6] +
 A B^2 P[1, 3] P[1, 4] P[2, 5]^2 P[3, 6] P[4, 6]```
In the above expression the product P[1,4] P[1,5] P[2,4] P[2,6] P[3,5] P[3,6] represents a path in which 1 is connected to 4, 1 is connected to 5, 2 is connected to 4, etc. (see the right half of the figure above). We simplify such paths by repeatedly applying the rules $P_{ab}P_{bc}\to P_{ac}$ and $P^2_{ab}\to P_{aa}$:
`In[12]:=` `t3 = t2 //. {P[a_,b_]P[b_,c_] :> P[a,c], P[a_,b_]^2 :> P[a,a]}`
`Out[12]=` ```B^3 P[1, 1] P[2, 2] P[3, 3] + A B^2 P[2, 2] P[4, 4] + A^3 P[3, 3] P[4, 4] +
 A B^2 P[3, 3] P[4, 4] + 3 A^2 B P[5, 5] + A B^2 P[1, 1] P[5, 5]```
To complete the computation of the Kauffman bracket, all that remains is to replace closed cycles (paths of the form $P_{aa}$) by $-A^2-B^2$, to replace $B$ by $A^{-1}$, and to simplify:
`In[13]:=` `t4 = Expand[t3 /. P[a_,a_] -> -A^2-B^2 /. B -> 1/A]`
`Out[13]=` `-A^(-9) + 1/A + A^3 + A^7`
We could have, of course, combined the above four lines into a single very short program that computes the Kauffman bracket from beginning to end:
`In[14]:=` ```KB0[pd_] := Expand[
Expand[Times @@ pd /. X[a_,b_,c_,d_] :> A P[a,d] P[b,c] + 1/A P[a,b] P[c,d]]
//. {P[a_,b_]P[b_,c_] :> P[a,c], P[a_,b_]^2 :> P[a,a], P[a_,a_] -> -A^2-1/A^2}]```
`In[15]:=` `t4 = KB0[PD[Knot[3, 1]]]`
`Out[15]=` `-A^(-9) + 1/A + A^3 + A^7`
We will skip the uninteresting code for the computation of the writhe here; it is a linear time computation, and if that's all we ever wanted to compute, we wouldn't have bothered to purchase a computer. For our L the result is −3, and hence the Jones polynomial of L is given by
`In[16]:=` `(-A^3)^(-3) * t4 / (-A^2-1/A^2) /. A -> q^(1/4) // Simplify // Expand`
`Out[16]=` `-q^(-4) + q^(-3) + 1/q`
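For completeness, here is a sketch of how such a writhe computation might look from the PD code. This is only a sketch, not the package's actual routine: it assumes the edge-numbering convention used above, under which a crossing `X[a,b,c,d]` with `d == b+1` (mod the number of edge labels) counts as negative; with that assumption it reproduces the value −3 stated above for the trefoil.

```
Writhe0[pd_PD] := Module[{n = 2 Length[pd]},
  (* sum of crossing signs; X[a,b,c,d] with d == b+1 (mod n) is counted as a negative crossing *)
  Plus @@ (pd /. X[a_, b_, c_, d_] :> If[Mod[d - b, n] == 1, -1, 1])
]
```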
At merely 3 lines of code, our program is surely nice and elegant. But it is very slow:
`In[17]:=` `time0 = Timing[KB0[PD[Link[11, Alternating, 548]]]]`
`Out[17]=` `{1., A^(-23) + 5/A^15 + 10/A^7 + A^(-3) + 6 A + 6 A^5 + 5 A^13 - 5 A^17 + 4 A^21 - A^25}`
Here's the much faster alternative employed by `KnotTheory``:
`In[18]:=` ```KB1[pd_PD] := KB1[pd, {}, 1];
KB1[pd_PD, inside_, web_] := Module[
{pos = First[Ordering[Length[Complement[List @@ #, inside]]& /@ pd]]},
pd[[pos]] /. X[a_,b_,c_,d_] :> KB1[
Delete[pd, pos],
Union[inside, {a,b,c,d}],
Expand[web*(A P[a,d] P[b,c]+1/A P[a,b] P[c,d])] //. {
P[e_,f_]P[f_,g_] :> P[e,g], P[e_,_]^2 :> P[e,e], P[e_,e_] -> -A^2-1/A^2
}
]
];
KB1[PD[],_,web_] := Expand[web]```
`In[19]:=` `time1 = Timing[KB1[PD[Link[11, Alternating, 548]]]]`
`Out[19]=` `{0.015, A^(-23) + 5/A^15 + 10/A^7 + A^(-3) + 6 A + 6 A^5 + 5 A^13 - 5 A^17 + 4 A^21 - A^25}`
(So on L11a548 `KB1` is roughly 67 times faster than `KB0`, the ratio `Round[time0[[1]]/time1[[1]]]` of the two timings above.)
The idea here is to maintain a "computation front", a planar domain which starts empty and gradually increases until the whole link diagram is enclosed. Within the front, the rules defining the Kauffman bracket, Equation [KBDef], are applied and the result is expanded as much as possible. Outside of the front the link diagram remains untouched. At every step we choose a crossing outside the front with the most legs inside and "conquer" it -- apply the rules of [KBDef] and expand again. As our new outpost is maximally connected to our old territory, the length of the boundary is increased in a minimal way, and hence the size of the "web" within our front remains as small as possible and thus quick to manipulate.
In further detail, the routine `KB1[pd, inside, web]` computes the Kauffman bracket assuming the labels of the edges inside the front are in the variable `inside`, the already-computed inside of the front is in the variable `web` and the part of the link diagram yet untouched is `pd`. The single-argument `KB1[pd]` simply calls `KB1[pd, inside, web]` with an empty `inside` and with `web` set to 1. The three-argument `KB1[pd, inside, web]` finds the position of the crossing maximally connected to the front using the somewhat cryptic assignment
```
pos = First[Ordering[Length[Complement[List @@ #, inside]]& /@ pd]]
```
`KB1[pd, inside, web]` then recursively calls itself with that crossing removed from `pd`, with its legs added to `inside`, and with `web` updated in accordance with [KBDef]. Finally, when `pd` is empty, the output is simply the value of `web`.
[Kauffman] L. H. Kauffman, On knots, Princeton Univ. Press, Princeton, 1987.
http://www.physicsforums.com/showthread.php?t=318121

Physics Forums
4-momentum of a massive scalar field in terms of creation and annihilation operators
Hi,
I'm trying to compute
$$P^{\mu} = \int d^{3}x T^{0\mu}$$
where T is the stress energy tensor given by
$$T^{\mu\nu} = \frac{\partial\mathcal{L}}{\partial[\partial_{\mu}\phi]}\partial^{\nu}\phi - g^{\mu\nu}\mathcal{L}$$
for the scalar field $\phi$ with the Lagrangian density given by
$$\mathcal{L} = \frac{1}{2}\partial^{\mu}\phi\partial_{\mu}\phi - m^2\phi^2$$
This is what I get
$$T^{\mu 0} = g^{0\mu}\mathcal{H}$$
(using $\mathcal{H} = \Pi\dot{\phi} - \mathcal{L} = \partial^{0}\phi\partial_{0}\phi - \mathcal{L}$)
so
$$\int d^{3}x T^{0\mu} = g^{0\mu}H = \frac{1}{2}\int d^{3}p g^{0\mu}E_{p}[a(p)a^{\dagger}(p) + a^{\dagger}(p)a(p)]$$
Now, the problem is that if we have
$$p^{\mu} = (E_{p}, \vec{p})$$
then $E_{p} = p^{0}$, so
$$\int d^{3}x T^{0\mu} = \frac{1}{2}\int d^{3}p g^{0\mu}p^{0}[a(p)a^{\dagger}(p) + a^{\dagger}(p)a(p)]$$
Is there some mistake here, because the answer should involve $p^{\mu}$?
The correct answer is
$$\int d^{3}x T^{0\mu} = \frac{1}{2}\int d^{3}p p^{\mu}[a(p)a^{\dagger}(p) + a^{\dagger}(p)a(p)]$$
Maveric, I would like to draw your attention that from naive (common sense) considerations, the total energy of a system of several non-interacting particles is simply the sum of one-particle energies $$E(\mathbf{p})$$. In the creation-annihilation operator notation this means $$E = \int d^{3}p E(\mathbf{p}) a^{\dagger}(\mathbf{p})a(\mathbf{p})$$ Similarly, the total momentum is the sum of one-particle momenta $$\mathbf{P} = \int d^{3}p \mathbf{p} a^{\dagger}(\mathbf{p})a(\mathbf{p})$$ So, it seems that quantum field recipe outlined by you does not match exactly with common sense.
Quote by meopemuk Maveric, I would like to draw your attention that from naive (common sense) considerations, the total energy of a system of several non-interacting particles is simply the sum of one-particle energies $$E(\mathbf{p})$$. In the creation-annihilation operator notation this means $$E = \int d^{3}p E(\mathbf{p}) a^{\dagger}(\mathbf{p})a(\mathbf{p})$$
Thank you for your reply meopemuk, but I am not sure how you can justify this rigorously. The expression I wrote comes from a direct (explicit) computation of the Hamiltonian of the Klein Gordon field. After normal ordering, the expression takes the form you wrote.
But maybe you have a different point?
Also, this question is from a book, and the answer I wrote down is the one given in the back of that book. I think there's a problem with the index raising/lowering.
EDIT -- Okay, I think I get the point of your post. Yes, offhand that is what the expression should be intuitively. But it isn't -- except in the normally ordered sense. One reason I can think of is the way the field is constructed...the second term in the ground state Hamiltonian is a momentum delta function evaluated at its singularity. To "remove" it, we define a normally ordered Hamiltonian. Is there a deeper reason? I'm new to QFT btw, so I would appreciate if you could dwell on the point you're trying to make further.
PS -- Please also take a look at my original question...I'm still stuck with the index ordering :-P
The point I am making does not answer your original question, but (I hope) it is not irrelevant.
There are two ways to look at quantum field theory in general and at operators of observables in particular. One way is long and painful, and the other way is fast and easy.
The long and painful way is based on the idea of quantum field. Unfortunately, this way is presented in most QFT textbooks and fills many pages there. Roughly, it goes like this (in the case of a non-interacting field):
1. First we assume that there exists some (mysterious) substance called "field".
2. Then we postulate a certain Lagrangian and action for the field, and demand that this action must be minimized.
3. Then we apply Noether's theorem and derive field-based expressions for basic observables, such as total energy and momentum.
4. Then we derive a field equation (e.g., Klein-Gordon) by minimizing the action.
5. Then we solve this equation in the form of a "Fourier series".
6. Then we "quantize" this solution by converting coefficients to (creation-anihilation) operators with prescribed commutation relations.
7. Then we insert this solution for the "quantum field" in the formulas for the energy and momentum found in 3.
8. Then we see that obtained energy has a divergent term and artificially delete this term by normal ordering.
9. Finally, we arrive to the desired expressions
$$P = \int d^3p p a^{\dag}(p)a(p)$$.........(1)
$$E = \int d^3p E(p) a^{\dag}(p)a(p)$$..........(2)
The other (fast and easy) way is based on the idea of particles. As far as I know the only major textbook that uses this path is Weinberg's "The quantum theory of fields", vol. 1.
The idea is that world is made of particles. If particles do not interact, then the total momentum and energy of any N-particle system are simply
P = p_1 + p_2 + ... + p_N.........(3)
E = e_1 + e_2 + ... + e_N.........(4)
Then we notice that operators (1) and (2) have exactly forms (3) and (4), respectively, in any N-particle Hilbert space (N-particle sector of the Fock space). So, quite naturally, we choose (1) and (2) as our total momentum and total energy operators.
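As a quick check (using the normalization $$[a(\mathbf{p}),a^{\dagger}(\mathbf{p}')]=\delta^{3}(\mathbf{p}-\mathbf{p}')$$, which is an assumption about conventions rather than something fixed by the discussion above), acting with (2) on a two-particle state $$|\mathbf{p}_1\mathbf{p}_2\rangle = a^{\dagger}(\mathbf{p}_1)a^{\dagger}(\mathbf{p}_2)|0\rangle$$ gives $$\int d^3p\, E(\mathbf{p})\,a^{\dagger}(\mathbf{p})a(\mathbf{p})\,|\mathbf{p}_1\mathbf{p}_2\rangle = \bigl(E(\mathbf{p}_1)+E(\mathbf{p}_2)\bigr)|\mathbf{p}_1\mathbf{p}_2\rangle,$$ which is exactly (4); the same manipulation applied to (1) reproduces (3).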
As you can guess, I prefer the fast and easy way of working with QFT. So, I am not sure where you made a mistake in your algebra. You can compare your (long and painful) calculations with derivations of formulas (2.31) and (2.33) in M. E. Peskin and D. V. Schroeder "An introduction to quantum field theory".
Quote by meopemuk As you can guess, I prefer the fast and easy way of working with QFT. So, I am not sure where you made a mistake in your algebra. You can compare your (long and painful) calculations with derivations of formulas (2.31) and (2.33) in M. E. Peskin and D. V. Schroeder "An introduction to quantum field theory".
However, I think you're referring to something else. The question asks to use the expression for the (conserved) stress energy tensor to explicitly compute the momentum this way. I don't see how it is long or painful (even for me :-D), and the problem is merely in the index raising/lowering in the final step...I believe I have the correct expression till the point before the final step.
Note that Peskin and Schroeder have already dropped the delta function term after equation (2.31), so their calculation is "slightly" different. I am not using the normally ordered notation yet, nor am I dropping the term..if you will, I am using equation (2.31) as the definition of the Hamiltonian in my calculation, without expressing the integrand in terms of the commutator -- this means I am using the 5th equation in my original post.
Quote by maverick280857 Hi, $$T^{\mu 0} = g^{0\mu}\mathcal{H}$$ (using $\mathcal{H} = \Pi\dot{\phi} - \mathcal{L} = \partial^{0}\phi\partial_{0}\phi - \mathcal{L}$)
The 2nd equation in parenthesis is correct. The 1st is wrong.
Just forget about the Hamiltonian and use the formula you have of the stress tensor in terms of the Lagrangian.
Quote by RedX The 2nd equation in parenthesis is correct. The 1st is wrong. Just forget about the Hamiltonian and use the formula you have of the stress tensor in terms of the Lagrangian.
Thanks RedX. I'll check this out.
EDIT: This was my working:
$$T^{0\mu} = \partial^{0}\phi\partial^{\mu}\phi - g^{0\mu}(\partial^{0}\phi\partial_{0}\phi-\mathcal{H})$$
Is this correct?
yeah that's correct.
Quote by RedX yeah that's correct.
Ok, so now
$$T^{0\mu} = \partial^{0}\phi\partial^{\mu}\phi - g^{0\mu}(\partial^{0}\phi\partial_{0}\phi-\mathcal{H})$$
gives me
$$T^{0\mu} = \partial^{0}\phi\partial^{\mu}\phi - g^{0\mu}\partial^{0}\phi\partial_{0}\phi + g^{0\mu}\mathcal{H}$$
I get my mistake: I canceled the first two terms -- clearly a wrong thing to do, since there is no repeated index which would convert the second term into the first one. But I retained the Hamiltonian since I know how to write it in terms of $a(p)$ and $a^{\dagger}(p)$. I'll now try to solve it without expressing it in terms of the Hamiltonian.
Quote by maverick280857 I don't see how it is long or painful (even for me :-D), and the problem is merely in the index raising/lowering in the final step...
By "long and painful" I meant the full 9-step procedure of introducing quantum fields in QFT. My goal was to draw your attention to the alternative (particle-based) approach to QFT. Sorry for hijacking your thread.
Quote by maverick280857 Ok, so now $$T^{0\mu} = \partial^{0}\phi\partial^{\mu}\phi - g^{0\mu}\partial^{0}\phi\partial_{0}\phi + g^{0\mu}\mathcal{H}$$ I get my mistake: I canceled the first two terms -- clearly a wrong thing to do, since there is no repeated index which would convert the second term into the first one. But I retained the Hamiltonian since I know how to write it in terms of $a(p)$ and $a^{\dagger}(p)$. I'll now try to solve it without expressing it in terms of the Hamiltonian.
I don't know if it's too late and you calculated everything, but you can see from the expression that if mu is not equal to zero, then only the first term survives. So you really only need to calculate the first term.
Quote by RedX I don't know if it's too late and you calculated everything, but you can see from the expression that if mu is not equal to zero, then only the first term survives. So you really only need to calculate the first term.
Yeah, saw that. Thanks. (PS -- I'm in a different time zone..its 5 minutes past noon here, so its certainly not late )
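For reference, here is a sketch of how that first-term computation comes out (assuming the same mode expansion and normalization that give the expression for $H$ in the first post; the $aa$ and $a^{\dagger}a^{\dagger}$ cross terms drop out because their coefficients are odd under $\vec{p}\to-\vec{p}$):
$$P^{i} = \int d^{3}x\, T^{0i} = \int d^{3}x\, \partial^{0}\phi\,\partial^{i}\phi = \frac{1}{2}\int d^{3}p\; p^{i}\left[a(p)a^{\dagger}(p) + a^{\dagger}(p)a(p)\right],$$
which together with $P^{0} = H$ gives the quoted result
$$P^{\mu} = \frac{1}{2}\int d^{3}p\; p^{\mu}\left[a(p)a^{\dagger}(p) + a^{\dagger}(p)a(p)\right].$$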
Quote by meopemuk By "long and painful" I meant the full 9-step procedure of introducing quantum fields in QFT. My goal was to draw your attention to the alternative (particle-based) approach to QFT. Sorry for hijacking your thread.
Hi meopemuk, thanks for your insight...you certainly did not hijack the thread! The 9 step procedure you described put everything into perspective for me actually. As I said I am new to QFT, but I wanted to go through that 9 step process since I have not really done relativistic QM or classical field theory "formally". So I reckon before I can get to the shorter method (ala Weinberg) I need to get my hands dirty in all this algebra..I'm still fairly new to such manipulations and interpretations.
Thanks a ton for your help again, and hope I run into you more often on this forum...I'm still on Chapter 2 of most QFT books
http://mathoverflow.net/revisions/80469/list

## Return to Answer
2 added 4 characters in body
Suppose you have any optimization problem that is symmetric. A somewhat weaker question is: How much of the original symmetry carries over to the solutions? For the symmetric group $S_n$, if the degree $d$ of the polynomials that describe the problem is low (compared to the number of variables), the "degree and half principle" says that one always finds minimizers contained in the set of points invariant by a group $S_{l_1}\times\ldots\times S_{l_d}$ where $l_1+\ldots+l_d=n$.
1
Suppose you have any optimization problem that is symmetric. A somewhat weaker question is: How much of the original symmetry carries over to the solutions? For the symmetric group $S_n$, if the degree of the polynomials that describe the problem is low (compared to the number of variables), the "degree and half principle" says that one always finds minimizers contained in the set of points invariant by a group $S_{l_1}\times\ldots\times S_{l_m}$ where $l_1+\ldots+l_m=n$.
http://physics.stackexchange.com/questions/36113/was-the-universes-entropy-equal-to-zero-at-the-big-bang-is-zero-entropy-state?answertab=active

# Was the Universe's entropy equal to zero at the Big Bang? Is zero-entropy state unique?
It is postulated by many cosmologists that at the time of the Big Bang the universe was in an unusually low-entropy state.
Does this claim specifically mean that the entropy of the initial universe was zero?
Is zero-entropy state unique for given physical laws?
Is it possible that entropy was growing always so that only difference in entropy has physical meaning rather than absolute value? Was there ever negative entropy state?
Zero entropy means unique microstate with given macrostate. Entropy cannot be negative. And absolute value of entropy is meaningful. – C.R. Sep 11 '12 at 1:18
@Karsus Ren, I see but I conjectured that there can be fractional number of microstates, even the number below 1 because in quantum information theory it is possible to independently manipulate with information quantities below one bit. If a fraction of one bit is possible (which corresponds to the number of microstates between 1 and 2), why there cannot be even negative piece of information in some beyond-quantum theory? – Anixx Sep 11 '12 at 1:23
@Anixx less than one bit is also possible for a classical system. E.g. a 2-state classical system with $p_1 = 1/4$ and $p_2=3/4$ has an entropy of $-(\frac{1}{4}\log_2\frac{1}{4}+ \frac{3}{4}\log_2\frac{3}{4}) \approx 0.811$ bits -- but $<1$ is quite different from $<0$. – Nathaniel Sep 11 '12 at 8:19
@Nathaniel yes, indeed. What I meant is that in classical computing you need an analog computer for that but in quantum computing you can manipulate less-than-bit quantities in a digital manner (without loss). You are completely right though. – Anixx Sep 11 '12 at 9:00
@Nathaniel also I am not sure but is seems to me that to manipulate less-than-bit quantities on a classical analog computer your device should at least support 1 but register. So to store less-than-bit quantity you still need 1 bit of storage. – Anixx Sep 11 '12 at 9:03
## 2 Answers
Whether entropy was zero at the Big Bang or not is very much an open question of physics, in big part due to the fact that we do not yet have a good enough understanding of physics at high energies and high gravitational fields.
But for the zero entropy state this is a bit easier to answer and the answer does depend on laws of physics. Zero entropy state basically depends on how many completely distinguishable states the laws of physics allow. The universe is in a zero entropy state precisely when it is in a single state and it can be known which state it is in. In many situations there are infinitely many different zero entropy states. So the zero entropy state at the beginning of the universe is unique if and only if the laws of physics at that time require that there is a single state in which the universe can be found. Whether they do require that or not is a very big question in physics which everybody would like to know the answer to.
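In formulas, with probabilities $p_i$ over perfectly distinguishable states, the entropy is $S=-k_B\sum_i p_i\ln p_i \ge 0$, with equality exactly when a single $p_i=1$; this is the sense in which a zero-entropy state is one whose state can be known with certainty, and also why a negative value never arises in this framework.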
So based on currently known laws how a zero-entropy state should look like? Infinitely-dense pointlike piece of matter, cosmological horizon of zero radius, vacuum-like state at absolute zero temperature? – Anixx Sep 11 '12 at 17:27
Also is not Dirac delta function-like distribution of density corresponds actually to negatively infinite entropy rather than zero entropy? – Anixx Sep 11 '12 at 17:29
Quantum mechanically, a zero entropy state is a pure state $|\psi\rangle$, which can be perfectly distinguished out of a complete basis of states. Assuming quantum mechanics can be extended to universal scales (which due to the lack of quantum gravity we are not sure if and how this is done), then a state where every particle in the universe has a definite state, for example, is a zero-entropy state. But it may also be that particles are entangled to one another, which would imply that some particles have positive entropy, while the universe as a whole is still of zero entropy. – SMeznaric Sep 11 '12 at 17:39
It depends what you mean by big-bang. I consider the big-bang to begin with inflation, not with a singularity, so that the starting point is the inflating universe, making no hypotheses about what came before (if the question even makes sense).
The inflating initial starting state is for all intents and purposes, a perfect deSitter state which is adiabatically growing as the inflaton slides down the potential. At the end of inflation, when the inflaton starts shaking non-thermally, the state is no longer unique, but the semiclassical description of the initial state is by a thermal state inside a deSitter horizon.
The natural entropy to associate with this state is the area of the cosmological horizon in Planck units, and this entropy is far from zero. But it is infinitesimally small compared to the maximum entropy we could squeeze into the universe today, given that the cosmological horizon has grown so much, but past the end of inflation, the growth has been out-of-equilibrium.
So the entropy of the initial state of the universe is about the square of the de-Sitter radius at the end of inflation. I don't know a precise number, but suppose it's a deSitter temperature of $10^{14}$ GeV, that's about a million planck lengths of radius, so a dimensionless entropy of order $10^{12}$. Compare with $10^{135}$, which is the maximum entropy you can squeeze in the modern cosmological horizon, and you can see how low-entropy the initial state was, despite being in thermal equilibrium at the time.
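To make the arithmetic explicit (a rough sketch, taking the quoted radius of about $10^{6}$ Planck lengths at face value and using the Bekenstein-Hawking form): $$S \sim \frac{A}{4 l_P^2} = \frac{\pi R^2}{l_P^2} \sim \pi\,(10^{6})^{2} \sim 10^{12}.$$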
This explanation of the low-entropy initial conditions requires you to consider a single horizon-volume as all there is, and this is the holographic view of inflation promoted by Banks, Fischler, Shenker and Susskind. It was suggested to be the reason for the low-entropy initial condition by Davies in the early 1980s, but it is still not accepted by the astrophysical community, for reasons that I wouldn't be able to properly explain, because I think they are ridiculous.
I was interested to know about the really initial state not the state at the end of inflation. Was there zero-entropy state ever? – Anixx Sep 11 '12 at 8:00
@Anixx: Doesn't the question above need a metaphysics tag? If you have a thermal state, how can you ask about prior states reasonably? You can do it if you know the scalar potential, but this is a matter of speculation at present. – Ron Maimon Sep 11 '12 at 8:58
I am not asking what state exactly, I am asking about entropy. Was not entropy growing ever? – Anixx Sep 11 '12 at 9:04
Do you think zeo-entropy state actually possible with known physical laws? – Anixx Sep 11 '12 at 9:06
@Anixx: recall that entropy is a statistical concept --- so it's on you to define the macrostate that you want to consider. Ron punts on that question by starting at inflation, so assumes close to equilibrium conditions and there after uses standard thermodynamics (one might quibble about that assumption, but it seems sensible). – genneth Sep 11 '12 at 10:35
http://stats.stackexchange.com/questions/tagged/basic-concepts?sort=active&pagesize=15

# Tagged Questions
The basic-concepts tag has no wiki summary.
1answer
61 views
### Interpretation of mean in this example
I recently presented a national test and the company in charge of preparing the test then does a standardization to provide the final scores for each person. These are the values they gave at the end ...
1answer
39 views
### Is it required for panel data to use dummy variables?
I am doing a research considering seven countries and I have panel data. My question is: do I need to include dummy variables every time I use panel data in regression, or is enough to do it as a ...
0answers
13 views
### Assigning weights in crowdsourcing or voting system
In Multi-weighted voting system, how is each individual worker is assigned with a weight? What is the concept behind it?
1answer
270 views
### Sampling with replacement
Suppose you have 40 different books (20 math books, 15 history books, and 5 geography books). Let M = math books, H = history books, G = geography books You pick 5 books at random, with replacement, ...
2answers
137 views
### Meaning of hypothesis test for “μ = 25”; isn't it impossible?
I'm working through a stats textbook and have a question of the form: You will perform a significance test of $H_0: μ=25$ based on an SRS of $n=25$. Assume $σ=5$. I'm stuck on the 'equals' part ...
2answers
79 views
### Why is the Likelihood function NOT a case of the inverse fallacy?
This may be a trivial question, but as a research psychologist I do not have a robust statistics background to answer it. It appears to me that the likelihood function--\$L(\theta | \text{data}) = ...
11answers
2k views
### What is a standard deviation?
What is a standard deviation, how is it calculuated and what is its use in statistics?
2answers
182 views
### Can a sample be too large for ANOVA or a t-test?
I have close to a million data sets and whenever I run mean comparison test, either ANOVA or a t-test, I get a significance level of less than .0001 on SPSS. I'm concerned that my sample is so large ...
1answer
29 views
### Finding similar users
I am working on a problem in the online advertising space. I am trying to identify consumers similar to the set of consumers who have bought a product in the past (have 'converted'). If I can identify ...
1answer
61 views
### Soccer contest rule puzzled me
I have a little knowledge about soccer contest rule, there is a question I cannot understand. The dotplot below shows the numbers of goals scored by 20 teams playing in a city's high school soccer ...
1answer
106 views
### Given a historical disease incident rate of x per 100,000, what is the probability of y per 100,000?
Excuse the rather basic question, but I was reading this article on thyroid cancer in Fukushima and it was reported that 3 cases in 38,000 children were found in the previous fiscal year. Another web ...
2answers
136 views
### Scale parameters — How do they work, why are they sometimes dropped?
I'm having difficulty wrapping my head around scale parameters. How exactly do they work? Why are they sometimes ignored? (in other words, when is it important to preserve them in a calculation?) ...
2answers
212 views
### Simple probability question
A person from group A has 20% chance of having some characteristic A person from group B has 30% chance of having the same characteristic How can I calculate the probability of a person belonging to ...
1answer
130 views
### Conceptual distinction between heteroscedasticity and non-stationarity
I'm having trouble distinguishing between the concepts of scedasticity and stationarity. As I understand them, heteroscedasticity is differing variabilities in sub-populations and non-stationarity is ...
2answers
203 views
### Why are probability distributions denoted with a tilde?
What is the meaning of the tilde when specifying probability distributions? For example: $$Z \sim \mbox{Normal}(0,1).$$
0answers
47 views
### How to assess mean estimate reliability?
Disclaimer: I am trying to fresh up my statistics from basic courses in the uni, so undeniably this is a very fundamental question, I hope you can keep that in mind in your answers. Background: Given ...
3answers
152 views
### SPSS Interesting analysis types: Beyond the basics
Following a previous question I made I'd like to state once more that this is my first attempt at analyzing data with SPSS. I have encoded and transformed my data properly and I also used some of the ...
0answers
70 views
### Is GLMM correct for data with several binary outcome variables per subject?
My experimental design is roughly this: I have observations of 5 categories of behavior in a specific situation, with two individuals interacting at a time. In each dyad, I only collect data from one ...
1answer
115 views
### An analytical framework for considering the geometric mean
Is there an analytical method of looking at the geometric mean that will allow one to break it down to its various components? The focus of the question is more for financially related returns, but I ...
1answer
46 views
### How does further dividing the population increase confidence?
I am reading up on significance tests but I can't quite grasp where the "number of groups" from my example fits in. When I ask 100 people which party they would vote for, imagine I get these results: ...
1answer
513 views
### Question about independence assumption for ANOVA, t-test, and non-parametric tests
I'm a novice in statistics and I have some confusion about the assumption of independence for statistical tests. I searched the Internet and some information says that for the t-test, the ...
4answers
364 views
### Explain regression to 7 years old [closed]
Please give a clean, simple, explanatory answer that any 7 yr old can understand. You can also link to a regression guide that is very good and simple. Should be fully explanatory all the way ...
2answers
230 views
### How does regression analysis help one understand how the typical value of the dependent variable change?
regression analysis helps one understand how the typical value of the dependent variable changes... -- http://en.wikipedia.org/wiki/Regression_analysis What does this mean? What "typical ...
0answers
76 views
### How to understand / visualize the error surface in online learning algorithms
I have a question about the shape of the error surface for online gradient descent algorithms. Take into account that I am trying to translate my specific question into a more general and idealized ...
2answers
126 views
### What does regression analysis estimate Exactly? [closed]
"regression analysis estimates the conditional expectation of the dependent variable given the independent variables...." so it's used to estimate the conditional expectation... which is basically a ...
2answers
425 views
### Does the probability of multiple independent events follow a normal distribution?
I'm heavily into role-playing systems, scripted a set of utilities in ruby for pen-and-paper games and I sort of understood statistics when I took it, but I could never for the life of me figure out ...
2answers
602 views
### Interpreting t-value as a measure of evidence
Came across an article on the web on "evaluation of training effectiveness". The author suggests that the "t-value" obtained from a "Paired t-test" conducted using pre-test and post test scores can ...
3answers
3k views
### Difference between standard error and standard deviation
I'm a beginner in statistics. I'm struggling to understand the difference between the standard error and the standard deviation. How are they different and why do you need to measure standard error? ...
2answers
91 views
### Sample size to tell if more than X% of the population can do <thing>
I want to perform a test to determine (with 95% confidence) whether at least 70% of a population can perform some task. The test involves sitting a randomly chosen person down and them attempting a ...
2answers
2k views
### How should one interpret the comparison of means from different sample sizes?
Take the case of book ratings on a website. Book A is rated by 10,000 people with an average rating of 4.25 and the variance $\sigma = 0.5$. Similarly Book B is rated by 100 people and has a rating of ...
1answer
79 views
### Does it matter how you pull marbles out of this vat?
There's this large, well-mixed vat containing some unknown but finite varieties of marbles: $$\{v_{1},v_{2},v_{3},...,v_{k}\}$$ Some varieties are more common than others. Marbles are going to be ...
2answers
1k views
### Yates continuity correction for 2 x 2 contingency tables
I would like to gather input from people in the field about the Yates continuity correction for 2 x 2 contingency tables. The Wikipedia article mentions it may adjust too far, and is thus only used in ...
2answers
72 views
### How to detect variables discriminating a sample from the rest of the samples?
I have a traditionally structured data set where rows are observations and columns are variables. There are only a few observations but comparatively more variables. The observations are regions of a ...
0answers
112 views
### Comparing drug effects in different conditions
What is the ‘correct’ way to compare the effect of a drug in different conditions? For example, if I have conditions a, b, c and d, where there are different numbers of ‘control’ and ‘drug’ cells in ...
3answers
126 views
### Frequentist ability to use observations
I'm not an expert in statistics, but I gather there is disagreement whether a "frequentist" or "Bayesian" interpretation of probability is the "right" one. From Wagenmakers et. al p. 183: Consider ...
3answers
179 views
### Are these populations non-random and different?
What would be the simplest, straightest-forwardest way to determine the following: Whether A,B,C is non-randomly distributed in each single group (IE: are there more A's in group 1 than random ...
1answer
353 views
### Basic question about cross validation
A very basic question about cross validation for Neural Networks. Do I have to create a new network for each fold or do I have to keep the network and incrementally train it with the k-th training ...
1answer
159 views
### Problem with character vectors and linear regression in R
Quick question. I want to perform a linear regression that looks like this: lm(y ~ x1 + x2 + x3 + x4 +x5, mydata) This works fine if I manually write out this ...
2answers
406 views
### Proper ways to perform time series and ARIMA
Note that I do most of my analysis using R and Excel. Let's take this data set for example. I modified it as the data itself is proprietary: the years are also different: ...
1answer
59 views
### Likelihood of errors based on a sample
I frequently do something like: load a bunch of data, and then scan some fraction of it randomly to verify that no errors occurred. The more data I verify, the greater my certainty that no errors ...
0answers
69 views
### R: looking for “time” clusters in a data set
I am new to R and seeking some advice. I have a set (~20M) of data describing on which step a process did fail or succeeded: ...
1answer
305 views
### When can I suppress the intercept using treatreg?
Can I suppress the intercept if I know the treatment will be zero if the independent variables are zero. Also, can I suppress the intercept if I know the right hand side of the primary regression ...
1answer
121 views
### Generalization of basic probability question
This is the motivation for my question: Suppose we have $n$ tickets in a bag, and we draw $k$ of them uniformly at random without replacement. Now, repeat the same procedure independently (same $n$ ...
2answers
281 views
### t-test that means are the same?
I think this is simple, and I am wrong-thinking it.... t-tests are used to test the null hypothesis that the means are equal; $H_0 : \beta_1 = \beta_2$, $H_1 : \beta_1 \neq \beta_2$ But I want to ...
0answers
526 views
### How do I interpret this Weibull plot?
I was exploring Weibull analysis for understanding reliability of two specific specimen. I used the R package, "weibulltoolkit" and "survival" to get the plot in question. The dataset is big so I am ...
0answers
92 views
### Elliptic regression, basic conceptual question
I'm considering circular regression and elliptic regression on a computational and conceptual basis. If we fit an ellipse to our data then we deal with the principal components as reference for the ...
1answer
1k views
### How to compare proportions across different groups with varying population sizes?
I am trying to understand what statistical measures I can use to compare three groups having varying populations to understand which group is bad (highest probability of death or most vulnerable or ...
7answers
3k views
### If mean is so sensitive, why use it in the first place?
It is a known fact that median is resistant to outliers. If that is the case, when and why would we use the mean in the first place? One thing I can think of perhaps is to understand the presence of ...
2answers
9k views
### Calculating the 95th percentile: Comparing normal distribution, R Quantile, and Excel approaches
I was trying to compute the 95th percentile on the following dataset. I came across a few online references of doing it. Approach 1: Based on sample data The first one tells me to obtain the ...
1answer
187 views
### Value based on a supplied standard-deviation
Sorry if this is a very basic question, my statistics knowledge is very low! I've got a dataset which is basically a set of greyscale values in a greyscale texture (ie. 0-255). What I want to do is ...
http://mathoverflow.net/questions/64211?sort=newest
## Roth’s theorem and Behrend’s lower bound
Roth's theorem on 3-term arithmetic progressions (3AP) is concerned with the value of $r_3(N)$, which is defined as the cardinality of the largest subset of the integers between 1 and N with no non-trivial 3AP. The best results as far as I know are that
$CN(\log\log N)^5/\log N \ge r_3(N) \ge N\exp(-D\sqrt{\log N})$
for some constants $C,D>0$. The upper bound is by Tom Sanders in 2010 and the lower bound is by Felix Behrend dating back to 1946. My question is this: even though the upper and lower bounds are still quite a bit apart, I hear mathematicians hinting that something close to Behrend's lower bound might be giving the correct order, such as in Ben Green's paper "Roth's theorem in the primes", and I am curious as to where this is coming from. Is it because there has been no significant improvement on the lower bound (whereas there has been lots of work on the upper bound)? Or in analogy to a similar type of problem? Or maybe just a casual remark? Or some other reason?
Thank you.
-
Another example of such musing (this time from Gowers): gilkalai.wordpress.com/2008/07/10/… – Kevin O'Bryant May 8 2011 at 3:30
Thanks to all for your responses! – Yui Nishizawa May 17 2011 at 20:45
## 2 Answers
Dear Yui,
It's only slightly more than a casual remark. Our inability to find a better example is certainly a big reason for believing that Behrend's bound is correct. Julia Wolf and I slightly rehashed the proof of Behrend's bound
http://arxiv.org/abs/0810.0732
When formulated this way, I think the construction looks both fairly natural and fairly unimprovable.
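(For readers who have not seen it, here is a rough illustrative sketch of Behrend's construction, digits on a sphere, with small and entirely arbitrary parameters d and k; it is only a toy check, not code from the paper above.)

```python
from itertools import product

# Behrend's idea: integers whose base-(2d-1) digits all lie in {0,...,d-1} add without
# carries, and a sphere contains no three collinear points; so fixing the Euclidean
# length of the digit vector gives a set with no non-trivial 3-term progression.
d, k = 3, 4                    # small, arbitrary parameters; the ambient interval is [0, (2d-1)^k)
base = 2 * d - 1
spheres = {}
for digits in product(range(d), repeat=k):
    spheres.setdefault(sum(t * t for t in digits), []).append(digits)
best = max(spheres.values(), key=len)                      # the most popular radius wins
A = sorted(sum(t * base**i for i, t in enumerate(v)) for v in best)
print(len(A), "elements below", base**k)

S = set(A)                                                 # brute-force 3-AP check
assert not any(x != z and (x + z) % 2 == 0 and (x + z) // 2 in S for x in A for z in A)
print("no non-trivial 3-term arithmetic progressions")
```

Optimising over d and k (roughly k of order $\sqrt{\log N}$) is what produces the $N\exp(-D\sqrt{\log N})$ bound quoted in the question.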
Also, there are beginning to be hints as to the correct behaviour coming from apparently similar equations such as $x_1 + x_2 + x_3 = 3x_4$. I think that Schoen and others, using work of Sanders, may have improved the bounds for this equation to $N \exp(-\log^c N)$, though I'm not certain about this.
Despite these remarks it is not known whether better bounds for Roth's theorem follow from any other more "natural" conjectures, such as the Polynomial Freiman-Ruzsa conjecture, so any suggestion that Behrend is sharp is somewhat tenuous. Some other people think differently, I believe - that it may not be sharp.
-
If indeed there are now matching bounds for Behrend-type examples for the question mentioned by Ben, this seems to be dramatic support for the belief by many experts that Behrend's bound represents the right behavior. – Gil Kalai May 8 2011 at 14:43
1
I believe that Schoen and others have only managed this bound for six or more variables (e.g. $x_1+x_2+x_3+x_4+x_5=5x_6$). – Thomas Bloom May 9 2011 at 5:35
Two questions: Is there a reference or a link to Schoen et al.'s result? Is there some similar result (showing an exponentially small upper bound for the density) for a version of the cap set problem? – Gil Kalai May 25 2011 at 15:40
Their paper has just been uploaded to the arXiv, giving the result for six or more variables. They also prove the same bound for the finite field case, i.e. the analogue of the capset problem. – Thomas Bloom Jun 9 2011 at 15:01
The link is front.math.ucdavis.edu/1106.1601 . If I understand correctly, the finite field analogues still have Behrend-like bounds, so not bounds of the form $(p-t)^n$ for $t>0$. – Gil Kalai Jul 10 2011 at 8:44
I am looking forward to answers better than mine, but as a start:
My understanding is that, yes, part of the reason is how difficult it is to improve the lower bound. After all, after Behrend it took 60+ years to get an improvement on it. And, it is (as far as I know) thus not at all clear where a much larger example should come from; e.g., there is no construction of a much larger set where one suspects the set has the property but one can 'just' not prove it.
In contrast, for the upper bound the progress was more continuous, with a variety of improvements over the years. Also, at the moment there is an ongoing effort (with whose details I am unfamiliar, but there is a Polymath project, Polymath6, see here) to get further improvements, by exploiting a recent advance on a closely related problem.
This closely related problem is a so-called finite field analogue; instead of considering the problem for integers, one considers it for subsets of $r$-dimensional vector spaces over the field with $3$ elements, that is, of $\mathbb{F}_3^r$.
Let $r'_3(r)$ denote the analogous constant. This constant $r'_3(r)$ is very similar, but in certain aspects easier to handle. For example, the bound $r'_3(r) \ll 3^r / r$ has been known for well more than a decade; this corresponds to $N/ \log N$, as $3^r$ is the size of the structure. Very recently, this was improved to $$ r'_3(r) \ll 3^r / r^{1+\epsilon}. $$
And there is work being done to carry over this progress (the advance of Bateman and Katz) to the other situation, for an upper bound of $N / (\log N)^{1+\epsilon}$.
So for the upper bound there is continuous progress and further hope for progress, as there are ideas for improvement, while the lower bound somehow seems more stubborn and unpredictable.
There are various blog posts on the finite field analogue and also the actual problem asked about.
One by Tao, http://terrytao.wordpress.com/2007/02/23/open-question-best-bounds-for-cap-sets/ , yet note this is four years old, and there has been progress since. (It also mentions differing opinions on the finite field analogue; so there is no universal conjecture there.)
And a couple of recent and long ones by Gowers
http://gowers.wordpress.com/2011/01/18/more-on-the-cap-set-problem/
http://gowers.wordpress.com/2011/02/05/polymath6-a-is-to-b-as-c-is-to/
So, in summary, I believe that a large part of it is that it is unclear where a much larger example could possibly come from, while the upper bounds seem more flexible: they are very hard to actually improve, yet there seem to be more hope and more ideas about what could work.
However, as there seems to be no clear consensus on the finite field analogue, it might be the case that the opinions are actually not as fixed on the problem at hand either.
And, to end with a purely sociological reason: if a very convincing argument were known, it should be well known. So perhaps none is known.
I hope to be wrong on the last point, and to learn of one from this question.
-
http://mathoverflow.net/questions/87255/linear-equivalence-of-divisors-in-smooth-algebraic-surface
## Linear equivalence of divisors in smooth algebraic surface
Let's assume $X$ is a smooth algebraic surface and $C$ a curve containing a smooth point $p_0\in X$; then there exist divisors $H_1$ and $H_2$, neither of which contains $p_0$, such that $C+H_1$ is linearly equivalent to $H_2$.
-
5
This is a fairly straightforward consequence of the definition of ample divisor. It's probably useful to work out the details for yourself. – Artie Prendergast-Smith Feb 1 2012 at 17:23
Besides, it is worded like a homework problem, not as a question. – Angelo Feb 2 2012 at 4:40
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Shannon_capacity
# Shannon-Hartley theorem
(Redirected from Shannon capacity)
In information theory, the Shannon-Hartley theorem states the maximum rate at which error-free digital data (that is, information) can be transmitted over a communication link with a specified bandwidth in the presence of noise interference. The law is named after Claude Shannon and Ralph Hartley. The Shannon limit or Shannon capacity of a communications channel is the theoretical maximum information transfer rate of the channel.
## Theorem
Proved by Claude Shannon in 1948, the theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. The theorem does not describe how to construct the error-correcting method; it only tells us how good the best possible method can be. Shannon's theorem has wide-ranging uses in both communications and data storage. This theorem is of foundational importance to the modern field of information theory.
If we had such a thing as an infinite-bandwidth, noise-free analog channel, we could transmit unlimited amounts of error-free data over it per unit of time. However, real-life signals have both bandwidth and noise-interference limitations.
Shannon and Hartley asked: How do bandwidth and noise affect the rate at which information can be transmitted over an analog channel? Surprisingly, bandwidth limitations alone do not impose a cap on maximum information transfer. This is because it is still possible (at least in a thought-experiment model) for the signal to take on an infinite number of different voltage levels on each cycle, with each slightly different level being assigned a different meaning or bit sequence. If we combine both noise and bandwidth limitations, however, we do find there is a limit to the amount of information that can be transferred, even when clever multi-level encoding techniques are used. This is because the noise signal obliterates the fine differences that distinguish the various signal levels, limiting in practice the number of detection levels we can use in our scheme.
The Shannon theorem states that, given a channel with information capacity C over which information is transmitted at a rate R, if
$R \le C$
there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that theoretically, it is possible to transmit information without error up to a limit, C.
The converse is also important. If
$R > C$
the probability of error at the receiver increases without bound. This implies that no useful information can be transmitted beyond the channel capacity.
## Capacity of a channel with additive white Gaussian noise
Considering all possible multi-level and multi-phase encoding techniques, Shannon's theorem gives the theoretical maximum rate of clean (or arbitrarily low bit error rate) data C with a given average signal power that can be sent through an analog communication channel subject to additive, white, Gaussian-distribution noise interference:
$C = BW \times \log_2(1+S/N)$
where
C is the channel capacity in bits per second inclusive of error correction;
BW is the bandwidth of the channel in hertz; and
S/N is the signal-to-noise ratio of the communication signal to the Gaussian noise interference expressed as a straight power ratio (not as decibels)
For large or small signal-to-noise ratios, this formula can be approximated.
If S/N >> 1, then C ≈ 0.332 · BW · (S/N expressed in dB).
If S/N << 1, then C ≈ 1.44 · BW · (S/N as a plain power ratio).
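These two approximations follow from the behaviour of the logarithm; writing $(S/N)_{dB} = 10\log_{10}(S/N)$, a quick derivation (a routine calculation, added here for completeness) is:
$C \approx BW \log_2(S/N) = \frac{BW}{10 \log_{10} 2} (S/N)_{dB} \approx 0.332 \cdot BW \cdot (S/N)_{dB}$ for $S/N \gg 1$, and
$C = BW \frac{\ln(1+S/N)}{\ln 2} \approx \frac{BW}{\ln 2} \cdot \frac{S}{N} \approx 1.44 \cdot BW \cdot \frac{S}{N}$ for $S/N \ll 1$.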
Simple schemes such as "send the message 3 times and use a best 2 out of 3 voting scheme if the copies differ" are inefficient users of bandwidth, and thus are far from the Shannon limit. Advanced techniques such as Reed-Solomon codes and, more recently, Turbo codes come much closer to reaching the theoretical Shannon limit, but at a cost of high computational complexity. With Turbo codes and the computing power in today's digital signal processors, it is now possible to reach within 1/10 of one decibel of the Shannon limit.
The V.34 modem standard advertises a rate of 33.6 kbit/s, and V.90 claims a rate of 56 kbit/s, apparently in excess of the Shannon limit (telephone bandwidth is 3.3 kHz). In fact, neither standard actually reaches the Shannon limit, but closely approaches it. The speed improvement of V.90 was made possible by the elimination of an additional step of analog-to-digital conversion by the use of fully digital equipment at the other end of a modem connection. This improves the signal to noise ratio, which in turn produces the required headroom to exceed 33.6 kbit/s which was otherwise near the Shannon limit.
## Examples
1. If the S/N is 20 dB, and the bandwidth available is 4 kHz, which is appropriate for telephone communications, then C = 4 log2(1 + 100) = 4 log2 (101) = 26.63 kbit/s. Note that the value of 100 is appropriate for an S/N of 20 dB.
2. If it is required to transmit at 50 kbit/s, and a bandwidth of 1 MHz is used, then the minimum S/N required is given by 50 = 1000 log2(1+S/N), so S/N = 2^(C/BW) − 1 = 0.035, corresponding to an S/N of -14.5 dB. This shows that it is possible to transmit using signals which are actually much weaker than the background noise level, as in spread-spectrum communications.
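Both worked examples can be checked with a few lines of code (a throwaway sketch; the function name and the numbers are ours, not part of any standard library):

```python
from math import log2, log10

def shannon_capacity(bw_hz, snr_linear):
    """C = BW * log2(1 + S/N), with S/N as a plain power ratio; result in bits per second."""
    return bw_hz * log2(1 + snr_linear)

# Example 1: 4 kHz bandwidth, S/N = 20 dB (a power ratio of 100)
print(shannon_capacity(4e3, 10**(20 / 10)))   # ~26630 bit/s, i.e. ~26.63 kbit/s

# Example 2: 50 kbit/s over 1 MHz; the minimum S/N is 2**(C/BW) - 1
snr = 2**(50e3 / 1e6) - 1
print(snr, 10 * log10(snr))                   # ~0.035, i.e. about -14.5 dB
```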
## References
• C. E. Shannon, The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press, 1949 (reprinted 1998).
• Herbert Taub, Donald L. Schilling, "Principles of Communication Systems", McGraw-Hill, 1986
http://mathhelpforum.com/advanced-algebra/95315-finding-basis.html
# Thread:
1. ## Finding a basis for...
This was a bonus question on my test yesterday (I hope I can remember it correctly).
Q: Let W be a subspace of $P_{3}$ such that P(0)=0 and let U be a subspace of $P_{3}$ such that P(1)=0. Find a basis for both W and U.
A: For W I figured the basis ought to be $\{x,x^{2},x^{3}\}$ since I don't want $a_{0}$ to have any value. But, I am stuck on U. I think I am seeing it all wrong. My x's are my vectors, correct? As in, $\{1,x,x^{2},x^{3}\}$ is my basis for all third degree polynomials, which have the form $a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}$. I tried it a couple of different ways on the exam, but I was told they were all wrong, though I was close. Even so, I think I am stuck in a rut and I am having a hard time starting from scratch. I'm having trouble finding a systematic approach.
Thanks
2. Originally Posted by Danneedshelp
This was a bonus question on my test yesterday (I hope I can remember it correctly).
Q: Let W be a subspace of $P_{3}$ such that P(0)=0 and let U be a subspace of $P_{3}$ such that P(1)=0. Find a basis for both W and U.
A: For W I figured the basis ought to be $\{x,x^{2},x^{3}\}$ since I don't want $a_{0}$ to have any value. But, I am stuck on U. I think I am seeing it all wrong. My x's are my vectors, correct? As in, $\{1,x,x^{2},x^{3}\}$ is my basis for all third degree polynomials, which have the form $a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}$. I tried it a couple of different ways on the exam, but I was told they were all wrong, though I was close. Even so, I think I am stuck in a rut and I am having a hard time starting from scratch. I'm having trouble finding a systematic approach.
Thanks
You are correct for W. For U, think about $\{x-1, x^2- 1, x^3- 1\}$, exactly like you have for W but "shifted".
For a more "systematic" method try this. A general vector in P(3) is $a+ bx+ cx^2+ dx^3$ and your subset is defined by $p(1)= a+ b+ c+ d= 0$. We can solve for any one of the coefficients as a function of the other 3 so this is a subspace of dimension 3. For example, a= -b- c- d. Now here is a very nice method for finding bases when you have a relation like that. Take b= 1, c= d= 0 and you get a= -1. That is, -1+ x or x-1 is in the set. Take c= 1, b= d= 0. Again a= -1 so we get $-1+x^2$. Finally, take d=1, b= c= 0. Yet again a= -1 so $-1+ x^3$. Taking each constant equal to 1 guarantees that those are independent so that is a basis, the same one I mentioned before.
Of course, we could have solved for any of the four coefficients. If, say, we had solved for d= -a- b- c, then: with a= 1, b=c= 0, we get $1- x^3$. With b=1, a= c= 0, we get $x- x^3$. With c= 1, a= b= 0, we get $x^2- x^3$ getting $\{1- x^3, x- x^3, x^2-x^3\}$ as a different basis for that subspace.
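(A quick computer-algebra sanity check of the basis $\{x-1,\ x^2-1,\ x^3-1\}$ for U; this is just an illustrative sketch using sympy, not something the argument above needs.)

```python
import sympy as sp

x, b, c, d = sp.symbols('x b c d')
basis = [x - 1, x**2 - 1, x**3 - 1]      # candidate basis for U = {p in P_3 : p(1) = 0}

# each candidate vanishes at x = 1
print([p.subs(x, 1) for p in basis])      # [0, 0, 0]

# linear independence: the 3x4 coefficient matrix (coefficients of 1, x, x^2, x^3) has rank 3
M = sp.Matrix([[p.coeff(x, j) for j in range(4)] for p in basis])
print(M.rank())                           # 3

# a general p with p(1) = 0, i.e. a = -b - c - d, equals b(x-1) + c(x^2-1) + d(x^3-1)
p = (-b - c - d) + b*x + c*x**2 + d*x**3
print(sp.expand(p - (b*(x - 1) + c*(x**2 - 1) + d*(x**3 - 1))))   # 0
```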
3. Originally Posted by Danneedshelp
This was a bonus question on my test yesterday (I hope I can remember it correctly).
Q: Let W be a subspace of $P_{3}$ such that P(0)=0 and let U be a subspace of $P_{3}$ such that P(1)=0. Find a basis for both W and U.
I was really confused by your question at first as I have never seen this notation before for polynomials and thought it was something to do with a projective space. Does anyone know if this is a standard notation?
Also, it seems to me that the question as posed does not have an answer. Surely U and W should be the maximal subspaces with this property; otherwise you can't tell whether they have full rank.
4. Originally Posted by alunw
I was really confused by your question at first as I have never seen this notation before for polynomials and thought it was something to do with a projective space. Does anyone know if this is a standard notation?
Also it seems to me that the question as posed does not have an answer. Surely U and W should be the maximal subspaces with this property otherwise you can't tell whether they have full rank.
Yes, that is standard notation and "P(n)", the vector space (or algebra) of polynomials of degree less than or equal to n, is a common example in Linear Algebra. And "maximal" would be understood here.
5. Thanks, but I must say I don't like either the notation or the convention that maximal would be understood.
There's a much better and more widely used notation for the space of all real polynomials: $\mathbb{R}[x]$ and some notation like $\mathbb{R}[x]_{3}$ would convey a lot more information. Personally I'd prefer a name like Poly(R,{x},3) since that would be something one could adapt for a computer program and have something much more likely to be meaningful to someone coming across things for the first time.
What's the benefit of writing "a subspace" instead of "the maximal subspace"? To my way of thinking it's going to lead to one assuming a subspace is maximal when it's not meant to be. In group theory homomorphisms are almost always surjective, but plenty of writers still insert the word surjective rather than saying this is to be understood.
6. Thank you very much HallsofIvy!
Ugh, I'm kickin myself already! Makes perfect sense. I was just getting confused with the notation and thought each a, b, c, and d was distributed through every vector. I don't know why I was thinking that when I knew what the general form was.
## All cases discussed before:
I have tried to solve for variables other than 'a' and have obtained the results. I have attached that as an image. Hope it helps.
Attached Thumbnails
http://terrytao.wordpress.com/2007/04/08/simons-lecture-iii-structure-and-randomness-in-pde/
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Simons Lecture III: Structure and randomness in PDE
8 April, 2007 in math.AP, question, talk, travel | Tags: concentration compactness, nonlinear PDE, randomness, Ricci flow, Simons lecture, solitons, structure
[This lecture is also doubling as this week's "open problem of the week", as it (eventually) discusses the soliton resolution conjecture.]
In this third lecture, I will talk about how the dichotomy between structure and randomness pervades the study of two different types of partial differential equations (PDEs):
• Parabolic PDE, such as the heat equation $u_t = \Delta u$, which turn out to play an important role in the modern study of geometric topology; and
• Hamiltonian PDE, such as the Schrödinger equation $u_t = i \Delta u$, which are heuristically related (via Liouville’s theorem) to measure-preserving actions of the real line (or time axis) ${\Bbb R}$, somewhat in analogy to how combinatorial number theory and graph theory were related to measure-preserving actions of ${\Bbb Z}$ and $S_\infty$ respectively, as discussed in the previous lecture.
(In physics, one would also insert some physical constants, such as Planck’s constant $\hbar$, but for the discussion here it is convenient to normalise away all of these constants.)
Observe that the form of the heat equation and Schrödinger equation differ only by a constant factor of i (in close analogy with Wick rotation). This makes the algebraic structure of the heat and Schrödinger equations very similar (for instance, their fundamental solutions also only differ by a couple factors of i), but the analytic behaviour of the two equations turns out to be very different. For instance, in the category of Schwartz functions, the heat equation can be continued forward in time indefinitely, but not backwards in time; in contrast, the Schrödinger equation is time reversible and can be continued indefinitely in both directions. Furthermore, as we shall shortly discuss, parabolic equations tend to dissipate or destroy the pseudorandom components of a state, leaving only the structured components, whereas Hamiltonian equations instead tend to disperse or radiate away the pseudorandom components from the structured components, without destroying them.
Let us now discuss parabolic PDE in more detail. We begin with a simple example, namely how the heat equation can be used to solve the Dirichlet problem, of constructing a harmonic function $\Delta u_\infty = 0$ in a nice domain $\Omega$ with some prescribed boundary data. As this is only an informal discussion I will not write down the precise regularity and boundedness hypotheses needed on the domain or data. The harmonic function will play the role here of the “structured” or “geometric” object. From calculus of variations we know that a smooth function $u_\infty: \Omega \to {\Bbb R}$ is harmonic with the specified boundary data if and only if it minimises the Dirichlet energy $E(u) := \frac{1}{2} \int_\Omega |\nabla u|^2$, which is a convex functional on u, with the prescribed boundary data. One way to locate the harmonic minimum $u_\infty$ is to start with an arbitrary smooth initial function $u_0: \Omega \to {\Bbb R}$, and then perform gradient flow $\partial_t u = -\frac{\delta E}{\delta u}(u) = \Delta u$ on this functional, i.e. solve the heat equation with initial data $u(0) = u_0$. One can then show (e.g. by spectral theory of the Laplacian) that regardless of what (smooth) data $u_0$ one starts with, the solution u(t) to the heat equation exists for all positive time, and converges to the (unique) harmonic function $u_\infty$ on $\Omega$ with the prescribed boundary data in the limit $t \to \infty$. Thus we see that the heat flow removes the “random” component $u_0 - u_\infty$ of the initial data over time, leaving only the “structured” component $u_\infty$.
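As a toy illustration of this "heat flow washes out the random part" picture, here is a minimal numerical sketch (a square grid standing in for $\Omega$, with prescribed boundary values held fixed; the grid size, step size and boundary data are arbitrary choices):

```python
import numpy as np

# Explicit heat flow u_t = Laplacian(u) on a grid, with Dirichlet boundary data held fixed.
# Whatever interior data we start with, the interior relaxes to the discrete harmonic
# function with that boundary data (the minimiser of the discrete Dirichlet energy).
n, dt, steps = 50, 0.2, 20000          # dt <= 1/4 keeps the explicit scheme stable
u = np.random.rand(n, n)               # "random" initial data in the interior
u[0, :], u[-1, :], u[:, 0], u[:, -1] = 1.0, 0.0, 0.0, 0.0   # prescribed boundary values
for _ in range(steps):
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:] - 4 * u[1:-1, 1:-1])
    u[1:-1, 1:-1] += dt * lap          # boundary entries are never updated
print("max |discrete Laplacian| left in the interior:", np.abs(lap).max())
```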
There are many other settings in geometric topology in which one wants to locate a geometrically structured object (e.g. a harmonic map, a constant-curvature manifold, a minimal surface, etc.) within a certain class (e.g. a homotopy class) by minimising an energy-like functional. In some cases one can achieve this by brute force, creating a minimising sequence and then extracting a limiting object by some sort of compactness argument (as is for instance done in the Sacks-Uhlenbeck theory of minimal 2-spheres), but then one often has little control over the resulting structured object that one obtains in this manner. By using a parabolic flow (as for instance done in the work of Eells-Sampson to obtain harmonic maps in a given homotopy class via harmonic map heat flow) one can often obtain much better estimates and other control on the limit object, especially if certain curvatures in the underlying geometry have a favourable sign.
The most famous recent example of the use of parabolic flows to establish geometric structure from topological objects is, of course, Perelman’s use of the Ricci flow applied to compact 3-manifolds with arbitrary Riemannian metrics, in order to establish the Poincaré conjecture (for the special case of simply connected manifolds) and more generally the geometrisation conjecture (for arbitrary manifolds). [As I understand it, there are some minor but non-trivial technical issues left to clear up with the latter argument, but it seems that these will be resolved soon. The former argument, however, has been checked rather thoroughly.] Perelman’s work showed that Ricci flow, when applied to an arbitrary manifold, will eventually create either extremely geometrically structured, symmetric manifolds (e.g. spheres, hyperbolic spaces, etc.), or singularities which are themselves very geometrically structured (and in particular, their asymptotic behaviour is extremely rigid and can be classified completely). By removing all of the geometric structures that are generated by the flow (via surgery, if necessary) and continuing the flow indefinitely, one can eventually remove all the “pseudorandom” elements of the initial geometry and describe the original manifold in terms of a short list of very special geometric manifolds, precisely as predicted by the geometrisation conjecture. It should be noted that Richard Hamilton had earlier carried out exactly this program assuming some additional curvature hypotheses on the initial geometry; also, when Ricci flow is instead applied to two-dimensional manifolds (surfaces) rather than three, he observed that Ricci flow extracts a constant-curvature metric as its structured component of the original metric, giving an independent proof of the uniformisation theorem (see this recent paper for full details).
Let us now leave parabolic PDE and geometric topology and now discuss Hamiltonian PDE, specifically those of Schrödinger type. (Other classes of Hamiltonian PDE, such as wave or Airy type equations, also exhibit similar features, but we will restrict attention to Schrödinger for sake of concreteness.) These equations formally resemble Hamiltonian ODE, which can be viewed as finite-dimensional measure-preserving dynamical systems with a continuous time parameter $t \in {\Bbb R}$. However, this resemblance is not rigorous because Hamiltonian PDE have infinitely many degrees of freedom rather than finitely many; at a technical level, this means that the dynamics takes place on a highly non-compact space (e.g. the energy surface), whereas much of the theory of finite-dimensional dynamics implicitly relies on at least local compactness of the domain. Nevertheless, in many dispersive settings (e.g. when the spatial domain is Euclidean) it seems that almost all of the infinitely many degrees of freedom are so “pseudorandom” or “radiative” as to have an essentially trivial (or more precisely, linear and free) impact on the dynamics, leaving only a mysterious “core” of essentially finite-dimensional (or more precisely, compact) dynamics which is still very poorly understood at present.
To illustrate these rather vague assertions, let us first begin with the free linear Schrödinger equation $iu_t + \Delta u = 0$, where $u: {\Bbb R} \times {\Bbb R}^n \to {\Bbb C}$ has some specified initial data $u(0) = u_0: {\Bbb R}^n \to {\Bbb C}$, which for simplicity we shall place in the Schwartz class. It is not hard to show, using Fourier analysis, that a unique smooth solution, well-behaved at spatial infinity, to this equation exists, and furthermore obeys the $L^2({\Bbb R}^n)$ conservation law
$\int_{{\Bbb R}^n} |u(t,x)|^2 dx = \int_{{\Bbb R}^n} |u_0(x)|^2\ dx$, (*)
which can be interpreted physically as the law of conservation of probability. By using the fundamental solution for this equation, one can also obtain the pointwise decay estimate
$\lim_{t \to \infty} |u(t,x)| = 0$ for all $x \in {\Bbb R}^n$
and in a similar spirit, we have the local decay estimate
$\lim_{t \to \infty} \int_K |u(t,x)|^2 dx = 0$ for all compact $K \subset {\Bbb R}^n$ (**)
The two properties (*) and (**) may appear contradictory at first, but what they imply is that the solution is dispersing or radiating its (fixed amount of) $L^2$ mass into larger and larger regions of space, so that the amount of mass that any given compact set captures will go to zero as time goes to infinity. This type of dispersion – asymptotic orthogonality to any fixed object – should be compared with the notion of strong mixing discussed in the previous lecture. The analogous notion of weak mixing, by the way, is the slightly weaker statement
$\lim_{T \to \infty} \frac{1}{T} \int_0^T \int_K |u(t,x)|^2 dx dt = 0$ for all compact $K \subset {\Bbb R}^n$. (***)
[There is also a very useful and interesting quantitative version of this analysis, known as profile decomposition, in which a solution (or sequence of solutions) to the free linear Schrödinger equation can be split into a small number of "structured" components which are localised in spacetime and in frequency, plus a "pseudorandom" term which is dispersed in spacetime, and is small in various useful norms. These decompositions have recently begun to play a major role in this subject, but it would be too much of a digression to discuss them here. See however my CDM lecture notes for more discussion.]
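For the record, the mechanism behind (**) can be quantified by the explicit solution formula (standard facts, recalled here only as a sketch): one has
$u(t,x) = e^{it\Delta} u_0(x) = \frac{1}{(4\pi i t)^{n/2}} \int_{{\Bbb R}^n} e^{i|x-y|^2/4t} u_0(y)\ dy$,
and hence the dispersive estimate
$\|u(t)\|_{L^\infty({\Bbb R}^n)} \leq \frac{1}{(4\pi |t|)^{n/2}} \|u_0\|_{L^1({\Bbb R}^n)}$,
so that on any compact set K one has $\int_K |u(t,x)|^2\ dx \leq |K| \, \|u(t)\|_{L^\infty}^2 \to 0$, which is (**), while the total mass is conserved by (*).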
To summarise so far, for the free linear Schrödinger equation all solutions are radiative or “pseudorandom”. Now let us generalise a little bit by throwing in a (time-independent) potential function $V: {\Bbb R}^n \to {\Bbb R}$, which for simplicity we shall also place in the Schwartz class, leading to the familiar linear Schrödinger equation $i \partial_t u + \Delta u = Vu$. This equation still has unique smooth global solutions, decaying at spatial infinity, for Schwartz data $u_0$, and still obeys the $L^2$ conservation law (*). What about the dispersion properties (**) or (***)? Here there is a potential obstruction to dispersion (or pseudorandomness), namely that of bound states. Indeed, if we can find a solution $\psi \in L^2({\Bbb R}^n)$ to the time-independent linear Schrödinger equation $-\Delta \psi + V\psi = -E \psi$ for some E > 0, then one easily verifies that the function $u(t,x) := e^{iEt} \psi(x)$ is a solution to the time-varying linear Schrödinger equation which refuses to disperse in the sense of (**) or (***); indeed, we have the opposite property that the $L^2$ density $|u(t,x)|^2$ is static in time. One can then use the principle of superposition to create some more non-dispersing solutions by adding several bound states together, or perhaps adding some bound states to some radiating states.
The famous RAGE theorem of Ruelle, Amrein-Georgescu, and Enss asserts, roughly speaking, that there are no further types of states, and that every state decomposes uniquely into a bound state and a radiating state. For instance, if a solution fails to obey the weak dispersion property (***), then it must necessarily have a non-zero correlation (inner product) with a bound state. (If instead it fails to obey the strong dispersion property (**), the situation is trickier, as there is unfortunately a third type of state, a “singular continuous spectrum” state, which one might correlate with.) More generally, an arbitrary solution will decompose uniquely into the sum of a radiating state obeying (***), and a unconditionally convergent linear combination of bound states. The proof of these facts largely rests on the spectral theorem for the underlying Hamiltonian $-\Delta + V$; the bound states correspond to pure point spectrum, the weak dispersion property (***) corresponds to continuous spectrum, and the strong dispersion property (**) corresponds to absolutely continuous spectrum. Thus the RAGE theorem gives a nice connection between dynamics and spectral theory.
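In symbols (one standard way to phrase the decomposition; this is only a sketch of the statement, with notation chosen here rather than taken from the references): writing $H = -\Delta + V$, so that the solution is $u(t) = e^{-itH} u_0$, one splits
$u_0 = \sum_j \langle u_0, \psi_j \rangle \psi_j + P_c u_0$,
where the $\psi_j$ are the $L^2$ bound states $-\Delta \psi_j + V \psi_j = -E_j \psi_j$ and $P_c$ is the spectral projection onto the continuous subspace, and hence
$u(t) = \sum_j e^{i E_j t} \langle u_0, \psi_j \rangle \psi_j + e^{-itH} P_c u_0$;
the second term obeys the weak dispersion (***), and obeys the strong dispersion (**) when the continuous spectrum is purely absolutely continuous.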
Let us now turn to nonlinear Schrödinger equations. There are a large number of such equations one could study, but let us restrict attention to a particularly intensively studied case, the cubic nonlinear Schrödinger equation (NLS)
$i u_t + \Delta u = \mu |u|^2 u$
where $\mu$ is either equal to +1 (the defocusing case) or -1 (the focusing case). (This particular equation arises often in physics as the leading approximation to a Taylor expansion to more complicated dispersive models, such as those for plasmas, mesons, or Bose-Einstein condensates.) We specify initial data $u(0,x) = u_0(x)$ as per usual, and to avoid technicalities we place this initial data in the Schwartz class. Unlike the linear case, it is no longer automatic that smooth solutions exist globally in time, although it is not too hard to at least establish local existence of smooth solutions. There are thus several basic questions:
1. (Global existence) Under what conditions do smooth solutions u to NLS exist globally in time?
2. (Asymptotic behaviour, global existence case) If there is global existence, what is the limiting behaviour of u(t) in the limit as t goes to infinity?
3. (Asymptotic behaviour, blowup case) If global existence fails, what is the limiting behaviour of u(t) in the limit as t approaches the maximal time of existence?
For reasons of time and space I will focus only on Questions 1 and 2, although question 3 is very interesting (and very difficult). The answer to these questions is rather complicated (and still unsolved in several cases), depending on the sign $\mu$ of the nonlinearity, the ambient dimension n, and the size of the initial data. Here are some sample results regarding Question 1 (most of which can be found for instance in Cazenave’s book, or my own):
• If n = 1, then one has global smooth solutions for arbitrarily large data and any choice of sign.
• For n=2,3,4, one has global smooth solutions for arbitrarily large data in the defocusing case (this is particularly tricky in the energy-critical case n=4), and for small data in the focusing case. For large data in the focusing case, finite time blowup is possible.
• For $n \geq 5$, one has global smooth solutions for small data with either sign. For large data in the focusing case, finite time blowup is possible. For large data in the defocusing case, the existence of global smooth solutions are unknown even for spherically symmetric data, indeed this problem, being supercritical, is of comparable difficulty to the Navier-Stokes global regularity problem.
Incidentally, the relevance of the sign $\mu$ can be seen by considering the conserved Hamiltonian
$H(u_0) = H(u(t)) := \int_{{\Bbb R}^n} \frac{1}{2} |\nabla u(t,x)|^2 + \mu \frac{1}{4} |u(t,x)|^4\ dx.$
In the defocusing case the Hamiltonian is positive definite and thus coercive; in the focusing case it is indefinite, though in low dimensions and in conjunction with the $L^2$ conservation law one can sometimes recover coercivity.
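To spell the coercivity out (a standard computation, included here only as a sketch): in the defocusing case $\mu = +1$ one has
$\frac{1}{2} \int_{{\Bbb R}^n} |\nabla u(t,x)|^2\ dx \leq H(u_0)$,
so together with the $L^2$ conservation law the solution stays bounded in $H^1$ for all time. In the focusing case $\mu = -1$ one instead tries to absorb the quartic term via the Gagliardo-Nirenberg inequality $\|u\|_{L^4}^4 \lesssim \|u\|_{L^2}^{4-n} \|\nabla u\|_{L^2}^{n}$, which recovers an $H^1$ bound for all data when n=1, and for data of sufficiently small mass (below that of the ground state) when n=2.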
Let us now assume henceforth that the solution exists globally (and, to make a technical assumption, also assume that the solution stays bounded in the energy space $H^1({\Bbb R}^n)$) and consider Question 2. As in the linear case, we can see two obvious possible asymptotic behaviours for the solution u(t). Firstly there is the dispersive or radiating scenario in which (**) or (***) occurs. (For technical reasons relating to Galilean invariance, we have to allow the compact set K to be translated in time by an arbitrary time-dependent displacement x(t), unless we make the assumption of spherical symmetry; but let us ignore this technicality.) This scenario is known to take place when the initial data is sufficiently small. (Indeed, it is conjectured to take place whenever the data is “strictly smaller” in some sense than that of the smallest non-trivial bound state, aka the ground state; there has been some recent progress on this conjecture by Kenig-Merle and by Holmer-Roudenko in the spherically symmetric case.) In dimensions n=1,3,4, this scenario is also known to be true for large data in the defocusing case (the case n=1 by inverse scattering considerations, the case n=3 by Morawetz inequalities, and the case n=4 by the work of Ryckman-Visan; the n=2 case is a major open problem.)
The opposite scenario is that of a nonlinear bound state $u(t,x) = e^{iEt} \psi(x)$, where E > 0 and $\psi$ solves the time-independent NLS $-\Delta \psi + \mu |\psi|^2 \psi = -E \psi$. From the Pohozaev identity or the Morawetz inequality one can show that non-trivial bound states only exist in the focusing case $\mu = -1$, and in this case one can construct such states, for instance by using the work of Berestycki and Lions. Solutions constructed using these nonlinear bound states are known as stationary solitons (or stationary solitary waves). By applying the Galilean invariance of the NLS equation one can also create travelling solitons. With some non-trivial effort one can also combine these solitons with radiation (as was done recently in three dimensions by Beceanu), and one should also be able to combine distant solitons with each other to form multisoliton solutions (this has been achieved in one dimension by inverse scattering methods, as well as for the gKdV equation which is similar in many ways to NLS.) Presumably one can also form solutions which are a superposition of multisolitons and radiation.
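For concreteness, in one dimension these objects can be written down in closed form (standard, up to normalisation conventions; included here only for illustration): for $\mu = -1$ and $n = 1$ the functions
$u(t,x) = e^{ik^2 t}\, \sqrt{2}\, k\, \mathrm{sech}(kx), \qquad k > 0,$
solve the focusing cubic NLS (they are nonlinear bound states with $E = k^2$), and applying the Galilean transformation $u(t,x) \mapsto e^{i(vx/2 - v^2 t/4)} u(t, x - vt)$ to them produces the travelling solitons just mentioned.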
The soliton resolution conjecture asserts that for “generic” choices of (arbitrarily large) initial data to an NLS with a global solution, the long-time behaviour of the solution should eventually resolve into a finite number of receding solitons (i.e. a multisoliton solution), plus a radiation component which decays in senses such as of (**) or (***). (For short times, all kinds of things could happen, such as soliton collisons, solitons fragmenting into radiation or into smaller solitons, etc., and indeed this sort of thing is observed numerically.) This conjecture (which is for instance discussed in the 2006 ICM proceedings article by Soffer, or in some of my own papers) is still far out of reach of current technology, except in the special one-dimensional case n=1 when the equation miraculously becomes completely integrable, and the solutions can be computed rather explicitly via inverse scattering methods, as was for instance carried out by Novoksenov. In that case the soliton resolution conjecture was indeed verified for generic data (in which the associated Lax operator had no repeated eigenvalues or resonances), however for exceptional data one could have a number of exotic solutions, such as a pair of solitons receding at a logarithmic rate from each other, or of periodic or quasiperiodic “breather solutions” which are not of soliton form.
Based on this one-dimensional model case, we expect the soliton resolution conjecture to hold in higher dimensions also, assuming sufficient uniform bounds on the global solution to prevent blowup or “weak turbulence” from causing difficulties. However, the fact that a good resolution into solitons is only expected for “generic” data rather than all data makes the conjecture extremely problematic, as almost all of our tools are based on a worst-case analysis and thus cannot obtain results that are only supposed to be true generically. (This is also a difficulty which seems to obstruct the global solvability of Navier-Stokes, as discussed in an earlier post.) Even in the spherically symmetric case, which should be much simpler (in particular, the solitons must now be stationary and centred at the origin), the problem is wide open.
Nevertheless, there is some recent work which gives a small amount of progress towards the soliton resolution conjecture. For spherically symmetric energy-bounded global solutions (of arbitrary size) to the focusing cubic NLS in three dimensions, it is a result of myself that the solution ultimately decouples into a radiating term obeying (**), plus a “weakly bound state” which is asymptotically orthogonal to all radiating states, is uniformly smooth, and exhibits a weak decay at spatial infinity. If one is willing to move to five and higher dimensions and to weaken the strength of the nonlinearity (e.g. to consider quadratic NLS in five dimensions) then a stronger result is available under similar hypotheses, namely that the weakly bound state is now almost periodic, ranging inside of a fixed compact subset of energy space, thus providing a “dispersive compact attractor” for this equation. In principle, this brings us back to the realm of dynamical systems, but we have almost no control on what this attractor is (though it contains all the soliton states and respects the symmetries of the equation), and so it is unclear what the next step should be. (There is a similar result in the non-radial case which is more complicated to state: see my paper for more details.)
## 10 comments
[...] For some more mathematics blogging of the highest possible quality, see Terry Tao’s postings on his Simons lectures at MIT, here, here and here. [...]
For parabolic PDE like the heat equation, can one construct entropies to describe these dispersions or flows? In your example, you define the ‘Dirichlet energy’ but is there an entropy that can be constructed and associated with the removal of the random components from the structured components that you described? I think in one of Perelman’s papers he discusses an entropy in connection with the Ricci flow, which seems analogous to heat flow. (Although the details of his papers are beyond my understanding.) A good example of an entropy in Lorentzian (spacetime) geometry would be that associated with black holes and black hole mechanics (leading to Hawking radiation etc.) where an entropy is naturally associated with the area of the horizon, which is a purely geometrical entity.
The algebraic similarity of the heat/diffusion and Schrodinger equation is also quite interesting I think. On a historical note, Schrodinger in 1931 originally considered diffusion processes or Brownian motions, described by the parabolic diffusion/heat equation, as a basis for quantum mechanics. He attempted to formulate Brownian motions in a symmetric form of time reversal, which (supposedly) helped motivate Kolmogorov to pursue some of his own (now classic) work on stochastic processes. But, as you mention, the diffusion/heat equation is not time symmetric whereas quantum theory is, but Schrodinger still felt that quantum theory must be some kind of diffusion or stochastic theory. Feynman’s path integral or sum over paths does also seem to have a very natural interpretation as a sum over mathematical Brownian motions if you make a Wick rotation. I don’t know what the status of this stochastic interpretation and connection with quantum mechanics is these days though.
Dear stevenm,
That is a good question, and I do not know the answer, though given that entropy and the heat equation both show up in thermodynamics, one would expect there to be some sort of connection; presumably it should be known to experts in probability. Certainly the heat flow has many interesting monotonicity properties, especially for non-negative solutions (which are the ones which have a physical Brownian motion interpretation), and I would not be surprised that an entropy-like expression has some nice monotonicity formulae. Perhaps one has to somehow lift the problem to very high dimension (e.g. taking repeated tensor products of the heat flow solution with itself) to see the connection more clearly, as entropy tends to be associated with various asymptotic statistics in the infinite-dimensional limit.
Perelman did introduce an entropy to study Ricci flow (indeed he interpreted Ricci flow as a gradient flow for this entropy), and Hamilton had earlier introduced another entropy-like quantity $-\int_M R \log R$ which also enjoyed a monotonicity property under certain conditions.
It does seem helpful to think of quantum mechanics as a kind of complexified (i.e. Wick rotated) version of classical Brownian motion, in which paths which increase the Hamiltonian are phase rotated rather than damped. There are also some formal similarities between manipulations of bras and kets, and the laws of conditional probability. But as I said before, these analogies seem well suited for understanding the algebraic structure of quantum mechanics, but not its analytic structure.
Actually, now that I think about it, I do know one explicit link between entropy and heat flow: one can interpret the Fisher information of a random variable as the rate of change of the Shannon entropy via heat flow (or more precisely, the Ornstein-Uhlenbeck process, which is basically heat flow renormalised by scaling). I believe this is a useful fact in probability theory, particularly in showing that gaussians are extremisers of various probabilistic quantities, but it is sort of outside my area of expertise.
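Concretely, this is de Bruijn's identity: if $Z$ is a standard Gaussian independent of $X$ and $Y_t = X + \sqrt{t}\,Z$ (so that the density of $Y_t$ evolves by heat flow in $t$), then $\frac{d}{dt} H(Y_t) = \frac{1}{2} J(Y_t)$, where $H$ denotes differential (Shannon) entropy and $J$ the Fisher information.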
[...] Simons Lecture III: Structure and randomness in PDE; [...]
[...] nonlinear Schrödinger equations (NLS). This conjecture (which I also discussed in my third Simons lecture) asserts, roughly speaking, that any reasonable (e.g. bounded energy) solution to such equations [...]
7 May, 2009 at 1:13 am
Student
On “the cubic nonlinear Schrödinger equation (NLS)
$i u_t + \Delta u = \mu |u|^2 u$
where $\mu$ is either equal to +1 (the defocusing case) or -1 (the focusing case).”
I am a novice in Physics. What is the physics reason behind the term focusing and defocusing depending on these signs?
I am also curious to know what you mean by
“weak turbulence.”
Thank you.
Dear Student,
One can rewrite the NLS as $iu_t = (-\Delta + V) u$, where V is the time-dependent potential $V = \mu |u|^2$, thus the non-linearity can be viewed as a potential energy component of the Schrodinger operator. A negative value of $\mu$ corresponds to a negative $V$, i.e. a potential well, which in the linear theory of the Schrodinger equation is known to have an attractive effect. Since V is concentrated where the solution is large, this attractive effect should act to focus the solution. Conversely, a positive value of $\mu$ leads to a repulsive potential concentrated at where the solution is large, leading to a defocusing effect.
Weak turbulence is not a term with a commonly agreed upon rigorous definition, but for me it refers to the tendency of solutions of certain evolution equations to shift their energy from low frequencies to high frequencies over extended periods of time, which in particular should cause higher Sobolev norms (e.g. the $H^2$ norm) to grow polynomially in time. In contrast, the turbulent behaviour of equations such as Navier-Stokes results in a more rapid movement of energy from low frequencies to high frequencies, occurring in a bounded amount of time rather than an asymptotic amount, although after that period of turbulence, the dissipative effects tend to kick in and then remove the high-frequency energy from the system. (Establishing this latter fact rigorously is, of course, the \$1 million dollar question.)
[...] in PDE, does have the same algorithm-terminating flavour as the combinatorial arguments; see this earlier blog post for more [...]
19 May, 2010 at 8:57 pm
Steffen
This was too much of a coincidence for me not to comment:
It’s interesting that both the heat equation and the Poincaré conjecture made it into the news in the same week that I happened to be browsing this lecture!
http://www.sciencedaily.com/releases/2010/05/100513162755.htm
http://www.claymath.org/poincare/
Keep up the good work! I really enjoy reading your posts!
http://math.stackexchange.com/questions/97980/an-inequality-involving-binomial-coefficients/97983 | # An inequality involving binomial coefficients
For $k\leq n$, how do I prove that ${2n \choose n}(1-\frac{k}{n})^{k}\leq{2n \choose n+k}$?
-
## 1 Answer
Hint: If $0\lt a\leqslant b$, then $\dfrac{a}b\leqslant\dfrac{a+1}{b+1}$.
Application: The hint yields $\dfrac{n-k}n\leqslant\dfrac{n-k+i}{n+i}$ for every $i\geqslant0$. Multiplying these inequalities for $1\leqslant i\leqslant k$, one gets $$\left(\frac{n-k}n\right)^k\leqslant\frac{n-k+1}{n+1}\frac{n-k+2}{n+2}\cdots\frac{n}{n+k}=\frac{n!\,n!}{(n-k)!(n+k)!}.$$ Hence, $${2n\choose n}\left(\frac{n-k}n\right)^k\leqslant\frac{(2n)!}{n!\,n!}\,\frac{n!\,n!}{(n-k)!(n+k)!}=\frac{(2n)!}{(n-k)!(n+k)!}={2n\choose n+k}.$$
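As a quick sanity check of the inequality itself (not part of the proof), one can verify it in exact arithmetic over a small range of $n$ and $k$; a minimal Python sketch:

```python
from fractions import Fraction
from math import comb

# Exact-arithmetic check of C(2n, n) * (1 - k/n)^k <= C(2n, n + k)
# over a small range of n and k (requires Python 3.8+ for math.comb).
for n in range(1, 31):
    for k in range(n + 1):
        lhs = comb(2 * n, n) * Fraction(n - k, n) ** k
        rhs = comb(2 * n, n + k)
        assert lhs <= rhs, (n, k)
print("verified for all 1 <= n <= 30, 0 <= k <= n")
```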
-
Thank you, Didier. – bscptn Jan 11 '12 at 2:20
http://mathoverflow.net/questions/109426?sort=votes | ## On the representation of a (real) square matrix as a product of two symmetric matrices
(For this question, all matrices are real).
According to the ancient paper http://www.springerlink.com/content/l455p582210k1113/ (which I cannot really read fully, since it is in German), any square matrix can be written as a product of two symmetric matrices (one of which is non-singular). If we strengthen the conditions so that one of the factors must be positive-(semi)definite, what can we say? Is there any way of characterizing the square matrices which can be written as a product of a symmetric matrix and a symmetric positive-semidefinite matrix?
If $A$ is symmetric positive-definite and $B$ is symmetric, then the product $AB$ is similar to a symmetric matrix, so it has real eigenvalues. So if every square matrix could be written in this way, every square matrix would have real eigenvalues, which is absurd. So there must be some restriction.
And, any more recent references for this problem?
-
## 2 Answers
The only restriction is to be diagonalizable with real eigenvalues. For if $M=PDP^{-1}$ with $P,D$ real and $D$ diagonal, then $M=AB$ with $B=P^{-T}DP^{-1}$ symmetric and $A=PP^T$ PSD (in fact positive definite, since $P$ is invertible). And conversely, such a product is similar to the symmetric matrix $A^{1/2}BA^{1/2}$, hence is diagonalizable with real eigenvalues.
About the fact that every square real matrix is the product of two Hermitian matrices (complex counterpart of what you mention), see http://mathoverflow.net/questions/60174 .
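A small numerical sketch of this construction (illustrative only; the matrix below is an arbitrary random example):

```python
import numpy as np

# If M = P D P^{-1} with D real diagonal, then M = A B with
# A = P P^T (symmetric, positive definite for invertible P) and
# B = P^{-T} D P^{-1} (symmetric), as in the answer above.
rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4))        # generically invertible
D = np.diag(rng.standard_normal(4))    # real eigenvalues
M = P @ D @ np.linalg.inv(P)

A = P @ P.T
B = np.linalg.inv(P).T @ D @ np.linalg.inv(P)

assert np.allclose(A, A.T) and np.allclose(B, B.T)
assert np.allclose(A @ B, M)
print("M = (positive definite) x (symmetric) verified numerically")
```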
-
Not quite: see my response for a non-diagonalizable example. – Robert Israel Oct 12 at 6:10
I don't have an answer, but slightly more restrictions.
If $A$ is symmetric positive semidefinite and $B$ is symmetric, $AB = A^{1/2} A^{1/2} B$ has the same characteristic polynomial as the symmetric matrix $C = A^{1/2} B A^{1/2}$, and in particular has real eigenvalues. If $u$ is an eigenvector of $C$ for nonzero eigenvalue $\lambda$ then $A^{1/2} u \ne 0$ and $AB A^{1/2} u = \lambda A^{1/2} u$, so $A^{1/2}$ maps the eigenspaces of $C$ for nonzero eigenvalues to the corresponding eigenspaces of $AB$. In particular, the geometric and algebraic multiplicities of a nonzero eigenvalue of $AB$ must be equal. However, this may not be the case for eigenvalue $0$, e.g. $$\pmatrix{0 & 1\cr 0 & 0\cr} = \pmatrix{1 & 0\cr 0 & 0\cr} \pmatrix{0 & 1\cr 1 & 0\cr}$$
EDIT:
On the other hand, $AB$ can't have a Jordan block of size greater than $2$ for eigenvalue $0$. In fact, suppose $(AB)^3 v = 0$.
Since $\text{Ker}(A) = \text{Ker}(A^{1/2})$, this says $A^{1/2} B A B A B v = (A^{1/2} B A^{1/2})^2 A^{1/2} B v = 0$, and since $\text{Ker}((A^{1/2} B A^{1/2})^2) = \text{Ker}(A^{1/2} B A^{1/2})$ we have $(A^{1/2} B A^{1/2})A^{1/2} B v = A^{1/2} B A B v = 0$ and thus $(AB)^2 v = 0$.
Since being of the form $AB$ with $A$ and $B$ symmetric and $A$ positive definite is a similarity invariant, i.e. for any matrix $AB$ of this form and any invertible $S$, $SABS^{-1} = (SAS^T)((S^{-1})^T B S^{-1})$, we see that a necessary and sufficient condition is that the eigenvalues are real and the only Jordan blocks of size greater than $1$ are $\pmatrix{0 & 1\cr 0 & 0\cr}$.
-
Robert, your example does not contradict my answer, because your factor is only semi-positive definite, not definite. – Denis Serre Oct 12 at 7:59
The original question seems to ask about positive semi-definite. Also you mention PSD in your answer. – Robert Israel Oct 12 at 16:01
http://terrytao.wordpress.com/tag/nonlinear-bound-states/ | What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Tag Archive
You are currently browsing the tag archive for the ‘nonlinear bound states’ tag.
## A global compact attractor for high-dimensional defocusing non-linear Schrödinger equations with potential
13 May, 2008 in math.AP, paper | Tags: compact attractor, nonlinear bound states, nonlinear Schrodinger equation, soliton resolution conjecture | by Terence Tao | 1 comment
I’ve just uploaded to the arXiv my paper “A global compact attractor for high-dimensional defocusing non-linear Schrödinger equations with potential“, submitted to Dynamics of PDE. This paper continues some earlier work of myself in an attempt to understand the soliton resolution conjecture for various nonlinear dispersive equations, and in particular, nonlinear Schrödinger equations (NLS). This conjecture (which I also discussed in my third Simons lecture) asserts, roughly speaking, that any reasonable (e.g. bounded energy) solution to such equations eventually resolves into a superposition of a radiation component (which behaves like a solution to the linear Schrödinger equation) plus a finite number of “nonlinear bound states” or “solitons”. This conjecture is known in many perturbative cases (when the solution is close to a special solution, such as the vacuum state or a ground state) as well as in defocusing cases (in which no non-trivial bound states or solitons exist), but is still almost completely open in non-perturbative situations (in which the solution is large and not close to a special solution) which contain at least one bound state. In my earlier papers, I was able to show that for certain NLS models in sufficiently high dimension, one could at least say that such solutions resolved into a radiation term plus a finite number of “weakly bound” states whose evolution was essentially almost periodic (or almost periodic modulo translation symmetries). These bound states also enjoyed various additional decay and regularity properties. As a consequence of this, in five and higher dimensions (and for reasonable nonlinearities), and assuming spherical symmetry, I showed that there was a (local) compact attractor $K_E$ for the flow: any solution with energy bounded by some given level E would eventually decouple into a radiation term, plus a state which converged to this compact attractor $K_E$. In that result, I did not rule out the possibility that this attractor depended on the energy E. Indeed, it is conceivable for many models that there exist nonlinear bound states of arbitrarily high energy, which would mean that $K_E$ must increase in size as E increases to accommodate these states. (I discuss these results in a recent talk of mine.)
In my new paper, following a suggestion of Michael Weinstein, I consider the NLS equation
$i u_t + \Delta u = |u|^{p-1} u + Vu$
where $u: {\Bbb R} \times {\Bbb R}^d \to {\Bbb C}$ is the solution, and $V \in C^\infty_0({\Bbb R}^d)$ is a smooth compactly supported real potential. We make the standard assumption $1 + \frac{4}{d} < p < 1 + \frac{4}{d-2}$ (which is asserting that the nonlinearity is mass-supercritical and energy-subcritical). In the absence of this potential (i.e. when V=0), this is the defocusing nonlinear Schrödinger equation, which is known to have no bound states, and in fact it is known in this case that all finite energy solutions eventually scatter into a radiation state (which asymptotically resembles a solution to the linear Schrödinger equation). However, once one adds a potential (particularly one which is large and negative), both linear bound states (solutions to the linear eigenstate equation $(-\Delta + V) Q = -E Q$) and nonlinear bound states (solutions to the nonlinear eigenstate equation $(-\Delta+V)Q = -EQ - |Q|^{p-1} Q$) can appear. Thus in this case the soliton resolution conjecture predicts that solutions should resolve into a scattering state (that behaves as if the potential was not present), plus a finite number of (nonlinear) bound states. There is a fair amount of work towards this conjecture for this model in perturbative cases (when the energy is small), but the case of large energy solutions is still open.
In my new paper, I consider the large energy case, assuming spherical symmetry. For technical reasons, I also need to assume very high dimension $d \geq 11$. The main result is the existence of a global compact attractor K: every finite energy solution, no matter how large, eventually resolves into a scattering state and a state which converges to K. In particular, since K is bounded, all but a bounded amount of energy will be radiated off to infinity. Another corollary of this result is that the space of all nonlinear bound states for this model is compact. Intuitively, the point is that when the solution gets very large, the defocusing nonlinearity dominates any attractive aspects of the potential V, and so the solution will disperse in this case; thus one expects the only bound states to be bounded. The spherical symmetry assumption also restricts the bound states to lie near the origin, thus yielding the compactness. (It is also conceivable that the localised nature of V also restricts bound states to lie near the origin, even without the help of spherical symmetry, but I was not able to establish this rigorously.)
http://physics.stackexchange.com/questions/13100/why-must-gluinos-be-spin-1-2-instead-of-3-2?answertab=votes | # Why must gluinos be spin 1/2 instead of 3/2?
Is there some condition in the N=1 SUSY algebra telling us that the spin of the superpartners of gauge bosons (either for colour or for electroweak) must be less than the spin of the gauge boson? I am particularly puzzled because sometimes a supermultiplet obtained from sugra contains one spin 2 particle, four spin 3/2 particles, and then some of spin 1, 1/2 and 0. If this supermultiplet is to be broken to N=1, it seems clear that the graviton will pair with the gravitino and the rest of the spin 3/2 fields should pair with spin 1, so in this case it seems that a superpair (3/2, 1) is feasible. Why not in gauge supermultiplets?
-
If my (very superficial) memory serves there is no problem with these multiplets from the algebraic point of view but they cause problems for other reasons, like breaking some gauge symmetric stuff, causing anomalies, etc. Good question, I hope someone more knowledgeable will answer it. – Marek Aug 2 '11 at 13:01
– luksen Aug 2 '11 at 14:53
"The next-simplest possibility for a supermultiplet contains a spin-1 vector boson. If the theory is to be renormalizable, this must be a gauge boson that is massless, at least before the gauge symmetry is spontaneously broken. A massless spin-1 boson has two helicity states, so the number of bosonic degrees of freedom is nB = 2. Its superpartner is therefore a massless spin-1/2 Weyl fermion, again with two helicity states, so nF = 2. (If one tried to use a massless spin-3/2 fermion instead, the theory would not be renormalizable.)" – luksen Aug 2 '11 at 14:53
@luksen that could be the germ of an answer; is it just power counting and superficial divergence? The OP was for gluinos, but, what happens if we look for a massive supermultiplet for Z or W, can it have some combo of 3/2 and 1/2? And, is there some argument beyond renormalizability? Of course we do not ask renormalizability of sugra. – arivero Aug 2 '11 at 19:01
– arivero Aug 3 '11 at 10:24
## 1 Answer
Fermionic spin 3/2 fields, much like bosonic fields with spin 1 and higher, contain negative-norm polarizations. Roughly speaking, a spin 3/2 field is $R_{\mu a}$ where $\mu$ is a vector index and $a$ is a spinor index. If $\mu$ is chosen to be 0, the timelike direction, one gets components of the spintensor $R$ that creates negative-norm excitations.
This is not allowed to be a part of the physical spectrum because probabilities can't be negative. It follows that there must be a gauge symmetry that removes the $R_{0a}$ components - a spinor of them. The generator of this symmetry clearly has to transform as a spinor, too. There must be a spinor worth of gauge symmetry generators. The generators are fermionic because the original field $R$ is also fermionic, by the spin-statistics relation.
It follows that the conserved spinor generators are local supersymmetry generators and their anticommutator inevitably includes a vector-like bosonic symmetry which has to be the energy-momentum density. This completes the proof that in any consistent theory, spin 3/2 fields have to be gravitinos. The number of "minimal spinors" - the size of the gravitinos - has to be equal to the number of supercharges spinors which counts how much the local supersymmetry algebra is extended. In particular, it can't be linked to another quantity such as the dimension of a Yang-Mills group.
So while both 3/2 and 1/2 differ by 1/2 from $j=1$, more detailed physical considerations show that it is inevitable for the superpartner of a gauge boson, a gaugino, to have spin equal to 1/2 and not 3/2. Similarly, one can show that the superpartner of the graviton can't have spin 5/2 because that would require too many conserved spin-3/2 fermionic generators which would make the S-matrix essentially trivial, in analogy with the Coleman-Mandula theorem. Gravitinos can only have spin 3/2, not 5/2.
-
But, what does it happen when we break susy down to, say, N=1? The number of spin 3/2 fields is still the same, but only one generator survives. – arivero Aug 3 '11 at 11:05
Dear Alejandro, I am afraid that you have totally misunderstood or ignored my answer. The spin 3/2 fields are gravitino fields, so their number is the $N$ measuring the extended supersymmetry. Spontaneous breaking of SUSY has no impact on $N$; instead, SUSY breaking may make the gravitinos massive (much like electroweak SUSY breaking makes W+Z gauge bosons massive) but they're still spin 3/2 fields and they're still gravitinos. It's wrong to say that SUSY or other symmetry generators "don't survive" spont. SUSY breaking. They do: they just don't annihilate the vacuum. – Luboš Motl Aug 4 '11 at 6:23
In fact I am almost fully happy with your answer; I was just waiting for some alternative attempt to it, before to mark it as answered. As for susy breaking, perhaps it is my misunderstanding, but I thought that the same generator that links the spin 2 graviton to a given 3/2 gravitino, will link the other 3/2 gravitinos to spin 1 particles, and that this will still be true after the breaking. It can be a trivial thing, but I think it was relevant to the answer, so I was pushing about it. – arivero Aug 5 '11 at 14:23
http://mathoverflow.net/questions/89752?sort=votes | ## Ax=0, estimate min(Hamming(x)) ? Equivalently: Bipartite graph. How to find (estimate) minimal number of vertices1 which are connected with EVEN number of vertices2 ? Equivalently: estimate minimal weight of error correcting code ?
Consider a system of linear equations $Ax=0$ over $F_2$ (the field with two elements $\{0,1\}$), where the number of variables is bigger than the number of equations, so we have many solutions $x$.
Question: How can one estimate the minimal Hamming weight of $x$ ($x\ne 0$)? (I.e., the minimal number of $1$'s in a vector $x$ such that $Ax=0$.)
## Equivalently
Consider a bipartite graph with vertices of two types, 1 and 2. How can one estimate the minimal size of a subset $A$ of the type-1 vertices such that every vertex $V$ of type 2 is joined to an EVEN number of vertices of $A$?
The equivalence can be seen like this: take a matrix $A$ of size $n\times m$ over $F_2$ and a bipartite graph with $n$ and $m$ vertices of types 1 and 2. Connect vertex $i$ to vertex $j$ if $A_{ij}=1$.
It is an exercise to see the equivalence.
## Equivalently
Take $A$ as the parity-check matrix of a linear block code, i.e. the code is exactly the subspace $\{x : Ax=0\}$. A code is good when the Hamming distance between codewords is big. We have the codeword $x = 0$, so the minimal Hamming weight of a non-zero $x$ measures the "quality" of the code.
Comment. Let $\dim(\ker(A))=k$; any linear map $F_2^k \to \ker(A)$ is called an "encoder".
## [EDIT] As "quid" answered, these questions are NP-hard. So:
a) What approximation algorithms are used for these questions?
b) If the corresponding bipartite graph is a tree, is the problem still hard? (From the coding-theory point of view this is a very simple, degenerate case giving bad codes.) More generally, can we control the complexity somehow if the matrix is of a special form (e.g. sparse) or the graph is a tree or tree-like? In what terms might we hope to control the complexity?
[END EDIT]
-
## 2 Answers
It is known that to determine the minimal weight of a nonzero code word (i.e., the minimum distance of the code) is a hard problem.
Here is a part of the abstract of a paper by Vardy (The intractability of computing the minimum distance of a code, IEEE Transactions on Information Theory, 1997):
It is shown that the problem of computing the minimum distance of a binary linear code is NP-hard, and the corresponding decision problem is NP-complete. This result constitutes a proof of the conjecture of Berlekamp, McEliece, and van Tilborg (1978).
However, you ask for an estimation not the exact value. Now, of course it depends what precisely this should mean. Yet, in certain senses the problem stays hard. From the abstract of a paper by Dumer, Micciancio, Sudan (Hardness of Approximating the Minimum Distance of a Linear Code)
We show that the minimum distance d of a linear code is not approximable to within any constant factor in random polynomial time (RP), unless NP (nondeterministic polynomial time) equals RP.
In addition, on Sudan's webpage there are slides from a talk on this, see http://people.csail.mit.edu/madhu/talks.html and towards the very end you find 'Hardness of approximating the minimum distance of a linear code'.
ps. I am not certain that this type of answer is what you were looking for, but since your question is not very specific I had to guess a bit.
-
@quid Thank you very much ! – Alexander Chervov Feb 28 2012 at 13:15
In general, as dear Martin and quid answered, it is a very hard problem. In some cases the structure of an LDPC code can be helpful, for example if the code is regular or derived from some well-known structures such as finite geometries or special groups and matrices. But in my work I wrote a probabilistic algorithm that works very well. Also, some theorems in game theory are useful for obtaining good bounds; I think Professor Shokrollahi has an article about game theory and the minimum distance of codes. – Shahrooz Feb 28 2012 at 21:47
Seeing as you mentioned the connection to bipartite graphs I think you might be interested in low density parity check (LDPC) codes.
One perspective on this is that minimum distance isn't that big a deal for LDPC codes. The fact that you are using an iterative message passing decoder means that the structure of the graph (girth + more complicated configurations: stopping sets/trapping sets/pseudocodewords) is more important for estimating the performance of the code under belief propagation decoding. Having a few codewords of small weight might not be a problem whereas having many structures that aren't codewords but that confuse your decoder will be.
Still, if you are designing a code you might like to not have small minimum distance. A google search for "estimating minimum distance of ldpc code" brings up a number of likely looking results.
One idea due to Xiao-Yu Hu that comes with software and a paper is on Mackay's webpage (scroll down to "Source code for approximating the MinDist problem of LDPC codes"). This uses the "approximately-nearest-codewords search" where the zero codeword has errors added and is then decoded using belief propagation. This hopefully will sometimes decode to codewords which are close to the zero codeword thus giving you an upper bound on the minimum distance.
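For very small instances one can also compute the minimum distance exactly by brute force, simply enumerating $\ker A$ over $F_2$; the sketch below (exponential in the number of columns, so a toy baseline only) uses the standard parity-check matrix of the $[7,4]$ Hamming code, whose minimum distance is 3.

```python
import itertools
import numpy as np

# Brute-force minimum distance of the binary code {x : Ax = 0 over F_2}.
# Exponential in the number of columns -- only for toy-sized examples.
def min_weight(A):
    A = np.array(A) % 2
    n = A.shape[1]
    best = None
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        if x.any() and not ((A @ x) % 2).any():
            w = int(x.sum())
            best = w if best is None else min(best, w)
    return best

# Parity-check matrix of the [7,4] Hamming code (columns are 1..7 in binary).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(min_weight(H))  # expected: 3
```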
-
@Martin Thank you very much ! – Alexander Chervov Feb 28 2012 at 19:01
http://mathhelpforum.com/calculus/137635-power-series-representation-function.html | # Thread:
1. ## Power series representation for the function
Find a power series representation for the function and determine the interval of convergence
2. Originally Posted by racewithferrari
Find a power series representation for the function and determine the interval of convergence
Let $z=x^4$ then we have $\frac{3}{1-z}$. Look familiar?
3. Originally Posted by Drexel28
Let $z=x^4$ then we have $\frac{3}{1-z}$. Look familiar?
Sorry, I didn't understand what you said.
What will the answer be in terms of the $\sum$ sign?
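To spell out the hint (assuming, as the hint suggests, that the function in the attachment is $\frac{3}{1-x^4}$): by the geometric series, $$\frac{3}{1-x^4}=3\sum_{n=0}^{\infty}x^{4n}=3+3x^4+3x^8+\cdots,$$ which converges exactly when $|x^4|<1$, so the interval of convergence is $(-1,1)$.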
http://math.stackexchange.com/questions/250713/the-symmetric-difference-of-two-recursive-recursively-enumerable-sets-is-recur | # The symmetric difference of two recursive (recursively enumerable) sets is recursive (recursively enumerable)
I want to prove it, but don't know how... (I've tried to resolve complement by defining characteristic function like this: $\chi_{\bar A} = 1 - \chi_A$) Any ideas please? :-)
-
Why do you think it's true in the first place? – Carl Mummert Dec 4 '12 at 14:22
Actually I don't know if it is true, I just don't know how to start with proving/disproving ... – kolage Dec 4 '12 at 14:24
"r. (r.e.)"?${}$ – David Mitra Dec 4 '12 at 14:24
The answer is not the same in those two cases. – Henning Makholm Dec 4 '12 at 14:38
If the claim were true of two recursively enumerable sets, then every r.e. set would be recursive. – hardmath Dec 4 '12 at 15:17
## 2 Answers
If we denote the symmetric difference operator by $\bigtriangleup$, then $$S_1\bigtriangleup S_2=(S_1\cup S_2)\cap(\overline{S_1}\cup \overline{S_2})$$ Now you can use the facts that
• Recursive sets are closed under union, intersection, and complement.
• r.e. sets are closed under union and intersection, but not under complement.
This should get you started.
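In particular, the symmetric difference of two recursive sets is recursive. For r.e. sets the analogous claim fails: for example, taking $S_2=\mathbb{N}$ (which is r.e.) gives $S_1\bigtriangleup S_2=\overline{S_1}$, which is not r.e. whenever $S_1$ is r.e. but not recursive.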
-
The first thing to do is to make an estimate of where in the arithmetical hierarchy the symmetric difference of two r.e. sets will be.
If it looks like the symmetric difference will be $\Sigma^0_1$, then try to prove it is r.e.; if it looks like the symmetric difference is higher than $\Sigma^0_1$ then try to find an example where the symmetric difference is not r.e.
Then do the same thing but assume the sets are computable ($\Delta^0_1$) instead of r.e. ($\Sigma^0_1$).
-
http://math.stackexchange.com/questions/74206/ray-class-group?answertab=votes | # Ray class group
Can someone please go through a proof of the fact that the ray class group of a number field is finite?
I just can't find a nice readable elementary one on the internet...
Thanks in advance.
-
Just to put it out there... have you considered, I don't know, a book and/or a library? There is a world outside the internet, after all. – Arturo Magidin Oct 20 '11 at 4:56
– Dylan Moreland Oct 20 '11 at 5:10
@Arturo: I didn't figure my question was so cryptic; yes, I was asking for a source recommendation (of course, books fit into that description). – Anna Oct 20 '11 at 5:41
@Dylan: Thanks! Didn't know of Milne's CFT book although at the moment I'm reading his ANT :-) – Anna Oct 20 '11 at 5:42
@Anna: Ah, you were asking for references... That would be a different kettle of fish, indeed. But what you actually wrote is "can someone please go through a proof..." Looks like you were asking for someone to present and walk you through a proof, not to direct you to a source where you can read one. – Arturo Magidin Oct 20 '11 at 5:45
## 1 Answer
The ray class group of conductor $\mathfrak m$ is defined to be the group of fractional ideals prime to $\mathfrak m$, modulo those fractional ideals which are principal, with a generator congruent to $1 \bmod \mathfrak m$. (Here, if a real place is contained in $\mathfrak m$, "congruent to $1$ at such a place" is interpreted as "positive at this place".) In symbols, write this as $I_{\mathfrak m}/P_{\mathfrak m}^1$.
It is easy to see that any ideal class admits a representative that is coprime to $\mathfrak m$, and so the ray class group surjects onto the ideal class group. In symbols, write $P_{\mathfrak m}$ for the principal ideals in $I_{\mathfrak m}$; then we have the short exact sequence $$1 \to P_{\mathfrak m}/P^1_{\mathfrak m} \to I_{\mathfrak m} /P^1_{\mathfrak m} \to I_{\mathfrak m}/P_{\mathfrak m} \to 1.$$ The middle term is the ray class group, and, as noted, it surjects onto the class group, which is the third term.
Since the class group is finite, to show that the ray class group is finite, we just have to show that the first term $P_{\mathfrak m}/P^1_{\mathfrak m}$ is finite. Luckily, this kernel is easy to compute:
If $\mathfrak a$ is a principal ideal, we can map it to its principal generator $a$. If $a$ is $1$ mod $\mathfrak m$, then $\mathfrak a$ lies in $P^1_{\mathfrak m}$, and so we apparently get an identification between $P_{\mathfrak m}/P^1_{\mathfrak m}$ and $(\mathcal O/\mathfrak m)^{\times}.$ Actually, this is not quite true, because $a$ is not well-defined; it is only determined up to multiplication by a unit. Thus actually we get an isomorphism $$P_{\mathfrak m}/P^1_{\mathfrak m} \cong (\mathcal O/\mathfrak m)^{\times}/(\text{reduction mod }\mathfrak m \text{ of } \mathcal O^{\times}).$$ In any event, this is finite, since it is a quotient of the finite group $(\mathcal O/\mathfrak m)^{\times}$.
[Note: if $\mathfrak m$ contains some real places $v$, then we interpret $\mathcal O/\mathfrak m$ as denoting the product of copies of $\{\pm 1\}$ (one for each real place in $\mathfrak m$) and the usual quotient $\mathcal O/\mathfrak m'$, where $\mathfrak m'$ is the finite part of $\mathfrak m$. And of course, we interpret reduction modulo $\mathfrak m$ as being the product of the maps with attach the sign with respect to $v$ as an element in $\{\pm 1\}$, for each real place $v$ dividing $\mathfrak m$, and as the usual reduction mod $\mathfrak m'$ for the finite part $\mathfrak m'$ of $\mathfrak m$.]
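As a quick sanity check of this description (an illustrative example, not part of the original answer): for $K=\mathbb{Q}$ and $\mathfrak m=(m)\infty$, the class group is trivial and $\mathcal O^{\times}=\{\pm 1\}$, so the formula gives a ray class group isomorphic to $(\mathbb{Z}/m\mathbb{Z})^{\times}$, matching the classical fact that the corresponding ray class field is $\mathbb{Q}(\zeta_m)$.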
-
http://mathhelpforum.com/discrete-math/189114-small-o-theta-questions.html | Thread:
1. Small-o and Theta questions
I have these questions from a book:
show that if f(n)=theta(log2 n) then f(n)=theta (log10 n)
show that if f(n)=o(g(n)) and g(n)=o(f(n)) then f(n)=theta(g(n))
how to prove? can anyone help me with it?
2. Re: please help me with these questions
The first proposition, do you mean:
$f(n)=\Theta(\log_{2}n) \Rightarrow f(n)=\Theta(\log_{10}n)$
?
3. Re: please help me with these questions
To show the first statement, write the definition of $\Theta$ and note that $\log_2n=\log_210\log_{10}n$.
The second statement is a little odd because I believe that if f(n) = o(g(n)) and g(n) = o(f(n)), then f and g must be 0 from some point. However, the fact that g(n) = o(f(n)) implies that g(n) = O(f(n)), and this is almost the same as $f(n)=\Theta(g(n))$ (up to absolute value).
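Spelled out for the first statement: if $c_1\log_2 n\le f(n)\le c_2\log_2 n$ for all large $n$, then since $\log_2 n=\log_2 10\cdot\log_{10} n$ we get $(c_1\log_2 10)\log_{10} n\le f(n)\le (c_2\log_2 10)\log_{10} n$, so $f(n)=\Theta(\log_{10} n)$ with the constants rescaled by $\log_2 10$.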
4. Re: Small-o and Theta questions
The first proposition, do you mean:
$f(n)=\Theta(\log_{2}n) \Rightarrow f(n)=\Theta(\log_{10}n)$
?
yes, that's what i meant
To show the first statement, write the definition of $\Theta$ and note that $\log_2n=\log_210\log_{10}n$.
The second statement is a little odd because I believe that if f(n) = o(g(n)) and g(n) = o(f(n)), then f and g must be 0 from some point. However, the fact that g(n) = o(f(n)) implies that g(n) = O(f(n)), and this is almost the same as $f(n)=\Theta(g(n))$ (up to absolute value).
thanks for help
http://en.wikipedia.org/wiki/Newtons_law_of_cooling | # Convective heat transfer
(Redirected from Newtons law of cooling)
See also: Heat transfer and convection
Simulation of thermal convection. Red hues designate hot areas, while regions with blue hues are cold. A hot, less-dense lower boundary layer sends plumes of hot material upwards, and likewise, cold material from the top moves downwards. This illustration is taken from a model of convection in the Earth's mantle.
Convective heat transfer, often referred to simply as convection, is the transfer of heat from one place to another by the movement of fluids. Convection is usually the dominant form of heat transfer in liquids and gases. Although often discussed as a distinct method of heat transfer, convective heat transfer involves the combined processes of conduction (heat diffusion) and advection (heat transfer by bulk fluid flow).
The term convection can refer to transfer of heat with any fluid movement, but advection is the more precise term for the transfer due only to bulk fluid flow. The process of transfer of heat from a solid to a fluid, or the reverse, requires not only transfer of heat by bulk motion of the fluid, but also diffusion/conduction of heat through the still boundary layer next to the solid. Thus, this process with a moving fluid requires both diffusion and advection of heat, a summed process that is generally called convection. Convection that occurs in the earth's mantle causes tectonic plates to move.
Convection can be "forced" by movement of a fluid by means other than buoyancy forces (for example, a water pump in an automobile engine). In some cases, natural buoyancy forces alone are entirely responsible for fluid motion when the fluid is heated, and this process is called "natural convection." An example is the draft in a chimney or around any fire. In natural convection, an increase in temperature produces a reduction in density, which causes fluid motion due to pressures and forces when fluids of different densities are affected by gravity (or any g-force). For example, when water is heated on a stove, hot water from the bottom of the pan rises, displacing the colder denser liquid which falls. After heating has stopped, mixing and conduction from this natural convection eventually result in a nearly homogeneous density, and even temperature.
The convection heat transfer mode comprises two mechanisms. In addition to energy transfer due to random molecular motion (diffusion), energy is also transferred by bulk, or macroscopic, motion of the fluid. This motion is associated with the fact that, at any instant, large numbers of molecules are moving collectively or as aggregates. Such motion, in the presence of a temperature gradient, contributes to heat transfer. Because the molecules in aggregate retain their random motion, the total heat transfer is then due to the superposition of energy transport by random motion of the molecules and by the bulk motion of the fluid. It is customary to use the term convection when referring to this cumulative transport and the term advection when referring to the transport due to bulk fluid motion.[1]
## Overview
Papers lifted on rising convective air current from warm radiator
Convection is the transfer of thermal energy from one place to another by the movement of fluids or gases. Although often discussed as a distinct method of heat transfer, convection describes the combined effects of conduction and fluid flow or mass exchange.
Two types of convective heat transfer may be distinguished:
• Free or natural convection: when fluid motion is caused by buoyancy forces that result from the density variations due to variations of temperature in the fluid. In the absence of an external source, when the fluid is in contact with a hot surface, its molecules separate and scatter, causing the fluid to be less dense. As a consequence, the fluid is displaced while the cooler fluid gets denser and the fluid sinks. Thus, the hotter volume transfers heat towards the cooler volume of that fluid.[2] Familiar examples are the upward flow of air due to a fire or hot object and the circulation of water in a pot that is heated from below.
• Forced convection: when a fluid is forced to flow over the surface by an external source such as fans, by stirring, and pumps, creating an artificially induced convection current.[3]
Internal and external flow can also classify convection. Internal flow occurs when a fluid is enclosed by a solid boundary, such as when flowing through a pipe. An external flow occurs when a fluid extends indefinitely without encountering a solid surface. Both of these types of convection, either natural or forced, can be internal or external because they are independent of each other.[citation needed] The bulk temperature, or the average fluid temperature, is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts.
For a visual experience of natural convection, a glass filled with hot water and some red food dye may be placed inside a fish tank with cold, clear water. The convection currents of the red liquid may be seen to rise and fall in different regions, then eventually settle, illustrating the process as heat gradients are dissipated.
## Newton's law of cooling
Convection-cooling can sometimes be described by Newton's law of cooling in cases where the heat transfer coefficient is independent or relatively independent of the temperature difference between object and environment. This is sometimes true, but is not guaranteed to be the case (see other situations below where the transfer coefficient is temperature dependent).
Newton's law, which requires a constant heat transfer coefficient, states that the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings. The rate of heat transfer in such circumstances is derived below:[4]
Newton's cooling law is a solution of the differential equation given by Fourier's law:
$\frac{d Q}{d t} = -h \cdot A( T(t)-T_{\text{env}}) = - h \cdot A \Delta T(t)\quad$
where
$Q$ is the thermal energy in joules
$h$ is the heat transfer coefficient (assumed independent of $T$ here) (W/(m²·K))
$A$ is the surface area through which the heat is transferred (m²)
$T$ is the temperature of the object's surface and interior (since these are the same in this approximation)
$T_{\text{env}}$ is the temperature of the environment; i.e. the temperature suitably far from the surface
$\Delta T(t)= T(t) - T_{\text{env}}$ is the time-dependent thermal gradient between environment and object
The heat transfer coefficient h depends upon physical properties of the fluid and the physical situation in which convection occurs. Therefore, a single usable heat transfer coefficient (one that does not vary significantly across the temperature-difference ranges covered during cooling and heating) must be derived or found experimentally for every system analyzed. Formulas and correlations are available in many references to calculate heat transfer coefficients for typical configurations and fluids. For laminar flows, the heat transfer coefficient is rather low compared to turbulent flows; this is due to turbulent flows having a thinner stagnant fluid film layer on the heat transfer surface.[5] However, note that Newton's law breaks down if the flows should transition between laminar or turbulent flow, since this will change the heat transfer coefficient h which is assumed constant in solving the equation.
Newton's law requires that internal heat conduction within the object be large in comparison to the loss/gain of heat by convection (lumped capacitance model), and this may not be true (see heat transfer). Also, an accurate formulation for temperatures may require analysis based on changing heat transfer coefficients at different temperatures, a situation frequently found in free-convection situations, and which precludes accurate use of Newton's law. Assuming these are not problems, then the solution can be given if heat transfer within the object is considered to be far more rapid than heat transfer at the boundary (so that there are small thermal gradients within the object). This condition, in turn, allows the heat in the object to be expressed in terms of a single uniform internal temperature, so that $Q = CT$ as in the following section.
### Solution in terms of object heat capacity
If the entire body is treated as a lumped-capacitance thermal energy reservoir, with a total thermal energy content which is proportional to its total heat capacity $C$ and to $T$, the temperature of the body, i.e. $Q = C T$, then it is expected that the system will experience exponential decay with time in the temperature of the body.
From the definition of heat capacity $C$ comes the relation $C = dQ/dT$. Differentiating this equation with regard to time gives the identity (valid so long as temperatures in the object are uniform at any given time): $dQ/dt = C (dT/dt)$. This expression may be used to replace $dQ/dt$ in the first equation which begins this section, above. Then, if $T(t)$ is the temperature of such a body at time $t$, and $T_{env}$ is the temperature of the environment around the body:
$\frac{d T(t)}{d t} = - r (T(t) - T_{\mathrm{env}}) = - r \Delta T(t)\quad$
where
$r = hA/C$ is a positive constant characteristic of the system, which must be in units of $s^{-1}$, and is therefore sometimes expressed in terms of a characteristic time constant $t_0$ given by: $r = 1/t_0 = - (dT(t)/dt)/\Delta T$. Thus, in thermal systems, $t_0 = C/hA$. (The total heat capacity $C$ of a system may be further represented by its mass-specific heat capacity $c_p$ multiplied by its mass $m$, so that the time constant $t_0$ is also given by $mc_p/hA$).
The solution of this differential equation, by standard methods of integration and substitution of boundary conditions, gives:
$T(t) = T_{\mathrm{env}} + (T(0) - T_{\mathrm{env}}) \ e^{-r t}. \quad$
If:
$\Delta T(t)$ is defined as $T(t) - T_{\mathrm{env}}$, where $\Delta T(0)$ is the initial temperature difference at time 0,
then the Newtonian solution is written as:
$\Delta T(t) = \Delta T(0) \ e^{-r t} = \Delta T(0) \ e^{-t/t_0}. \quad$
This same solution is more immediately apparent if the initial differential equation is written in terms of $\Delta T(t)$, as a single function of time to be found, or "solved for."
$\frac{d T(t)}{d t} = \frac{d\Delta T(t)}{d t} = - \frac{1}{t_0} \Delta T(t)\quad$
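As a small numerical illustration (not part of the article; the parameter values below are arbitrary), one can Euler-integrate the differential equation above and compare against the closed-form exponential solution:

```python
from math import exp

# Euler-integrate dT/dt = -r (T - T_env) and compare with
# the closed form T(t) = T_env + (T(0) - T_env) * exp(-r t).
T_env, T0, r = 20.0, 90.0, 0.05   # arbitrary example values
dt, steps = 0.1, 2000

T = T0
for _ in range(steps):
    T += dt * (-r * (T - T_env))

t = dt * steps
closed_form = T_env + (T0 - T_env) * exp(-r * t)
print(f"Euler: {T:.3f}   closed form: {closed_form:.3f}")
```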
## References
1. Incropera, DeWitt, Bergman and Lavine (2007), Introduction to Heat Transfer, 5th ed., p. 6. ISBN 978-0-471-45727-5
http://nrich.maths.org/658/index?nomenu=1 | ## 'Plus Minus' printed from http://nrich.maths.org/
This problem follows on from What's Possible?
Jo has been experimenting with pairs of two-digit numbers. She has been looking at the difference of their squares.
Jo has collected together some answers which she found quite surprising:
$$55^2-45^2=1000$$ $$105^2-95^2=2000$$ $$85^2-65^2=3000$$
Can you find other pairs which give multiples of $1000$? Do you notice anything special about these pairs of numbers?
Jo was also surprised to get these answers:
$$89^2-12^2=7777$$ $$78^2-23^2=5555$$
Can you find any other pairs which give repeated digits? Do you notice anything special about these pairs of numbers?
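If you want to check the examples above quickly (a small script that only re-verifies the values already given in the problem):

```python
# Re-check the worked examples quoted above.
examples = [(55, 45), (105, 95), (85, 65), (89, 12), (78, 23)]
for a, b in examples:
    print(f"{a}^2 - {b}^2 = {a**2 - b**2}")
```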
Jo wanted to explain why she was getting these surprising results. She drew some diagrams to help her. Here is the diagram she used to work out $85^2-65^2$:
What is the connection between Jo's diagram and the calculation $85^2-65^2$? How could Jo work out the area of the long purple rectangle (without a calculator)? Can you draw similar diagrams for Jo's other calculations (or for your own examples)?
How can these diagrams help Jo to develop a quick method for evaluating $a^2-b^2$ for any values of $a$ and $b$?
Now you should be able to work out these calculations without a calculator:
$$7778^2-2223^2$$ $$88889^2-11112^2$$
Here are some follow-up questions to consider:
Can you write $1000, 2000, 3000...$ as the difference of two square numbers?
Can you write any of them in more than one way?
Can you write any repeated-digit number as the difference of two square numbers?
What about numbers like $434343$, $123321$, $123456$...?
Is it possible to write every number as the difference of two square numbers?
This problem is also available in French: Plus ou Moins
http://physics.stackexchange.com/questions/tagged/cosmology+big-bang | # Tagged Questions
1 answer
58 views
### Assuming space is infinite can our observable universe be an island amongst an archipelego?
According to recent measurements our observable universe is roughly 93 billion light years in diameter; also it appears (according to WMAP measurements) that spacetime is flat. Supposing space is ...
2 answers
161 views
### Excluding big bang itself, does spacetime have a boundary?
My understanding of big bang cosmology and General Relativity is that both matter and spacetime emerged together (I'm not considering time zero where there was a singularity). Does this mean that ...
1 answer
103 views
### Did space and time exist before the Big Bang? [duplicate]
I accept the Big Bang theory. What I can't understand is how there can be a where or when to the Big Bang if space time did not exist prior to it. Did space and time exist prior to the Big Bang?
1 answer
64 views
### How fast did hydrogen atoms travel when they were first formed in the early universe?
I can't seem to find any data on this, is it a known value?
1 answer
68 views
### Which was first, energy or matter in the creation of our universe?
Was it the Big Bang or was it something else that gives us our universe in its present condition? Did it all begin with just pure energy that eventually evolved into simple atoms of matter, that ...
3 answers
142 views
### Can space expand with unlimited speed?
At the beginning, right after the Big Bang, the universe was the size of a coin. One millionth of a second after the universe was the size of the Solar System (acording to ...
1 answer
77 views
### In the big crunch theory, when the big crunch singularity forms, can the resulting black hole decay through hawking's radiation?
I've been pondering about this and I couldn't really find the answer for this. The big crunch theory postulates that the universe will eventually stop expanding and reverse back in on its self into a ...
1 answer
95 views
### Particles entangled after the big bang
Is that true that the big bang caused the quantum entanglement of all the particles of the universe so every particle is entangled to each other particle of the universe?
0answers
43 views
### Is the speed of light the ultimate speed limit? [duplicate]
As we all know, nothing can go faster than the speed of light, as mentioned by most of our pioneers in physics. But as I was listening to one of the statements of Sir Stephen Hawking, he stated that at ...
1answer
80 views
### What was ticking just after the Big Bang?
When reading about the Big Bang, I see phrases like 3 trillionths of a second after... So, what was ticking to give a time scale like this? We define time now in terms of atomic oscillations, but ...
0answers
105 views
### Did force of gravity cause macroevolution?
Did the big bang create gravity? What role is gravity assumed to have played in the formation (starting from the big bang) of large structures of our universe, and what other important physical mechanisms ...
2answers
86 views
### How exactly, or what's the process, rather, of energy changing into matter?
$E=mc^2$: this is Einstein's equation claiming energy can change into mass. This would have happened at the big bang, I assume, when electrons and protons were made to create hydrogen and some ...
0answers
55 views
### How can I read density fluctuation from microwaves?
The Cosmic Microwave Background Radiation shows temperature differences. The red and yellow areas are warmer. The green and blue areas are cooler. For example consider this picture of CMBR ...
1answer
69 views
### How to understand movement in expanding universe?
I know that the universe is expanding equally between every pair of points but it was a single point in its very past... so I was wondering if we could locate this center point of the universe. Now I know ...
0answers
183 views
### Curiosity episode with Stephen Hawking. The Big-Bang
In an episode of Discovery's Curiosity with host Stephen Hawking, he claims the Big Bang event can be explained from physics alone, and does not require the intervention of a creator. 1) His ...
0answers
74 views
### Conservation of Energy in the Universe [duplicate]
Possible Duplicate: Is energy really conserved? Why can’t energy be created or destroyed? One of the laws of the universe that dazzles me the most is the law of conservation of energy. I ...
1answer
199 views
### Is it possible that the Big Bang was caused by virtual particle creation?
As far as I understand, it is understood that throughout the universe there exists what is known as a quantum field from which, due to its fluctuations, temporary (pairs of) virtual particles ...
0answers
35 views
### What's a compact scientific answer to question “(Why there is) / (what is before) the Big Bang?” [duplicate]
Possible Duplicate: Did spacetime start with the Big bang? on causality and The Big Bang Theory Before the Big Bang What's a compact scientific answer to question "(Why there is) / ...
3answers
220 views
### Do new universes form on the other side of black holes?
I have four questions about black holes and universe formations. Do new universes form on the other side of black holes? Was our own universe formed by this process? Was our big bang a black hole ...
1answer
268 views
### Branes Collision -> Big Bang
Imagine the universe occurred when two parallel branes collided; the momentum of the branes was converted to big bang kinetic energy after the collision. Thus, high-energy quanta are high-vibrating strings. What ...
2answers
180 views
### How is it possible to come to the conclusion that the Universe is a result of the Big Bang while we aren't able to observe the entire Universe?
I'm not a religious person, so this is not a denial; I'm just trying to understand the most fundamental topic about the Universe. I know the Big Bang cosmological model is not a law but a theory. ...
4answers
283 views
### How was the universe created?
I do not know much beyond high school Physics. Thus, I am asking this question from almost layman's perspective: What, as per the best of our existing knowledge and widely accepted among the ...
0answers
50 views
### Energy can't be created or destroyed? [duplicate]
Possible Duplicate: Conservation law of energy and Big Bang? If energy can't be created or destroyed then how did the big bang create energy? and if energy can't be created then does that ...
3answers
191 views
### Is the cosmic horizon related to the Big Bang event?
The Universe expands according to Hubble's law: velocity is proportional to distance. There must be some distance, therefore, at which the velocity reaches the speed of light. This defines the ...
9answers
740 views
### Can a universe emerge from nothing?
If the Universe is flat and the total energy of the universe can be zero (we don't know if it is, but many theorists support the idea, i.e. at BB initial conditions: t = 0, V = 0, E = 0) then is it ...
3answers
207 views
### Why is the red-shift of distant galaxies considered to be the effect of expanding spacetime?
Why is it not explained just by the Doppler redshift caused by the faster movement of those galaxies billions of years ago when that light was emitted? Would the speeds of the galaxies necessary for Doppler ...
3answers
118 views
### Accelerating expansion of the universe and cyclic model
So I've heard that recent observations indicated that the rate of expansion of the universe is accelerating. Does this indicate that the universe can NOT be cyclical, with an infinite series of big ...
1answer
122 views
### Total momentum of the Universe
What is the total momentum of the whole Universe in reference to the point in space where the Big Bang took place? According to my reasoning (and a bit of elementary knowledge) it should be exactly ...
3answers
124 views
### If the universe started again with exactly the same conditions… would it be the same?
Sorry if this is a basic question but I can't find anywhere else to ask it: if the universe was created again and exactly the same conditions were in place at the beginning, would everything ...
1answer
52 views
### Can anything come out from the big bang?
If any configuration of matter can fall into a black hole and hit the singularity, and ditto for the big crunch, and there is time reversal CPT invariance, does it mean anything can pop out of the ...
2answers
177 views
### Was Planck's constant $h$ the same when the Big Bang happened as it is today?
Was Planck's constant $h$ the same when the Big Bang happened as it is today? Planck's constant: $$h = 6.626068 \times 10^{-34}\ \mathrm{m^2\,kg/s},$$ $$E = n h \nu,$$ $$\epsilon = h \nu.$$
2answers
118 views
### Shouldn't LHC have used $p\bar{p}$ collisions, instead of $pp$ collisions, to study baryogenesis?
Baryogenesis is the physical process(es) that produced the baryon-antibaryon asymmetry in the early universe. That means the laws that governed the big bang were baryon-antibaryon symmetric. On the other ...
1answer
99 views
### Reference request for low entropy big bang
There is a somewhat widely accepted argument that the second law of thermodynamics exists because the universe began in a low-entropy state. I'm writing a paper that mentions this (and must be ...
4answers
158 views
### Can cosmic inflation be explained by matter antimatter reactions?
The big bang theory proposes that equal amounts of matter and antimatter were created in the beginning. Shortly afterwards most of it annihilated. Could that have produced enough energy to drive ...
11answers
493 views
### Are there any theories that explain the very beginning of absolutely everything? [closed]
Of course there's the theory of The Big Bang, and there are theories on what caused The Big Bang, but what was the cause of the very first thing that ever happened? What started every I also won't ...
2answers
63 views
### proportion of dark matter/energy to other matters/energy at the beginning of the universe
What would the proportion of dark matter/energy to other matter/energy be like at the moments after the beginning of the universe (standard Big Bang model)?
1answer
46 views
### observation and implied time since creation
I read in a post, Big Bang and Cosmic microwave background radiation?, that we detect light from 13 billion years ago; does this mean that one billion years ago we could only detect light from about ...
1answer
208 views
### How is it possible that we see light from shortly after the big bang?
How can astronomers see light from shortly after the big bang? How did we get "here" before the light that emanated from our "creation"?
2answers
192 views
### How is the Big Bang related to the theory of relativity?
I'm not someone with good scientific knowledge, so if my questions are weird, correct me. I was reading about the big bang and I came across the theory of relativity. Can someone explain the relation between ...
1answer
207 views
### spacetime expansion and universe expansion?
First of all, does the expansion of spacetime solely cause the expansion of the universe? Secondly, if spacetime is the sole cause, do objects (matter with mass) themselves expand? Thirdly, by spacetime ...
1answer
302 views
### How did the energy/entropy/volume/pressure/temperature relationship exist at the Big Bang and how did it evolve thereafter?
According to the current Big Bang with inflation cosmological model? I was under the mistaken impression that there was very low volume, very high temperature/pressure, very low entropy and the Big ...
1answer
154 views
### Are there any theories or suggestions for how the multiverse came into existence?
I've just seen a documentary about the multiverse. This provides an explanation for where the big bang came from. But it leaves me wondering: how did the multiverse come into existence? Because this ...
1answer
51 views
### Could there be a sort of “Molecular Destiny”?
Let's say we start with the Big Bang. Every bit of matter started from this event. Therefore, given EVERY variable (every particle, movements of particles, weights, times, etc) and an INFINITE amount ...
2answers
112 views
### Why does the homogeneity of the universe require inflation?
They say inflation must have occurred because the universe is very homogeneous. Otherwise, how could one part of the universe reach the same temperature as another when the distance between the parts ...
3answers
154 views
### Where does the light of the Big Bang come from?
I'm wondering whether the residual light of the Big Bang comes from one particular direction and what possibilities do we have to detect its position?
4answers
411 views
### Did really everything begin with a state with very low entropy?
As emphasized by Penrose many years ago, cosmology can only make sense if the world started in a state of exceptionally low entropy. The low entropy starting point is the ultimate reason that the ...
1answer
188 views
### Was the universe a black hole at the beginning?
Big bang cosmology, as far as I understand it, says that the universe was super hot and super dense and super small. It looks like all the current matter, seen and unseen, was compressed to ...
1answer
1k views
### What did Hawking mean? 'Time started at the big bang'. Book suggestions please [closed]
After writing down this question, I have come to realize that what I really want is reading material on the questions below. Before the big bang there was no such thing as 'time' (Stephen Hawking on ...
2answers
190 views
### How to reconcile flat spacetime and the big bang?
After reading How do we resolve a flat spacetime and the cosmological principle? I still remain perplexed. Please excuse my ignorance and try explaining to me: I thought that basically, when we ...
1answer
227 views
### Superluminal expansion of the early universe: how is this possible?
Is this a postulate? I get the expansion of the universe, the addition of discrete bits of space time between me and a distant galaxy, until very distant parts of the universe are moving relative to ...
http://mathoverflow.net/questions/73006/tensor-product-in-k-linear-categories-how-canonical/73021 | ## tensor product in k-linear categories how canonical
Hi,
my question is related to the tensor product in k-linear additive categories. If you have such a category, an object $A$ and a finite dimensional k-vector space $V$, then one knows that the tensor product $V\otimes_kA$ exists.
One usually chooses a basis of $V$, sets $V\otimes_k A$ simply to be $A^n$ (where $n=\dim_k V$), and verifies the universal property
$Hom(V\otimes_k A,B)\simeq Hom_k(V,Hom(A,B))$ directly.
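To spell out the verification I have in mind: with a basis identifying $V\simeq k^n$, one uses the chain of natural isomorphisms
$$Hom(A^n,B)\simeq Hom(A,B)^n\simeq Hom_k(k^n,Hom(A,B))\simeq Hom_k(V,Hom(A,B)),$$
where only the last identification depends on the chosen basis.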
Now it is not clear to me to what extent this construction is canonical, as you chose a basis of $V$.
And if $V$ is simply $Hom(A,B)$, then you have a morphism in $Hom(Hom(A,B)\otimes_k A,B)$ corresponding to the identity. How canonical is this one?
Thank you very much!
If you construct an object by invoking the non-canonical Lord Of The Choices, and somehow you end up with an object which satisfies the universal property you want, then you know ---because of universality--- that you have found the unique-up-to-isomorphism-compatible-with-the-universal-property object. If your category is not skeletal, then you can always make other choices and construct another solution to the universal property. – Mariano Suárez-Alvarez Aug 16 2011 at 18:56
@Mariano: I think I shall phrase it that way from now on :) – Zev Chonoles Aug 16 2011 at 19:09
Well, somehow the problem is that you also verify the universal property by choosing a basis of $V$. That's what's confusing me here. – Descartes Aug 16 2011 at 22:24
Once you verify the universal property, that implies the construction is independent of the choice of basis. And the morphism corresponding to the identity is completely canonical: it is the evaluation morphism. – David Roberts Sep 14 2011 at 5:33
## 2 Answers
You can make $A\otimes V$ canonical by defining it to be a family of objects indexed by the bases of $V$. But that's a big mess, so you probably don't want to do it. If all you want is to make $V\mapsto A\otimes V$ into a genuine functor, you can do that by choosing a suitable adjoint to the inclusion functor from vector spaces of the form $k^n$ to the category of all vector spaces.
Anafunctors (ncatlab.org/nlab/show/anafunctor) were introduced by Makkai to reduce the problem of defining a functor via universal constructions, such as this. There is a perfectly good bicategory of categories, anafunctors and transformations which if you allow Choice is equivalent to the usual 2-category of categories, functors and natural transformations. – David Roberts Aug 16 2011 at 23:40
If $C$ is a $k$-linear category for some ring $k$ (it does not have to be a field), then you can define $M \otimes_k X$ for $M \in \mathrm{Mod}(k), X \in C$ as $\mathrm{colim}_{m \in M} X$, if this colimit exists. Here the index category is $M$, where a morphism $m \to m'$ is an $r \in k$ such that $m'=rm$. Thus, $- \otimes_k X$ is the cocontinuous extension of $k \otimes_k X = X$. In particular, if $M$ is defined by some presentation $k^{(I)} \to k^{(J)} \to M \to 0$, then we get a presentation $X^{(I)} \to X^{(J)} \to M \otimes_k X \to 0$ (if these colimits exist). But of course, this is not the best way to define the tensor product. The best characterization is probably that $(-) \otimes_k X$ is left adjoint to $\mathrm{Hom}_C(X,-)$. As always, left adjoints just have to be defined pointwise, thus we may choose a presentation of $M$ in order to define $M \otimes_k X$, as soon as we ensure the universal property.
As for your last question, it is now clear how to define `$\mathrm{Hom}_C(X,Y) \otimes X = \mathrm{colim}_{f : X \to Y} X \to Y$`.
I would guess that you really need to take the colim over the category of finitely generated submodules, rather. – Mariano Suárez-Alvarez Nov 4 2011 at 13:28
It works as stated. – Martin Brandenburg Nov 22 2011 at 16:54
http://alanrendall.wordpress.com/2012/02/ | # Hydrobates
A mathematician thinks aloud
## Archive for February, 2012
### Entrainment by oscillations
February 25, 2012
In the book of Goldbeter which I have mentioned in several recent posts a concept which occurs repeatedly is that of entrainment. While looking for some more information about this topic I found a paper of Russo, di Bernardo and Sontag (PloS Computational Biology 4, e1000739) which gives an insightful treatment of the subject. The basic idea is to consider two systems which are coupled in some way and to consider the influence of oscillations in one system on the behaviour of the other. It is easy to see how this might be translated into a problem expressed in terms of dynamical systems. A classical example related to this is contained in a story about Christiaan Huygens who was, among other things, the inventor of the pendulum clock in the mid 17th century. Apparently he did not construct clocks himself but had them made by others according to his plans. The well-known story is that he noticed that when two pendulum clocks were placed next to each other the phase of their oscillations became synchronized with, say, one always at the leftmost point of its swing when the other was at the rightmost. Another example is that of the circadian rhythm. There is a 24 hour rhythm in our body and it is interesting to know whether it comes from an intrinsic oscillator or not. Experiments with subjects isolated from the usual rhythm of day and night show that there is an intrinsic oscillator but that its period is closer to 25 hours. Under normal circumstances its period is brought to 24 hours due to the cycle of day and night by entrainment.
The particular mathematical set-up considered in the paper of Russo et. al. is the following. Consider an autonomous dynamical system containing some parameters. Now replace one or more of those parameters by functions of time with period $T$. If solutions of the original system have a suitable tendency to converge to a stationary solution for a given choice of the parameters then solutions of the resulting non-autonomous system converge to periodic solutions of period $T$. In the papers there are nice plots of numerical simulations which give a striking picture of this behaviour. The central result of the paper is a theorem which guarantees this type of behaviour under certain hypotheses. As pointed out in the paper verifying these hypotheses has some similarity to finding a Lyapunov function for an autonomous system. The positive side is that if it can be done it is possible to get strong conclusions. The negative side is that verifying the hypotheses is generally a matter of trial and error. There is no algorithm available for doing that.
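A minimal example, which is mine rather than one taken from the paper, may help to illustrate the statement. Consider the scalar equation $\dot x=-x+p$ with a parameter $p$. For each constant value of $p$ every solution converges to the stationary solution $x=p$. If the parameter is replaced by the periodic function $p(t)=\sin\omega t$ then every solution converges to the unique periodic solution $x(t)=\frac{\sin\omega t-\omega\cos\omega t}{1+\omega^2}$, which has the same period as the forcing.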
The criterion is dependent on the choice of a matrix norm. This is used to define a quantity called the matrix measure $\mu(A)$ of a matrix $A$. The criterion is that the Jacobian of the function defining the dynamical system should have a matrix measure which is bounded above by a negative constant. In that case the system is said to be infinitesimally contracting. The matrix measure is defined by a limiting procedure, $\mu(A)=\lim_{h\to 0}\frac{1}{h}(\|I+hA\|-1)$, but for particular choices of the matrix norm it is possible to calculate it in a purely algebraic way. I have no intuitive feeling for what this definition means.
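Let me at least record the explicit formulas which can be found in the literature on logarithmic norms, another name for the matrix measure. For a real matrix and the matrix norms induced by the $1$-, $2$- and $\infty$-norms on vectors they are $\mu_1(A)=\max_j\big(a_{jj}+\sum_{i\neq j}|a_{ij}|\big)$, $\mu_2(A)=\lambda_{\max}\big(\frac{1}{2}(A+A^T)\big)$ and $\mu_\infty(A)=\max_i\big(a_{ii}+\sum_{j\neq i}|a_{ij}|\big)$. The case of the $2$-norm does give a partial interpretation of the criterion: the condition $\mu(A)\le -c<0$ for the Jacobian says that its symmetric part is negative definite, which forces the flow to contract Euclidean distances.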
Posted in dynamical systems, mathematical biology | 2 Comments »
### La vie oscillatoire
February 16, 2012
I have continued reading the book ‘La Vie Oscillatoire’ and I have learned many interesting things. Some of them were things I was aware of on some level already but they have now become clearer. Others were quite new to me. Some of them have to do with biology, some with mathematics. Chapter 5 is concerned with the secretion of hormones. I had the naive view that the effect of a hormone was due to its overall level. In reality frequency encoding is important for many hormone signals. For instance the triggering of ovulation is dependent on having the right kind of oscillatory signals and there is a therapy for infertility based on delivering a hormone in an appropriate oscillatory way. The menstrual cycle makes it natural to think about oscillations in that context but the oscillations just mentioned are on timescales of one hour rather than one month. Similar control mechanisms seem to apply to many other hormones. In the book biological systems are often compared across very different animals or other living organisms. One interesting conclusion is that the fact that the human menstrual cycle takes about one month is an accident. This deals a blow to more or less mystical ideas relating this cycle with the moon. A recurring theme in the book is the way in which frequency modulated signals are decoded in biological systems. Here I feel that I am at the edge of a part of the theory of dynamical systems which I should learn a lot more about.
The theme of Chapter 6 is rhythms in the brain. Elsewhere in this blog I have written about the Hodgkin-Huxley model several times. This can be used to describe the propagation of an action potential along an axon. However this is not its only application. It can also be used to describe the oscillatory behaviour of individual neurons. The basic phenomena involved are the flow of sodium and potassium ions across the cell membrane. Calcium ions also play a role in some cases. A mathematical phenomenon which comes up in this discussion is that of bursting oscillations. Since I was not previously familiar with that I now read up on it a bit. My main source was the book ‘Mathematical Physiology’ by Keener and Sneyd. A variable in a dynamical system is said to display bursting oscillations when it has the following type of behaviour. There is a period where it changes very little followed by a period where it oscillates with high frequency and fairly constant amplitude. Then it returns to the quiescent phase it started in. It goes through this cycle repeatedly. The minimum ingredients required for a dynamical system to show this type of behaviour are dimension at least three and equations with different timescales. One example occurs in a model for the production of insulin by the $\beta$-cells of the pancreas. In this case there are one slow and two fast variables. Heuristically the slow variable is first thought of as a parameter. As it is varied the dynamics of the fast system changes. For certain parameter values the fast system has a stable steady state. Starting from this point the slow variable changes in such a way that this steady state vanishes in a fold bifurcation. The solution then moves quickly to being close to a limit cycle of the fast system and stays there for a while. The slow variable then changes in such a way that the limit cycle is destroyed in a homoclinic bifurcation. By that time the stable steady state has reappeared and the solution can jump back to it. Burst oscillations are classified into three types and the scenario just sketched is Type I. Coming back to the brain (or at least to neurons) a popular experimental system is the sea slug Aplysia. An isolated neuron of this organism can exhibit bursting oscillations. In this case (Type II) there are two slow variables. The oscillatory phase starts and ends with a homoclinic bifurcation. In Type III a period of oscillations is bounded by two subcritical Hopf bifurcations.
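One standard minimal model showing bursting oscillations of this fast-slow type, which I mention only as a concrete illustration, is the Hindmarsh-Rose system $\dot x=y-ax^3+bx^2-z+I$, $\dot y=c-dx^2-y$, $\dot z=r(s(x-x_R)-z)$, in which $x$ plays the role of a membrane potential, $y$ is a fast recovery variable and the small parameter $r$ makes $z$ the slow variable.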
Another class of oscillations arises from the collective behaviour of a small number (up to about thirty) of neurons. This has been studied for instance in the context of the motion of the snail Lymnaea. In the human brain there is a great variety of oscillations, producing characteristic traces in the EEG. They include a number of types of waves named after letters of the Greek alphabet, starting with $\alpha$, some of which have entered popular culture. I would like to learn more about them sometime, but the day for that has not yet arrived.
http://physics.stackexchange.com/questions/7189/3d-minimum-uncertainty-wavepackets?answertab=active | # 3D Minimum uncertainty wavepackets
Based on the 1D case mentioned in Griffiths, I decided to try looking at the features of 3D Gaussian wavefunctions, i.e. (position basis) wavefunctions of the form $\psi(\mathbf{r}) = Ae^{-\mathbf{r}^\dagger\mathsf\Sigma\mathbf{r}/4}$, where A is a normalization constant, r is position, Σ is a positive-definite symmetric matrix (which by a suitable change of coordinate basis can be made diagonal), and † denotes the conjugate transpose. Applying standard results for Gaussian integrals, I was able to get
• $\langle \mathbf{r} \rangle = 0$
• $\langle r^2\rangle = \operatorname{Tr}\mathsf\Sigma$
• $\langle \mathbf{p} \rangle = 0$
• $\langle p^2\rangle = \frac{\hbar^2}{4}\operatorname{Tr}\mathsf\Sigma^{-1}$
So, substituting into Heisenberg's uncertainty principle and rearranging terms, it follows that, in order to get minimum uncertainty with respect to $\mathbf{r}$ and $\mathbf{p}$, we need to have
$(\operatorname{Tr}\mathsf\Sigma)(\operatorname{Tr}\mathsf\Sigma^{-1})=1$.
Here's where I'm running into a difficulty. As I mentioned before, the matrix Σ can always be assumed to be diagonal. Then the only possible solution for Σ is
$\mathsf\Sigma = \begin{pmatrix} 1 & 0 & 0\\ 0 &-1 &0\\ 0 &0 &1\end{pmatrix}\times\mathrm{constant}$
But this contradicts the fact that Σ is positive-definite (the -1 would imply that one of the coordinates has negative uncertainty, an absurdity).
Assuming I did all the calculations correctly, this seems to imply that a Gaussian wavefunction is not the minimum uncertainty wavefunction with respect to $\mathbf{r}$ and $\mathbf{p}$. On the other hand, it's comparatively trivial to show that it is the minimum uncertainty wavefunction with respect to $x$ and $p_x$, $y$ and $p_y$, and $z$ and $p_z$ individually.
Is there a wavefunction which is the minimum uncertainty wavefunction both with respect to the individual coordinates (e.g. $x$ and $p_x$) and with respect to $\mathbf{r}$ and $\mathbf{p}$?
Edit It was asked by marek what I meant by "minimum uncertainty with respect to $\mathbf{r}$ and $\mathbf{p}$". To answer this, recall that the generalized uncertainty principle takes the form of $$\sigma_A\sigma_B \geq \frac{1}{2}\left|\langle[A,B]\rangle\right|.$$ Although I'm not entirely sure it's valid to do so, I assumed that to calculate the commutator $[\mathbf{r},\mathbf{p}]$ I could use the formalism of geometric algebra (see Geometric algebra). Then $$\begin{align*} [\mathbf{r},\mathbf{p}]f &= \frac{\hbar}{i}\mathbf{r}\nabla f - \frac{\hbar}{i}\nabla(f\mathbf{r})\\ &= \frac{\hbar}{i}\sum_{jk} \left[x^j\hat{\mathbf{e}}_j\frac{\partial f}{\partial x^k}\hat{\mathbf{e}}^k - \frac{\partial}{\partial x^k}\left(fx^j\hat{\mathbf{e}}_j\right)\hat{\mathbf{e}}^k\right]\\ &= \frac{\hbar}{i}\sum_{jk} \left[ x^j\frac{\partial f}{\partial x^k} \hat{\mathbf{e}}_j\hat{\mathbf{e}}^k - \frac{\partial f}{\partial x^k}x^j\hat{\mathbf{e}}_j\hat{\mathbf{e}}^k - f{\delta^j}_k\hat{\mathbf{e}}_j\hat{\mathbf{e}}^k\right]\\ &= \frac{\hbar}{i} f, \end{align*}$$ where $f$ is an arbitrary function, $x^1,x^2,x^3$ are the position coordinates, and $\hat{\mathbf{e}}_1,\hat{\mathbf{e}}_2,\hat{\mathbf{e}}_3$ are the standard Cartesian basis vectors. Thus, the uncertainty principle for $\mathbf{r}$ and $\mathbf{p}$ takes the form $$\sigma_\mathbf{r}\sigma_\mathbf{p} \geq \frac{\hbar}{2},$$ which means that the minimum uncertainty wavepacket with respect to $\mathbf{r}$ and $\mathbf{p}$ must satisfy $$\sigma_\mathbf{r}\sigma_\mathbf{p} = \frac{\hbar}{2}.$$
Well, I don't know geometric algebra but this seems definitely fishy. For one thing the object you should get as $[\mathbf r, \mathbf p]$ should be a bivector (if you've written this in indices, it should carry two of them) but you get just a scalar (and you dropped a minus sign too). Well, that bivector can be represented by an identity operator on a 3D vector space too (is this what you mean by the last row?) but you surely can't plug such an object into HUP. You need to get just numbers somehow. – Marek Mar 18 '11 at 22:58
Dagnabit you're right! And you'd think after almost three years of being a math and physics major I wouldn't mess up on a simple dimensions thing. Oh well – Avi Steiner Mar 20 '11 at 4:14
## 1 Answer
It seems that the problem here is with mishandling vector quantities. We want to compute things such as $\left<p^2\right>$ but these are in fact $\sum_i \left<p_i^2\right>$ and so the problem decomposes into components where the standard HUP and minimality conditions can be applied. But what you've done is that you applied the one-dimensional HUP to $\left<x^2\right>$ and $\left<p^2\right>$, which just can't be right. The correct form of HUP in this case would be $$\sum_i \left<x_i^2\right>\left<p_i^2\right> \geq 3 {\hbar^2 \over 4}$$
So, to reiterate, there is really nothing new to solve in more dimensions as the problem decomposes completely and you can write your solution as $\Psi(x,y,z) = \psi_x(x)\psi_y(y)\psi_z(z)$ with each $\psi_{\alpha}$ a Gaussian from the one-dimensional variant of this problem.
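For completeness, here is the one-dimensional computation being invoked, in one common normalization: for $\psi(x)=(2\pi\sigma^2)^{-1/4}e^{-x^2/4\sigma^2}$ one finds $\langle x^2\rangle=\sigma^2$ and $\langle p^2\rangle=\frac{\hbar^2}{4\sigma^2}$, so $\sigma_x\sigma_p=\frac{\hbar}{2}$. Each factor $\psi_\alpha$ therefore saturates its own uncertainty relation, and summing over the three components reproduces the bound above.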
http://mathoverflow.net/questions/14714/what-do-heat-kernels-have-to-do-with-the-riemann-roch-theorem-and-the-gauss-bonne/16770 | ## What do heat kernels have to do with the Riemann-Roch theorem and the Gauss-Bonnet theorem?
I know the following facts. (Don't assume I know much more than the following facts.)
• The Atiyah-Singer index theorem generalizes both the Riemann-Roch theorem and the Gauss-Bonnet theorem.
• The Atiyah-Singer index theorem can be proven using heat kernels.
This implies that both Riemann-Roch and Gauss-Bonnet can be proven using heat kernels. Now, I don't think I have the background necessary to understand the details of the proofs, but I would really appreciate it if someone briefly outlined for me an extremely high-level summary of how the above two proofs might go. Mostly what I'm looking for is physical intuition: when does one know that heat kernel methods are relevant to a mathematical problem? Is the mathematical problem recast as a physical problem to do so, and how?
(Also, does one get Riemann-Roch for Riemann surfaces only or can we also prove the version for more general algebraic curves?)
Edit: Sorry, the original question was a little unclear. While I appreciate the answers so far concerning how one gets from heat kernels to the index theorem to the two theorems I mentioned, I'm wondering what one can say about going from heat kernels directly to the two theorems I mentioned. As Deane mentions in the comments, my hope is that this reduces the amount of formalism necessary to the point where the physical ideas are clear to someone without a lot of background.
Just so you know, the version you get out of Atiyah-Singer is actually for algebraic manifolds: it's Hirzebruch-Riemann-Roch. But I'm also pretty sure that the topology is necessary, so you're stuck with working over the complex numbers (though that might not be quite correct) – Charles Siegel Feb 9 2010 at 1:42
I really like this question. The usual presentation of the Atiyah-Singer index theorem, as well as the heat kernel proofs, use so much formalism (as shown in the answers below). Surely, most of this formalism simplifies if you are just trying to prove the Gauss-Bonnet theorem on a 2-dimensional surface with a Riemannian metric. – Deane Yang Feb 9 2010 at 5:04
I also really like the answers given. +1 to everybody. – Ben Webster♦ Feb 9 2010 at 6:45
But all of the answers so far are for the index theorem in general. Could someone provide a less jargon-laden explanation for how to use the heat kernel on a 2-d Riemannian manifold to prove Gauss-Bonnet? – Deane Yang Feb 9 2010 at 13:38
@Deane--See 4.1.1 of The Laplacian on a Riemannian Manifold, available for viewing at math.bu.edu/people/sr (near the bottom of the page) – Steve Huntsman Feb 9 2010 at 14:38
## 8 Answers
Here is how the heat kernel proof of Atiyah-Singer goes at a high level. Let $(\partial_t - \Delta)u = 0$ and define the heat kernel (HK) or Green function via $\exp(-t\Delta):u(0,\cdot) \rightarrow u(t,\cdot)$. The HK derives from the solution of the heat equation on the circle:
$u(t,\theta) = \sum_n a_n(t) \exp(in\theta) \implies a_n(t) = a_n(0)\cdot \exp(-tn^2)$
For a sufficiently nice case the solution of the heat equation is $u(t,\cdot) = \exp(-t\Delta) * u(0,\cdot)$.
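Concretely, writing out the convolution in this circle example, the kernel of $\exp(-t\Delta)$ is $$K_t(\theta,\phi)=\frac{1}{2\pi}\sum_n e^{-tn^2}e^{in(\theta-\phi)},$$ so that $u(t,\theta)=\int_0^{2\pi}K_t(\theta,\phi)\,u(0,\phi)\,d\phi$; this is just a restatement of the Fourier formula above.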
The hard part is building the HK: we have to compute the eigenstuff of $\Delta$ (this is the Hodge theorem). But once we do that, a miracle occurs and we get the
Atiyah-Singer Theorem: The supertrace of the HK on forms is constant: viz.
$Tr_s \exp(-t\Delta) = \sum_k (-1)^k Tr \exp(-t\Delta^k) = const$
For $t$ large, this can be evaluated topologically; for small $t$, it can be evaluated analytically as an integral of a characteristic class.
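For example, in the Gauss-Bonnet case (the de Rham complex on a closed manifold $M$) the large-$t$ evaluation is just Hodge theory: each $Tr \exp(-t\Delta^k)$ tends to $\dim\ker\Delta^k=b_k(M)$ as $t\to\infty$, so the constant equals $\sum_k(-1)^k b_k=\chi(M)$, while the small-$t$ evaluation produces the integral of a curvature expression (the Euler form).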
Edit per Qiaochu's clarification
This article of Kotake (really in here as the books seem to be mixed up) proves Riemann-Roch directly using the heat kernel.
BTW, I should have mentioned for clarity that $\Delta^k$ is not the $k$th power of the Laplacian, but rather the Laplacian on $k$-forms. – Steve Huntsman Feb 9 2010 at 16:28
Page 145 in Kotake's article also gives the Gauss-Bonnet theorem. – Steve Huntsman Feb 11 2010 at 17:26
I somehow missed your edit; thanks for the answer! – Qiaochu Yuan Mar 4 2010 at 4:30
(It's worth noting that what you labelled as the Atiyah-Singer Theorem is really the McKean-Singer formula. The evaluation of the supertrace as the integral of an explicit characteristic class is the Atiyah-Singer theorem.) – Paul Siegel Mar 23 at 14:03
I was a student of McKean, perhaps that explains my take. – Steve Huntsman Mar 23 at 16:00
Added 2 June:
Since the summary below is already a bit long, I thought I'd add a few lines at the beginning as a guide. The proofs all proceed as follows:
1. Identify the quantity of interest (like the Euler characteristic) as the index of an operator going from an 'even' bundle to an 'odd' bundle.
2. Use Hodge theory to write the index in terms of the dimensions of harmonic sections, i.e., kernels of Laplacians.
3. Use the heat evolution operator for the Laplacians and 'supersymmetry' to rewrite this as a 'supertrace.'
4. Write the heat evolution operator in terms of the heat kernel to express the supertrace as the integral of a local density.
5. Use the eigenfunction expansion of the heat kernel to identify the constant (in time) part of the local density.
Most of this is general nonsense, and the difficult step is 5. By and large, the advances made after the seventies all had to do with finding interpretations of this last step that employed intuition arising from physics.
I suffered over this proof quite a bit in my pre-arithmetic youth and wrote up a number of summaries. A condensed and extremely superficial version is given here, mostly for my own review. If by chance someone finds it at all useful, of course I will be delighted. I apologize that I don't say anything about physical intuition (because I have none), and for repeating parts of the previous nice answers. It's been years since I've thought about these matters, so I will forgo all attempts at even a semblance of analytic rigor. In fact, the main pedagogical reason for posting is that a basic outline of the proof is possible to understand with almost no analysis.
The usual setting has a compact Riemannian manifold $M$, two hermitian bundles $E^+$ and $E^-$, and a linear operator $$P:H^+\rightarrow H^-,$$ where $H^{\pm}:=L^2(E^{\pm})$. With suitable assumptions (ellipticity), $ker(P)$ and $coker(P)$ have finite dimension, and the number of interest is the index: $$Ind(P)=dim(ker(P))-dim(coker(P)).$$ This can also be expressed as $$dim(ker(P))-dim(ker(P^{*})),$$ where $$P^{*}:H^-\rightarrow H^+$$ is the Hilbert space adjoint. A straightforward generalization of the Hodge theorem allows us also to write this in terms of Laplacians $\Delta^+=P^* P$ and $\Delta^-=PP^*$ as $$dim(ker(\Delta^+))-dim(ker(\Delta^-)).$$ Things get a bit more tricky when we try to identify the index with the expression ('supertrace,' so-called) $$Tr(e^{-t\Delta^+})-Tr(e^{-t\Delta^-}).$$ The operator $$e^{-t\Delta^{\pm}}:H^{\pm}\rightarrow H^{\pm}$$ sends a section $f$ to the solution of the heat equation $$\frac{\partial}{\partial t} F(t,x)+\Delta^{\pm}F(t,x)=0$$ ($x$ denoting a point of $M$) at time $t$ with intial condition $F(0,x)=f(x).$ One important part of this is that there are discrete Hilbert direct sum decompositions $$H^+=\oplus_{\lambda} H^+(\lambda)$$ and $$H^-=\oplus_{\mu} H^-(\mu)$$ in terms of finite-dimensional eigenspaces for the Laplacians with non-negative eigenvalues. And then, the identities $$\Delta^-P=PP^{*}P=P\Delta^+$$ and $$\Delta^+P^{*}=P^{*}PP^{*}=P^{*}\Delta^-$$ show that the (supersymmetry) operators $P$ and $P^{*}$ can be used to define isomorphisms between all non-zero eigenspaces of the two Laplacians with a correspondence for eigenvalues as well. Thus, once you believe that the exponential operators are trace class, it's easy to see that the only contributions to the trace are from the kernels of the plus and minus Laplacians. This is the 'easy cancellation' that occurs in this proof. But on the zero eigenspaces, the heat evolution operators are clearly the identity, allowing us to identify the supertrace with the index. To summarize up to here, we have $$Ind(P)=Tr(e^{-t\Delta^+})-Tr(e^{-t\Delta^-}).$$ This identity also makes it obvious that the supertrace is in fact independent of $t>0$.
The proofs under discussion all have to do with identifying this supertrace in terms of local expressions that relate naturally to characteristic classes. The beginning of this process involves first writing the operator $e^{-t\Delta^+}$ in terms of an integral kernel $$K^+_t(x,y)=\sum_i e^{-t\lambda_i } \phi^+_i(x)\otimes \phi^+_i(y)$$ where the $\phi^+_i$ make up an orthonormal basis of eigenvectors for the Laplacian. That is, $$e^{-t\Delta^+}f=\int_M K^+_t(x,y)f(y)dvol(y)=\sum_i e^{-t\lambda_i } \int_M \phi^+_i(x) \langle \phi^+_i(y),f(y)\rangle dvol(y).$$ Formally, this identity is obvious, and the real work consists of the global analysis necessary to justify the formal computation. Obviously, there is a parallel discussion for $\Delta^-$. Now, by an infinite-dimensional version of the formula that expresses the trace of a matrix as a sum of diagonals, we get that $$Tr(e^{-t\Delta^+})=\int_M Tr(K^+_t(x,x))dvol(x)=\int_M \sum_ie^{-t\lambda_i}||\phi^+_i(x)||^2 dvol(x),$$ an integral of local (point-wise) traces, and similarly for $Tr(e^{-t\Delta^-})$. One needs therefore, techniques to evaluate the density
$$\sum_ie^{-t\lambda_i}||\phi^+_i(x)||^2 dvol(x)-\sum_ie^{-t\mu_i}||\phi^-_i(x)||^2 dvol(x).$$
More analysis gives an asymptotic expansion for the plus and minus densities of the form $$a^{\pm}_{-d/2}(x)\, t^{-d/2}+a^{\pm}_{-d/2+1}(x)\, t^{-d/2+1}+\cdots$$ where $d$ is the dimension of $M$.
Up to here the discussion was completely general, but then the proof begins to involve special cases, or at least, broad division into classes of cases. But note that even for the special cases mentioned in the original question, one would essentially carry out the procedure outlined above for a specific operator $P$.
The breakthrough in this line of thinking came from Patodi's incredibly complicated computations for the operator $d+d^*$ going from even to odd differential forms, where one saw that the $$a^{+}_i(x)$$ and $$a^{-}_i(x)$$ canceled each other out locally, that is, for each point $x$, for all the terms with negative $i$. I think it was fashionable to refer to this cancellation as 'miraculous,' which it is, compared to the easy cancellation above. At this point, Patodi could take a limit $$\lim_{t\rightarrow 0}[\sum_ie^{-t\lambda_i}||\phi^+_i(x)||^2 dvol(x)-\sum_ie^{-t\mu_i}||\phi^-_i(x)||^2 dvol(x)],$$ that he identified with the Euler form. This important calculation set a pattern that recurred in all other versions of the heat kernel approach to index theorems. One proves the existence of an analogous limit as $t\rightarrow 0$ and identifies it. The identification as a precise differential form representative for a characteristic class is referred to sometimes as a local index theorem, a statement more refined than the topological formula for the global index. There is even a beautiful version of a local families index theorem that relates eventually to deep work in arithmetic intersection theory and Vojta's proof of the Mordell conjecture.
As I understand it, Gilkey's contribution was an invariant theory argument that tremendously simplified the calculation and allowed a differential form representative for the $\hat{A}$ genus to emerge naturally in the case of the Dirac operator. And then, I believe there is a $K$-theory argument that deduces the index theorem for a general elliptic operator from the one for the twisted Dirac operator.
Experts can correct me if I'm wrong, but from a purely mathematical point of view, essentially all the work on the heat kernel proof was done at this point. Subsequent interpretations of the proof (more precisely, the supertrace), in terms of supersymmetry, path integrals, loop spaces, etc., were tremendously influential to many areas of mathematics and physics, but the mathematical core of the index theorem itself appears to have remained largely unchanged for almost forty years. In particular, the terminology I've used myself above, the super- things, didn't occur at all in the original papers of Patodi, Atiyah-Bott-Patodi, or Gilkey.
Added:
Here is just a little bit of geometric-physical intuition regarding the heat kernel in the Gauss-Bonnet case, which I'm sure is completely banal for most people. The density $$\sum_ie^{-t\lambda_i}||\phi^+_i(x)||^2 dvol(x)-\sum_ie^{-t\mu_i}||\phi^-_i(x)||^2 dvol(x)$$ expresses the heat kernel in terms of orthonormal bases for the even and odd forms. When $t\rightarrow \infty$ all terms involving the positive eigenvalues decay to zero, leaving only contributions from orthonormal harmonic forms. This is one way to to see that the integral of this density, which is independent of $t$, must be the Euler characteristic. On the other hand, as $t\rightarrow 0$, the operator $$K^+_t(x,y)dvol(y)=[\sum_i e^{-t\lambda_i } \phi^+_i(x)\otimes \phi^+_i(y)]dvol(y)$$ literally approaches the identity operator on all even forms (except for the fact that it diverges). That is, the heat kernel interpolates between the identity and the projection to the harmonic forms, in some genuine sense expressing the diffusion of heat from a point distribution to a harmonic steady-state. A similar discussion holds for the odd forms as well. I can't justify this next point even vaguely at the moment, but one should therefore think of $$[K^+_t(x,y)-K^-_t(x,y)]dvol(y)$$ as regularizing the current on $M\times M$ given by the diagonal $M\subset M\times M$. Thus, the integral of $$[TrK^{+}_t(x,x)-TrK^-_t(x,x)]dvol(x)$$ ends up computing a deformed self-intersection number of the diagonal in $M\times M$. From this perspective, it shouldn't be too surprising that the Euler class, representing exactly this self-intersection, shows up.
Added:
I forgot to mention that the Riemann-Roch case is where $$P=\bar{\partial}+\bar{\partial}^*$$ going from the even to the odd part of the Dolbeault resolution associated to a holomorphic vector bundle. The limit of the local density is a differential form representing the top degree portion of the Chern character of the bundle multiplied by the Todd class of the tangent bundle. Perhaps it's worth stressing that these special cases all go through the general argument outlined above.
This sounds extremely interesting: are there any references you'd recommend for this sort of approach? – Akhil Mathew Jun 2 2010 at 20:02
My outdated knowledge still relies heavily on the paper of Atiyah, Bott, and Patodi: On the heat equation and the index theorem, Invent. Math. 19 (1973), 279–330. Many nice books seem to have come out in the meanwhile. – Minhyong Kim Jun 2 2010 at 23:49
In my opinion, the most "physical" recasting of the index theorem is the Witten index for supersymmetric quantum theories. It is superficially similar to the heat kernel, but much more general. The Witten index of a supersymmetric quantum mechanical system is the regularised supertrace $$\mathrm{Tr} (-1)^F \exp(-\beta H)$$ where $(-1)^F$ is the grading operator which is $-1$ on fermionic states and $+1$ on bosonic states and $H$ is the hamiltonian. The trace is taken over the Hilbert space of states.
One can show that this does not depend on $\beta$ and hence can be evaluated both at small $\beta$ ("large temperature expansion") or large $\beta$ ("small temperature expansion"). In one expansion one sees that it computes the trace of $(-1)^F$ on zero modes of the hamiltonian, since for a supersymmetric system the dimensions of the bosonic and fermionic positive-energy eigenstates are equal. In the other expansion one writes the Witten index in terms of a functional integral, which (when formally manipulated) becomes a geometric integral. The formal manipulations can be justified as in Getzler's proof of the local Atiyah-Singer index theorem.
The relation with the heat kernel comes from taking a particular supersymmetric model in which the hamiltonian is the laplacian. The relation with the Gauss-Bonnet theorem comes from considering a supersymmetric sigma model in which the hamiltonian is the Hodge laplacian acting on ($L^2$) differential forms. The Witten index then is computing the index of the elliptic operator $d + \delta$ from the odd to the even rank differential forms, which is the Euler characteristic of the manifold.
There are supersymmetric models for which the Witten index computes the index of a generalised Dirac operator as in the original work of Atiyah and Singer.
Witten introduced "his" index in order to study supersymmetry breaking. A nonzero value of the index shows that there exists a vacuum state (=a state of minimal energy) which is invariant under supersymmetry and hence supersymmetry is not (spontaneously) broken.
There is yet another relation between the heat kernel and the index theorem and it comes from a certain regularisation of the functional integral measure as in Fujikawa's celebrated derivation of the chiral anomaly.
The index is the number of anomalous "ghost" states in a chiral field theory. Anomalies occur when the classical/quantum symmetry correspondence breaks down under renormalization (but global anomalies are "good" compared to local ones, which in string theory constrain the dimension and fermion content). Atiyah-Singer thus helps to address questions about, e.g. why there are three generations of chiral fermions, why protons don't decay, why the electron mass is small, etc. – Steve Huntsman Feb 9 2010 at 1:20
Global anomalies render the theory inconsistent, so I'm not sure in what sense of the word 'good' you mean that they are "good". Also the dimension in string theory is determined by a local anomaly, whereas if by "fermion content" you mean the GSO projection, this is the modular invariance of the partition function on the torus, which can be rephrased indeed as the absence of a global anomaly. – José Figueroa-O'Farrill Feb 9 2010 at 10:43
I mean "good" in a relative sense, e.g. w/r/t the quark mass in QCD. – Steve Huntsman Feb 9 2010 at 14:40
The statement about supersymmetry breaking in the last is actually wrong. If the Witten index is not zero, which means there is a supersymmetric ground state with zero energy, it does NOT occur that supersymmetry is spontaneously broken. – Satoshi Nawata May 15 2012 at 5:14
@Satoshi: indeed, there is a "not" missing! I will edit. – José Figueroa-O'Farrill May 15 2012 at 19:52
For a completely elementary description of the relation between the Euler characteristic of a domain in $\mathbb{R}^2$ and the heat kernel on the domain, you could start with Mark Kac's "Can one hear the shape of a drum?". At the very end of the paper he observes that the constant term in the expansion of the trace of the heat kernel contains topological information, namely the Euler characteristic of the domain. He doesn't prove this, but indicates how you could get this from what he explained before in the paper. He left his formula as a conjecture (in 1966).
Kac, Mark (1966), "Can one hear the shape of a drum?", American Mathematical Monthly 73 (4, part 2): 1–23
As I recall (correctly?) this paper of Kac motivated McKean and Singer's work, which then informed the index theorem. – Steve Huntsman Mar 2 2010 at 23:36
By the way, the first heat kernel proof was given - if I am not mistaken - by Peter Gilkey!
The following MSRI workshop also had some introductory remarks about index theory
http://www.msri.org/calendar/workshops/WorkshopInfo/443/show_workshop
I personally would recommend Gerd Grubb's first lecture; what you are looking for is on slide 5 and onwards (up to slide 7): http://jessica2.msri.org/attachments/12917/pages/129170005.htm
Paul Loya's lectures are also interesting, by the way!
edit:
Gilkey writes in
http://mmf.ruc.dk/~Booss/recoll.pdf (an interesting article, you should read it!)
"During the course, he said that there was this wonderful invariant that Bob Seeley had constructed analytically and, ‘here is Bott’s proof of the index theorem, and somebody should actually show that this gives a heat equation proof of the index theorem.’ That struck me as a fun problem, so I went home that night and gave the heat equation proof to the Gauss-Bonnet-Theorem."
I'm not sure who was first, but the names that are associated with the heat kernel proof are Gilkey and Atiyah-Bott-Patodi. – Deane Yang Feb 9 2010 at 16:48
Gilkey was supervised by Bott (and Nirenberg)! That's no contradiction! :) – Orbicular Feb 9 2010 at 20:05
Patodi had published four or five papers on the subject before Atiyah-Bott-Patodi. Here is the opening line of the review (MR0290318) of one of the papers: The main purpose of the paper is to give an analytic proof of the Riemann-Roch-Hirzebruch theorem, in the case of a Kähler manifold, through use of the fundamental solution of the heat operator. Thus, the paper is a natural outcome of the author's previously developed method. – Chandan Singh Dalawat Jun 1 2010 at 14:02
For surfaces everything is quite simple: the starting point is of course the McKean-Singer formula for the index. The next step is to compute an asymptotic expansion of the heat kernel, but this can be done for surfaces by hand (for the terms which are important for the index, i.e. up to order 1): it is more or less the curvature of the corresponding bundle; integrating this up gives you a heat equation proof of RR or Gauss-Bonnet. (Details can be found in Roe's Elliptic operators,... book.)
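In formulas, the statement behind this (in the normalization I am used to) is that the pointwise supertrace of the heat kernels $K^{(p)}_t$ on $p$-forms has a limit $$\lim_{t\to 0}\big(\mathrm{tr}\,K^{(0)}_t(x,x)-\mathrm{tr}\,K^{(1)}_t(x,x)+\mathrm{tr}\,K^{(2)}_t(x,x)\big)=\frac{\kappa(x)}{2\pi},$$ where $\kappa$ is the Gauss curvature; integrating over the surface and using the McKean-Singer formula then gives $\chi(M)=\frac{1}{2\pi}\int_M \kappa\, dA$.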
This is a wonderful question, but I think people are having a hard time answering it because even in the simplest settings there is a lot going on and it is not clear what not a lot of background means. The simplest case would have to be proving GBC for a surface (nothing I say will be any easier for a surface, unfortunately!). First, you have to know that heat kernel here means heat kernel for the Hodge Laplace operator, which is the Laplace operator on forms $(d+d^*)^2.$ Second, it involves the supertrace of the heat kernel, which means the trace, except you add a minus sign for the parts taking odd forms to odd forms. A previous post mentions McKean and Singer, whose argument I will not repeat because it cannot be improved upon, but it argues that the eigenforms of this Laplacian with positive eigenvalue cancel out, so only the $0$ eigenfunctions contribute, so the quantity is $t$-independent and in fact the Euler number. But if it is $t$-independent, we may as well look at this quantity as $t \to 0.$ Since the heat kernel is approaching a delta function on the diagonal, the small $t$ behavior is a local question, and can be addressed by comparing it to the flat space heat kernel. This reasonably straightforward calculation gets you the curvature, or in higher dimensions the Pfaffian of the curvature (the supertrace itself contained an integral which I have not mentioned explicitly). If you are looking for a principle or technique to bring with you to other problems, I would say it is this: show that some quantity you can compute from the time evolution kernel is somehow topological and therefore time independent, then equate the small time calculation of it to the large time calculation and see a connection between two things that look unrelated.
I am not able to say much regarding how to prove RR by the Atiyah-Singer theorem, but I know Gauss-Bonnet follows naturally from the Atiyah-Singer theorem. Here is a heat kernel proof of Atiyah-Singer in case you are still looking for relevant material.
In view of the downvote I should justify myself: this article gives a proof of RR via the Atiyah-Singer index theorem, which is why I cannot say much in the answer I wrote. – Changwei Zhou May 16 2012 at 2:24
If that's what there is in the notes, then I also don't understand why the downvotes. – Amin Jun 15 at 13:48
It is on page 36. – Changwei Zhou Jun 15 at 23:00
http://mathhelpforum.com/algebra/137951-geometric-series-question.html | Thread:
1. Geometric series question
A large school board established a phone tree to contact all of its employees in case of emergencies. Each of the 3 superintendents calls 3 employees, who each in turn call 3 other employees, and so on. How many rounds of phone calls are needed to notify all 9840 employees?
The back of the book says the answer is 8 rounds, but I have no idea how to get that. Please help!
After $n$ rounds, the number of people contacted (including the original $3$) is:
$3 + 3\times3+3\times3^2+3\times3^3...+3\times3^{n-1}$
which is a Geometric Series with:
first term, $a = 3$; common ratio $r = 3$.
The sum to $n$ terms, $S_n$ is given by:
$S_n = \frac{a(r^n-1)}{r-1}$
$=\frac{3(3^n-1)}{2}$
We want $S_n \ge 9840$. So:
$\frac{3(3^n-1)}{2}\ge9840$
$\Rightarrow 3^n-1\ge6560$
$\Rightarrow 3^n\ge 6561 = 3^8$, so the smallest number of rounds is $n = 8$.
Grandad
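A quick numerical check of this count (a minimal Python sketch; the target 9840 is the figure from the problem):

```python
total = 0     # people contacted so far (including the 3 superintendents, as above)
term = 3      # people contacted in the current round
rounds = 0
while total < 9840:
    total += term
    term *= 3     # each person contacted calls 3 others in the next round
    rounds += 1
print(rounds, total)   # prints: 8 9840
```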
http://mathoverflow.net/questions/18780/is-there-a-sensible-notion-of-abstract-constructible-space | ## Is there a sensible notion of abstract constructible space?
In the past, by the term "variety" people understood a subset of projective space locally closed for the Zariski topology. Now we have a more natural notion of abstract algebraic variety, i.e. a scheme that is so and so, and we can conceive of non-quasi-projective varieties.
Now we have the concept of constructible subset of a variety (or of a scheme), i.e. a finite union of locally closed subsets (subschemes). We know that the image of a morphism of varieties may fail to be a subvariety of the target, nevertheless it's always a constructible subset thereof.
Is there a reasonable notion of "abstract constructible space"? And would it be of any utility in algebraic geometry?
Edit: A side question. If we have a map $f:X\to Y$ of, say, varieties, we can put a closed subscheme structure on the (Zariski) closure $Z$ of $f(X)$, as described in Hartshorne's book. On the other hand we can consider the ringed space (which will not be, in general, a scheme) $W$ which is the quotient of $X$ by the equivalence relation induced by $f$. Will there be any relation between $Z$ and $W$?
-
en.wikipedia.org/wiki/Constructible_sheaf Presumably then a constructible geometric space is a ringed space where the structure sheaf satisfies this property. – Harry Gindi Mar 19 2010 at 18:21
Taking quotients by equivalence relations is not even really the right idea for locally ringed spaces. Rather, the way to extend taking quotients is to consider schemes as sheaves on the opposite category of commutative rings with the étale topology (functors of points). When we take quotients in this category by étale equivalence relations, we end up with algebraic spaces, which don't really have an interpretation as locally ringed spaces. If we want to take more quotients but still end up outside of our category, this is when we generalize further to stacks. – Harry Gindi Mar 20 2010 at 6:55
@fpqc: My guess is that underlying your suggestion for a definition is the belief that constructible subsets of varieties have the property you suggest. But given a constructible subset of a variety how do you propose to put a ringed space structure on it so that the structure sheaf becomes a constructible sheaf? – Kevin Buzzard Mar 20 2010 at 9:23
My first comment was really just a guess, because I can't find his definition of a constructible scheme (yes, I looked for about 15 minutes). Every definition I found had this requirement about a twisted locally constant sheaf. My second comment is true though. – Harry Gindi Mar 20 2010 at 15:00
@fpqc: as far as I know there is no such thing as a constructible scheme, so it doesn't surprise me that you couldn't find a definition. – Kevin Buzzard Mar 20 2010 at 17:25
http://mathoverflow.net/revisions/27580/list
Post Undeleted by Greg Kuperberg
2 added 324 characters in body
Ian Morris has it essentially correct in his comment. If you solve the equation `$x = (\sin \pi y/2)^2$` for $y$, then the orbit of $y \in \mathbb{R}/\mathbb{Z}$ under `$y \mapsto 2y$` is dense if and only if $y$ is 2-normal. In other words, if every finite binary string appears in the binary expansion of $y$. Now, this is weaker than being normal in base 2, because that requires that every binary string appears equally often, not just that it appears. Let's call such a number "topologically 2-normal" (or 2-dense could be another name), because the 2-normality condition is equivalent to saying that the orbit of $y$ is not just dense, but ergodic. My impression is that not much more is known about topologically normal numbers than about normal numbers. For instance, you can certainly conjecture that any algebraic number is topologically normal in base 2 (or in any other base), but it doesn't look like it is known. In any case, topological normality is the heart of the question.
Post Deleted by Greg Kuperberg
1
Ian Morris has it essentially correct in his comment. If you solve the equation `$x = (\sin \pi y/2)^2$` for $y$, then the orbit of $y \in \mathbb{R}/\mathbb{Z}$ under `$y \mapsto 2y$` is dense if and only if $y$ is 2-normal. In other words, if every finite binary string appears in the binary expansion of $y$. If you look at the Wikipedia page for normal numbers, it's clear that there won't be a simpler description than that of any proven example. You can certainly conjecture that any reasonably simple choice of $x$ that makes $y$ irrational also makes $y$ 2-normal, but such numbers have generally not been proven to be normal.
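A small empirical illustration of the discussion above (a sketch only: the choice $x = 1/3$ is arbitrary, and looking at 3000 binary digits of course proves nothing about normality):

```python
from mpmath import mp, asin, sqrt, floor, pi

mp.dps = 1200                     # enough working precision for ~3000 correct binary digits
x = mp.mpf(1) / 3                 # a "reasonably simple" choice of x
y = 2 * asin(sqrt(x)) / pi        # inverts x = sin(pi*y/2)^2, giving y in (0, 1)

# the binary digits of y are exactly the itinerary of y under the doubling map y -> 2y (mod 1)
bits = []
z = y
for _ in range(3000):
    z = 2 * z
    b = int(floor(z))
    bits.append(str(b))
    z -= b

s = "".join(bits)
blocks = {s[i:i + 8] for i in range(len(s) - 7)}
print(f"distinct 8-bit blocks among the first 3000 digits: {len(blocks)} / 256")
```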
http://crypto.stackexchange.com/questions/3967/rsa-encryption-input-range-plaintexts-that-map-to-ciphertexts?answertab=active | # RSA encryption input range - plaintexts that map to ciphertexts?
According to the wiki article on the RSA encryption function, the valid range of input $m$ is $0 \leq m \lt n$. However I have found that the following values of $m$ always return themselves when encrypted:
• $0$
• $1$
• $n - 1$
• $n - 2$
Have I implemented the algorithm incorrectly or am I misreading the wiki article?
-
## 1 Answer
Yes, this is textbook RSA, so by definition:
$0^e \equiv 0 \pmod{n}$
$1^e \equiv 1 \pmod{n}$
$(n - 1)^e \equiv (-1)^e \equiv -1 \equiv n - 1 \pmod{n}$
(since $e$ must be invertible modulo $\varphi{(n)}$, which is even, $e$ must be odd: an even $e$ would share the factor $2$ with $\varphi{(n)}$ and hence have no inverse modulo $\varphi{(n)}$)
This is (obviously) bad, since an observer can immediately deduce the plaintext for those messages even without knowing the decryption exponent, but normally you use padding in real-life RSA, which makes those kinds of inputs essentially impossible.
However you are wrong for $n - 2$. It does not necessarily encrypt to itself. How did you get that? You must be making some mistake in your implementation, or perhaps got lucky with a few small inputs.
Counterexample, $n = 377$, $e = 17$ ($d = 257$, $\varphi{(377)} = 336$):
$(n - 2)^e \equiv 375^{17} \equiv 124 \pmod{377}$
Are you sure you are calculating $e$ right?
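A quick check of both the fixed points and the counterexample above, using Python's built-in modular exponentiation (the numbers are the ones from this answer):

```python
n, e, d = 377, 17, 257               # n = 13 * 29, phi(n) = 336, e*d = 1 (mod 336)

# inputs that always encrypt to themselves in textbook RSA
for m in (0, 1, n - 1):
    assert pow(m, e, n) == m

# but n - 2 is not a fixed point for these parameters
c = pow(n - 2, e, n)
print(c)                             # 124, not 375
assert pow(c, d, n) == n - 2         # decryption still recovers the plaintext
```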
-
Indeed. And that's why RSA is used with padding, to ensure that values like that cannot occur, and to ensure that the same message will have different encryptions at different times (which is another desirable property: the ciphertext leaks as little information as possible). – Henno Brandsma Oct 7 '12 at 14:46
http://mathoverflow.net/questions/99221?sort=oldest | ## relation with Jacobi fields in a small neighbourhood
hi,
I have the following question: let $(M,g)$ be a complete Riemannian manifold with all sectional curvatures non-positive. Let $p \in M$ and consider the function $d(x)=\mathrm{dist}_{g}(x,p)$ in a sufficiently small neighbourhood of $p$ (one can assume that this neighbourhood is actually given by normal coordinates, hence by the exponential map $\exp_{p}$). Let $\gamma : [0, \epsilon] \rightarrow M$ be a geodesic in this neighbourhood starting at $p$. Let furthermore $J$ be a Jacobi field along $\gamma$ with $J(0) = 0$, $\frac{D}{dt}|_{t=0}J = v$, and $J(\epsilon) = w \neq 0$ orthogonal to $\gamma$. Can one then make the following estimate: $\frac{d(\gamma(t))}{2} \cdot \frac{d}{dt}|J|^{2} \geq |J|^{2}$? If yes, why is this so? Hoping for answers, and thanks in advance.
greetings pascal
-
## 1 Answer
Your original question is false for the round sphere, since for the unit vector $v$ orthogonal to $\gamma'(0)$ you get $|J(t)|^2=\sin^2(t)$, while $\tan(t)>t$ for $t\in (0, \pi/2)$.
It follows immediately from the Taylor expansion for $|J(t)|$ that your inequality holds if curvature of $M$ is negative. The inequality is probably also true for manifolds of nonpositive curvature, but would require a bit more work.
Edit: Here is the proof of the inequality in the case of nonpositive sectional curvature. I will assume that $J'(0)$ is a unit vector orthogonal to $\gamma'(0)$ (since the proof in the case of the tangential Jacobi field $t\gamma'(t)$ is clear: you get the equality). Let $v(t)=|J(t)|^2$ and let $\tilde{v}=|\tilde{J}(t)|^2$, where $\tilde{J}$ is a Euclidean Jacobi field (so that $\tilde{J}(0)=0$, and $\tilde{J}'(0)$ is also a unit vector orthogonal to $\tilde{\gamma}'(0)$, where $\tilde{\gamma}$ is a Euclidean geodesic). Then you get the comparison inequality: $$v' \tilde{v} \ge v \tilde{v}'$$
for all $t$, see do Carmo's book "Riemannian Geometry", proof of Rauch comparison theorem, pages 216-217. You get: $\tilde{v}= t^2, \tilde{v}'=2t$ and the above comparison inequality becomes the inequality $$\frac{t}{2} v'\ge v$$ for all $t$, which is exactly the inequality that you are asking for (since for small $t$, $d(\gamma(t))=t$).
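A quick numerical illustration of the two model cases (a sketch: $v(t)=\sin^2 t$ is the squared norm of a unit normal Jacobi field with $J(0)=0$ in constant curvature $+1$, and $v(t)=\sinh^2 t$ the one in constant curvature $-1$):

```python
import numpy as np

t = np.linspace(0.01, 1.5, 300)

# |J(t)|^2 and its exact derivative for a normal Jacobi field with J(0) = 0, |J'(0)| = 1
v_sphere, dv_sphere = np.sin(t) ** 2,  np.sin(2 * t)    # constant curvature +1
v_hyper,  dv_hyper  = np.sinh(t) ** 2, np.sinh(2 * t)   # constant curvature -1

# the inequality in question: (d(gamma(t))/2) * d/dt |J|^2 >= |J|^2, with d(gamma(t)) = t
print(np.all(0.5 * t * dv_sphere >= v_sphere))   # False: fails on the sphere, since tan(t) > t
print(np.all(0.5 * t * dv_hyper  >= v_hyper))    # True: holds here, since tanh(t) <= t
```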
-
I see, so one will need some assumptions on the curvatures. But what if the curvatures are all non-positive? I will edit this. – pascal Jun 10 at 12:37
I don't see how the curvature comes in here? – pascal Jun 10 at 12:40
Curvature comes via Jacobi equation. – Misha Jun 10 at 13:39
yes but in this case can it be applied to the statement above ? – pascal Jun 10 at 13:55
@Pascal, look at the Taylor expansion for $|J(t)|^2$ at $t=0$. Curvature will appear as the degree 4 term. – Misha Jun 10 at 14:24
http://math.stackexchange.com/questions/286563/is-goedel-term-in-incomleteness-theorem-both-true-and-unproveable?answertab=oldest | # Is the Gödel term (in the incompleteness theorem) both true and unprovable?
In Gödel's incompleteness theorem, is the Gödel term both true and unprovable, or just unprovable and truth-neutral?
Can we add the Gödel term to the theory as an axiom and get a new theory?
Can we add the negation of the Gödel term to the theory and get a new theory? What can be said about this last theory: will it be inconsistent, but with an inconsistency that never manifests itself?
-
I think, by a 'Goedel term' $\phi$ you mean independent from the axiom system, meaning that extending that either by $\phi$ or by $\lnot\phi$ it will stay consistent. – Berci Jan 25 at 10:59
@Berci: but it is also important to note that extending by $\neg \phi$ produces a theory that is not sound, since we already know that $\phi$ is true (semantically). – Marek Jan 25 at 11:56
## 2 Answers
I am interpreting your question as being about the usual Gödel sentence $\theta$ which is interpreted as saying "there is no proof of $\theta$ from $\text{PA}$," or perhaps more formally as "$\neg ( \exists n ) ( \text{Proof}(n,\ulcorner \theta \urcorner) )$," where $\text{Proof}(x,y)$ is the predicate asserting that $x$ codes a proof (from $\text{PA}$) of the sentence coded by $y$. Note that $\theta$ has the property that if $\text{PA}$ is ($\omega$-)consistent, then $\text{PA} \not\vdash \theta$ and $\text{PA} \not\vdash \neg \theta$.
(As a disclaimer of sorts, whenever I speak of "proof" (in English) below, I mean a "proof from $\text{PA}$.")
In a certain sense, $\theta$ is neither of the options you list... at least not without some extra assumptions. As $\text{PA}$ is either consistent or inconsistent, let's look at these cases separately:
• If $\text{PA}$ is consistent (which it is probably safe to say most mathematicians believe), then as $\text{PA} \not\vdash \theta$ it follows that $\theta$ speaks the truth about itself (as there is no proof of $\theta$ from $\text{PA}$ no natural number can encode a proof of $\theta$ from $\text{PA}$).
• If $\text{PA}$ is inconsistent, then $\text{PA}$ proves everything, and in particular $\text{PA} \vdash \theta$, and so the assertion that "$\theta$ has no proof in $\text{PA}$" is false, i.e., $\theta$ is false. (We can find a proof of $\theta$ from $\text{PA}$, and convert this proof into a number witnessing $( \mathbb{N} , ... ) \models \neg \theta$.)
To summarise: if $\text{PA}$ is consistent, then $\theta$ is an unprovable sentence which is true; if $\text{PA}$ is inconsistent, then $\theta$ is a provable sentence which is false.
The first case is the interesting one for the remainder of your questions. (If $\text{PA}$ is inconsistent, then so are $\text{PA} + \theta$ and $\text{PA} + \neg \theta$.) Recall that if $T$ is any theory and $\phi$ is any sentence, then $T + \phi$ is consistent iff $T \not\vdash \neg \phi$. So if $\text{PA}$ is consistent we have both $\text{PA} \not\vdash \theta$ and $\text{PA} \not\vdash \neg\theta$, and so both $\text{PA} + \neg \theta$ and $\text{PA} + \theta$ are consistent.
Perhaps the more interesting sentence regarding the second and third questions is $\text{Con} ( \text{PA} )$, which expresses the consistency of $\text{PA}$; something to the effect of $\neg ( \exists n ) ( \text{Proof} ( n , \ulcorner 0 = 0 \wedge \neg 0 = 0 \urcorner ) )$. This is another sentence known to be independent of $\text{PA}$, provided that $\text{PA}$ is consistent. Following similar reasoning to the above, if $\text{PA}$ is consistent, then $\text{Con} ( \text{PA} )$ is a true (unprovable) sentence, and if $\text{PA}$ is inconsistent, then $\text{Con} ( \text{PA} )$ is a false (provable) sentence. Again, in the former case both $\text{Con} ( \text{PA} )$ and $\neg \text{Con} ( \text{PA} )$ may be appended to $\text{PA}$ to yield a consistent theory.
Looking at the consistency of $\text{PA} + \neg \text{Con} ( \text{PA} )$, recall that this just means, via Gödel's Completeness Theorem, that it has some model $\mathcal{M}$. As $\mathcal{M} \models \neg \text{Con} ( \text{PA} )$, then there is some $a \in \mathcal{M}$ such that $\mathcal{M} \models \text{Proof} ( a , \ulcorner 0=0 \wedge \neg 0=0 \urcorner )$, i.e., $\mathcal{M}$ "thinks" that $a$ codes a proof of $0=0 \wedge \neg 0=0$. However this $a$ will not correspond to any real natural number, so we cannot translate this object into a real proof of $0=0 \wedge \neg 0=0$. ($\mathcal{M}$ will be a nonstandard model of $\text{PA}$, and will contain objects which you can think of as "infinitely big natural numbers;" $a$ will be one of these.)
-
You say "If $\text{PA}$ is consistent ... it followes that $\theta$ speaks the truth about itself" and "If $\text{PA}$ is consistent ... then $\text{PA}+\neg \theta$ is consistent". Is the following false? "If $\text{PA}+\neg \theta$ is consistent it followes that $\theta$ speaks the truth about itself". I ask because if this is not the case, then it seems the expression "speaks the truth about itself" is pretty meaningless. – Nick Kidman Jan 25 at 12:17
@NickKidman: I am a bit confused about exactly what you are asking, but perhaps this clears up something. Note that in all of these matters we give special preference to the system of "real" natural numbers $\mathcal N=(\mathbb N,\ldots)$, and when we mention "truth" it is with respect to this system. Now if $\text{PA}$ is consistent, then $\mathcal N\models\theta$ (no real natural number codes a proof of $\theta$), and so while $\text{PA}+\neg\theta$ is consistent, the system $\mathcal N$ cannot be a model of it. As in my very last paragraph models of this system will be nonstandard. – Arthur Fischer Jan 25 at 13:16
Mhm, okay, I get an idea of the argumentation, although I feel the "real N" thing is a little fuzzy, as I don't know how to know when I've got the real one. – Nick Kidman Jan 25 at 13:54
@NickKidman: If you want to talk about "truth" you have to talk about "truth" within a model, so without some model asking about the "truth" of the Gödel sentence $\theta$ is meaningless. Interpreted in second-order logic the Peano axioms are actually categorical, and so they pick out an (up to isomorphism) unique model, which we can call the real natural numbers. [cont...] – Arthur Fischer Jan 25 at 14:46
– Arthur Fischer Jan 25 at 14:46
I am not an expert on these matters, but I think I can answer all of your questions, so here goes.
1. We assume the theory $T$ is strong enough to prove arithmetical truths; the technical qualifications are not that important here. The Goedel term $G$ for a theory $T$ informally states that '$G$ cannot be proved within $T$'. If $T$ is consistent (which we assume, otherwise $T$ would not be very interesting) then $T$ cannot prove $G$ (we'd get a contradiction). Therefore the term $G$ is actually true $-$ it really can't be proved within $T$.
2. We can add $G$ to $T$ as an axiom to obtain a new theory $T'$. There is no problem doing this. We knew $G$ was true, so we are just enlarging $T$ so that it is more powerful and can prove more true statements about the world. $T'$ can now prove $G$ (trivially) but it is still not complete, since Goedel's theorem produces a new true term $G'$ that $T'$ can't prove.
3. We can add $\neg G$ to $T$ as an axiom to obtain a new theory $T'$. This new theory will still be consistent if $T$ was, because $T$ itself can't prove $G$ and obviously $\lnot G$ won't make proving $G$ any easier. Nevertheless, we know that $G$ was true. Therefore this new theory $T'$ is not sound $-$ it is proving theorems that are not true.
-
What does "not sound" mean? – Suzan Cioc Jan 25 at 12:27
@Suzan: in logic you have two sides: syntactic and semantic. Syntactic is what you can prove formally, while semantic is what is really true (in some model). Ideal theory for a given model should be both sound (you can only prove syntactically what is true semantically) and complete (you can prove syntactically everything that is true semantically). So by adding $\neg G$ you obtain a theory that is not sound anymore (relative to the model for $T$ you were working with previously). – Marek Jan 25 at 13:22
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.bams/1183485660 | ### Determination of all systems of $\infty ^4$ curves in space in which the sum of the angles of every triangle is two right angles
Jesse Douglas
Source: Bull. Amer. Math. Soc. Volume 29, Number 8 (1923), 356-366.
Full-text: Open access
http://mathoverflow.net/questions/74450?sort=votes | ## proper action and amenable action
An action of a (discrete) group $G$ on a locally compact space $X$ is called proper if the map from $G\times X$ to $X\times X$ defined by $(g,x)\mapsto (gx,x)$ is proper. Why is a proper action amenable? (See "On the Baum-Connes assembly map for discrete groups" by Alain Valette, proof of Lemma 2.13.) If this is the case, the full and reduced crossed products for $C_0(X)$ are isomorphic.
-
## 1 Answer
Look at that paper by C. Anantharaman-Delaroche: http://www.univ-orleans.fr/mapmo/membres/anantharaman/publications/Exactness02.pdf
In Prop. 2.2, point (2), you find an equivalent condition for amenability of the $G$-action on $X$, in terms of the existence of a net $(g_i)$ of continuous, non-negative functions on $X\times G$. Now, if $X$ is a proper $G$-space, you find a Bruhat function on $X$, i.e. a continuous non-negative function $h$ on $X$ such that $\sum_{t\in G}h(t^{-1}x)=1$. Define then $g_i(x,t)=h(t^{-1}x)$. If I'm not mistaken, the conditions in Anantharaman's result are satisfied.
-
thanks a lot. Does a Bruhat function exist if the proper space X is also G-compact? What about the converse: is an amenable action proper? – m07kl Sep 3 2011 at 20:53
OK, I see. For a locally compact paracompact Hausdorff space a proper action is amenable. To see this: if X is G-compact, then we can choose the Bruhat function to be compactly supported. If X is not G-compact, the Bruhat function comes from the G-compact case by a partition of unity, since a paracompact Hausdorff space admits partitions of unity. In this case the Bruhat function is not compactly supported, but the intersection of its support with any G-compact set in X is compact. (We also need Urysohn's lemma to construct the Bruhat function, but a paracompact Hausdorff space is normal.) – m07kl Sep 3 2011 at 21:47
This implies also that for a proper G-C*-algebra the reduced and full crossed products coincide by Theorem 5.3 in C. Anantharaman-Delaroche – m07kl Sep 3 2011 at 21:59
About the converse: if $G$ is amenable and infinite, the action of $G$ on a point is amenable but not proper. – Alain Valette Sep 3 2011 at 22:19
thanks for answer – m07kl Sep 4 2011 at 13:02
http://stats.stackexchange.com/questions/15664/how-to-test-for-differences-between-two-group-means-when-the-data-is-not-normall | How to test for differences between two group means when the data is not normally distributed?
I'll eliminate all the biological details and experiments and quote just the problem at hand and what I have done statistically. I would like to know if it's right, and if not, how to proceed. If the data (or my explanation) isn't clear enough, I'll try to explain better by editing.
Suppose I have two groups/observations, X and Y, with size $N_x=215$ and $N_y=40$. I would like to know if the means of these two observations are equal. My first question is:
1. If the assumptions are satisfied, is it relevant to use a parametric two-sample t-test here? I ask this because, from my understanding, it's usually applied when the sample size is small?
2. I plotted histograms of both X and Y and they were not normally distributed, which violates one of the assumptions of a two-sample t-test. My confusion is that I consider them to be two populations, and that's why I checked for normal distribution. But then I am about to perform a two-SAMPLE t-test... Is this right?
3. From the central limit theorem, I understand that if you perform sampling (with/without repetition depending on your population size) multiple times and compute the average of the samples each time, then it will be approximately normally distributed. And the mean of these random variables will be a good estimate of the population mean. So, I decided to do this on both X and Y, 1000 times, obtained samples, and assigned a random variable to the mean of each sample. The plot was very much normally distributed. The means of X and Y were 4.2 and 15.8 (the same as the population values ± 0.15) and the variances were 0.95 and 12.11.
I performed a t-test on these two observations (1000 data points each) with unequal variances, because they are very different (0.95 and 12.11). And the null hypothesis was rejected.
Does this make sense at all? Is this a correct/meaningful approach, or is a two-sample z-test sufficient, or is it totally wrong?
4. I also performed a non-parametric Wilcoxon test just to be sure (on original X and Y) and the null hypothesis was convincingly rejected there as well. In the event that my previous method was utterly wrong, I suppose doing a non-parametric test is good, except for statistical power maybe?
In both cases, the means were significantly different. However, I would like to know if either or both the approaches are faulty/totally wrong and if so, what is the alternative?
-
2 Answers
The idea that the t-test is only for small samples is a historical hold over. Yes it was originally developed for small samples, but there is nothing in the theory that distinguishes small from large. In the days before computers were common for doing statistics the t-tables often only went up to around 30 degrees of freedom and the normal was used beyond that as a close approximation of the t distribution. This was for convenience to keep the t-table's size reasonable. Now with computers we can do t-tests for any sample size (though for very large samples the difference between the results of a z-test and a t-test are very small). The main idea is to use a t-test when using the sample to estimate the standard deviations and the z-test if the population standard deviations are known (very rare).
The Central Limit Theorem lets us use normal-theory inference (t-tests in this case) even if the population is not normally distributed, as long as the sample sizes are large enough. This does mean that your test is approximate (but with your sample sizes, the approximation should be very good).
The Wilcoxon test is not a test of means (unless you know that the populations are perfectly symmetric and other unlikely assumptions hold). If the means are the main point of interest then the t-test is probably the better one to quote.
Given that your standard deviations are so different, and the shapes are non-normal and possibly different from each other, the difference in the means may not be the most interesting thing going on here. Think about the science and what you want to do with your results. Are decisions being made at the population level or the individual level? Think of this example: you are comparing 2 drugs for a given disease. On drug A, half the sample died immediately and the other half recovered in about a week; on drug B, all survived and recovered, but the time to recovery was longer than a week. In this case would you really care about which mean recovery time was shorter? Or replace the half dying in A with just taking a really long time to recover (longer than anyone in the B group). When deciding which drug I would want to take, I would want the full information, not just which was quicker on average.
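If the means really are the quantity of interest, the two tests discussed in this thread can be applied directly to the raw samples; a minimal sketch (the gamma samples below are synthetic stand-ins for X and Y, with the sizes 215 and 40 from the question, and the Wilcoxon rank-sum test is run in its Mann–Whitney form):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# synthetic right-skewed samples standing in for the real data
x = rng.gamma(shape=2.0, scale=2.0, size=215)   # mean around 4
y = rng.gamma(shape=2.0, scale=8.0, size=40)    # mean around 16, much larger variance

t_stat, t_p = stats.ttest_ind(x, y, equal_var=False)               # Welch's t-test
u_stat, u_p = stats.mannwhitneyu(x, y, alternative="two-sided")    # Wilcoxon rank-sum

print(f"Welch t-test:   t = {t_stat:.2f}, p = {t_p:.2g}")
print(f"Mann-Whitney U: U = {u_stat:.0f}, p = {u_p:.2g}")
```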
-
Thank you Greg. I assume there's nothing wrong with the procedure per-se? I understand that I might not be asking the right question, but my concern is equally about the statistical test/procedure and understanding itself given two samples. I'll check if I am asking the right question and come back with questions, if any. Maybe if I explain the biological problem, it would help with more suggestions. Thanks again. – Arun Sep 16 '11 at 20:08
One addition to Greg's already very comprehensive answer.
If I understand you the right way, your point 3 states the following procedure:
• Observe $n$ samples of a distribution $X$.
• Then, draw $m$ of those $n$ values and compute their mean.
• Repeat this 1000 times, save the corresponding means
• Finally, compute the mean of those means and assume that the mean of $X$ equals the mean computed that way.
Now your assumption is that for this mean the central limit theorem holds and the corresponding random variable will be normally distributed.
Maybe let's have a look at the math behind your computation to identify the error:
We will call your samples of $X$ $X_1,\ldots,X_n$, or, in statistical terminology, you have $X_1,\ldots, X_n\sim X$. Now, we draw samples of size $m$ and compute their mean. The $k$-th of those means looks something like this:
$$Y_k=\frac{1}{m}\sum_{i=1}^m X_{\mu^k_{i}}$$
where $\mu^k_i$ denotes the value between 1 and $n$ that has been drawn at draw $i$. Computing the mean of all those means thus results in
$$\frac{1}{1000}\sum_{k=1}^{1000} \frac{1}{m}\sum_{i=1}^m X_{\mu^k_{i}}$$
To spare you the exact mathematical terminology just take a look at this sum. What happens is that the $X_i$ are just added multiple times to the sum. All in all, you add up $1000m$ numbers and divide them by $1000m$. In fact, you are computing a weighted mean of the $X_i$ with random weights.
Now, however, the Central Limit Theorem states that the sum of a lot of independent random variables is approximately normal (which implies that the mean is also approximately normal).
Your sum above does not consist of independent samples. You may have random weights, but that does not make your samples independent at all. Thus, the procedure written in 3 is not valid.
However, as Greg already stated, using a $t$-test on your original data may be approximately correct - if you are really interested at the mean.
-
Thank you. It seems t-test already takes care of the problem using CLT (from greg's reply which I overlooked). Thanks for pointing that out and for the clear explanation of 3) which is what I actually wanted to know. I'll have to invest more time to grasp these concepts. – Arun Sep 17 '11 at 12:37
1
Keep in mind that the CLT performs better or worse depending on the distribution at hand (or, even worse, the expected value or the variance of the distribution may not exist - then the CLT is not even valid). If in doubt, it is always a good idea to generate a distribution that looks similar to the one you observed and then simulate your test using this distribution a few hundred times. You will get a feeling for the quality of the approximation the CLT supplies. – Thilo Sep 17 '11 at 18:12
http://math.stackexchange.com/questions/155528/a-center-of-a-graph-for-example-a-tree-lies-on-its-longest-path | # A center of a graph (for example a tree) lies on its longest path
Prove that a center of a tree (or, if not much harder, of any graph) lies on a longest path.
(I encountered this while reading an alternative proof of the property: "a tree has at most two centers".)
-
## 2 Answers
I think it goes like this. Let $c$ be the center. Let $P$ be a longest path. If $c$ is not on $P$ then there is a path from $c$ to some vertex $v$ on $P$ (I'm assuming the graph is connected, else I think the whole concept of center doesn't apply). Now I think you can prove that $v$ is more central than $c$, that is, that if the distance from $v$ to the farthest point exceeds the distance from $c$ to its farthest point then $c$ is on a path at least as long as $P$, contradiction.
I acknowledge that I have left some details to fill in.
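For trees, the claim is easy to sanity-check by brute force; a sketch (random trees are built by attaching each new vertex to a random earlier one, and a longest path is found by the standard double-BFS trick):

```python
import random
import networkx as nx

def random_tree(n, rng):
    G = nx.Graph()
    G.add_node(0)
    for v in range(1, n):
        G.add_edge(v, rng.randrange(v))   # attach v to a random earlier vertex
    return G

rng = random.Random(0)
for _ in range(200):
    T = random_tree(rng.randint(2, 60), rng)
    # double BFS: a farthest vertex u from any start, then a farthest vertex w from u,
    # gives a longest (diameter) path of the tree
    u = max(nx.shortest_path_length(T, 0).items(), key=lambda kv: kv[1])[0]
    w = max(nx.shortest_path_length(T, u).items(), key=lambda kv: kv[1])[0]
    longest_path = set(nx.shortest_path(T, u, w))
    assert all(c in longest_path for c in nx.center(T))
print("every center was on the computed longest path")
```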
-
Thanks! I constructed a longer path based on the assumption. – user31899 Jun 9 '12 at 4:55
You might find these items of interest: http://orion.math.iastate.edu/axenovic/Papers/Path-Transversals.pdf and http://www.math.uiuc.edu/~west/openp/pathtran.html
http://mathhelpforum.com/algebra/187877-synthetic-division.html | Thread:
1. Synthetic division
Find the values of real numbers $a$ and $b$ such that:
$x^4 +4x^3+ax^2-b$ is divisible by both $(x-1)$ and $(x+2)$
Do I need to extract the factors (x-1) and (x+2) FIRST and then try to solve for a and b or how do I start?
Also, the answers provided are a = 7 and b = 12 but when I try to divide $(x-1)$ into $x^4+4x^3+7x^2-12$ I get a remainder of 1 which means that it does not divide it and a and b are incorrect values right?
2. Re: Synthetic division
If you make two Horner schemes, one with the value 1 and the other with -2 (and at the end set the remainder equal to 0), then you arrive at two simultaneous equations in the variables a and b which you can solve directly.
(Notice: the coefficient of x is 0).
3. Re: Synthetic division
Ahh turns out I was just making errors in my synthetic division :S
Another method I used to get a = 7 and b = 12 is to just sub 1 and -2 into the original equation for x and solve these two equations simultaneously. Then I subbed these values in and divided by (x-1) and then (x+2) to get zero remainder solutions.
Why does subbing in these values (1 and -2) for x allow you to find a and b? I can see that it works and I can see that they are derived from the (x-1) and (x+2) terms but I don't see the logic behind it.
Thanks.
4. Re: Synthetic division
So have you tried it with the Horner scheme?
1) divisor 1, the remainder becomes: $-b+a+5$
And this remainder has to be equal to 0 if the polynomial is divisible by the factor (x-1).
2) divisor -2, the remainder becomes: $-b+4a-16$
And this remainder has to be equal to 0 if the polynomial is divisible by the factor (x+2).
Therefore we get a system of simultaneous equations, if you multiply the second (or first) equation with a factor -1, then you can cancel out the variable b out and so find a and afterwards find b.
5. Re: Synthetic division
I am watching a video on it now I had never heard of it before.
6. Re: Synthetic division
But of course, as you already said, an easier way is just entering the zeros into the equation, and so you get the same system:
1) 1 is a zero, so: $1+4+a-b=0 \Leftrightarrow 5+a-b=0$
2) -2 is a zero, so: $16-32+4a-b=0 \Leftrightarrow -16+4a-b=0$
So the same system.
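A quick symbolic check of the system derived in this thread (a sketch using sympy; the polynomial and the two factors are the ones from the original question):

```python
from sympy import symbols, div, solve

x, a, b = symbols('x a b')
p = x**4 + 4*x**3 + a*x**2 - b

# remainders on division by (x - 1) and (x + 2), as expressions in a and b
r1 = div(p, x - 1, x)[1]            # a - b + 5
r2 = div(p, x + 2, x)[1]            # 4*a - b - 16
print(solve([r1, r2], [a, b]))      # {a: 7, b: 12}

# sanity check: with a = 7 and b = 12 both remainders vanish
q = p.subs({a: 7, b: 12})
print(div(q, x - 1, x)[1], div(q, x + 2, x)[1])   # 0 0
```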
http://scientopia.org/blogs/skullsinthestars/2010/05/04/singular-optics-light-chasing-its-own-tail/
# Singular Optics: Light chasing its own tail
(Title stolen shamelessly from my postdoctoral advisor, who I assume will forgive me.)
As I've noted numerous times in previous posts, one of the fundamental properties that characterizes wave behavior (i.e. that makes a wave a wave) is wave interference. When two or more waves combine, they produce local regions of higher brightness (constructive interference) and lower brightness (destructive interference), the latter involving a partial or complete "cancellation" of the wave amplitude.
Researchers have long noted that the regions of complete destructive interference of wavefields, where the brightness goes exactly to zero, have a somewhat regular geometric structure, and that the wavefield itself has unusual behavior in the neighborhood of these zeros. In the 1970s this structure and behavior was rigorously described mathematically, and further research on this and related phenomena has become its own subfield of optics known as singular optics. Singular optics has introduced a minor "paradigm shift" of sorts to theoretical optics, in which researchers have learned that the most interesting parts of a light wave are often those places where there is the least amount of light!
In this post we'll discuss the basic ideas of singular optics; to begin, however, we must point out that most people have the wrong idea of what a "typical" interference pattern looks like!
The canonical demonstration of interference is Young's double slit experiment, in which a spatially coherent light beam is incident upon a pair of holes (slits or small "pinholes") in an opaque screen. The light emanating from the two holes interferes as it propagates beyond, and the interference pattern can be observed on a second screen:
As can be seen from the second image above, the interference pattern observed on the screen is a set of bright vertical bands (constructive interference) separated by dark vertical bands (destructive interference). It can be shown that there are vertical lines in the center of the dark vertical bands on which the light intensity is exactly zero: these lines are regions of complete destructive interference. They are highlighted in red in the following figure:
If we move the measurement screen, we find that the interference pattern remains essentially unchanged: in three-dimensional space, therefore, the regions of complete destructive interference are vertically-oriented surfaces.
This experiment is so familiar that we tend to think of "bright and dark bands" whenever the word "interference" is used; but does every interference experiment produce a similar result?
Let us make the simplest possible change to Young's interferometer that we can imagine: we add a single extra hole, making it a "Young's three-pinhole interferometer"! The system and a typical interference pattern is illustrated below:
The intensity on the screen is a hexagonal pattern of bright spots interlaced with a hexagonal pattern of dark spots! It can be shown that the centers of the darkest spots in the pattern are points of complete destructive interference; these zeros are illustrated as red dots for clarity below:
If we move the measurement screen, we find that the basic form of the interference pattern doesn't change: in three-dimensional space, therefore, the regions of complete destructive interference are horizontally-oriented lines, and the pattern is fundamentally different than that of Young's two pinhole interferometer.
What happens if we add even more pinholes? The overall structure of the interference pattern will change, but the regions of complete destructive interference are still lines. It seems that a "typical" interference pattern is one in which lines of zero intensity are present. In technical terms, it is said that zero lines are a generic feature of wavefields; the zero surfaces of Young's interference experiment are non-generic.
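Both patterns are easy to reproduce numerically; a minimal sketch (the wavelength, pinhole spacing, and screen geometry below are illustrative choices, not values from any particular experiment):

```python
import numpy as np
import matplotlib.pyplot as plt

wavelength = 500e-9                  # 500 nm light
k = 2 * np.pi / wavelength
L = 1.0                              # aperture-to-screen distance (m)
a = 50e-6                            # pinhole spacing (m)

x = np.linspace(-0.03, 0.03, 600)    # screen coordinates (m)
X, Y = np.meshgrid(x, x)

def pattern(pinholes):
    """Superpose spherical waves from point-like pinholes; return intensity on the screen."""
    field = np.zeros_like(X, dtype=complex)
    for px, py in pinholes:
        d = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + L ** 2)
        field += np.exp(1j * k * d) / d
    return np.abs(field) ** 2

two = pattern([(-a / 2, 0.0), (a / 2, 0.0)])
three = pattern([(a * np.cos(2 * np.pi * j / 3), a * np.sin(2 * np.pi * j / 3)) for j in range(3)])

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(two, cmap="gray")
axes[0].set_title("two pinholes: dark lines")
axes[1].imshow(three, cmap="gray")
axes[1].set_title("three pinholes: dark points")
plt.show()
```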
Under what circumstances does one see a "generic" interference pattern? A very familiar example is the speckle pattern produced when a laser beam reflects off of a rough surface (picture from Wikipedia, brightness adjusted):
The countless dark spots are points of complete destructive interference, which are lines of zeros in three-dimensions. The rough surface scatters light in all directions with different phases, acting in essence as a Young interferometer with a large number of randomly-placed "holes". Natural light does not produce such a speckle pattern because of its low spatial and temporal coherence.
The really intriguing aspect of the lines of complete destructive interference, however, is the behavior of the phase of the light wave in the neighborhood of these lines. To describe this, we first briefly review what we mean by the phase of a wave, restricting ourselves to single-frequency (monochromatic) waves.
Let us start by considering monochromatic waves on a string, as might be generated by waving the end of the string up and down in a regular manner. A snapshot of the string at an instant of time would look something like this, where the arrow indicates the overall direction of motion:
Because the "waving" of the string repeats itself regularly, we can characterize its state by an angle between 0 and 360°, with 0 representing the situation when the wave is at its peak:
This angle is what we refer to as the phase φ of the wave. At φ = 0° the wave is at its highest; at φ = 180° the wave is at its lowest. At φ = 270° the wave has no height but is moving upward; at φ = 90° the wave also has no height but is moving downward. For future reference, we refer to those regions for which φ = 0° as the wavefronts of the wave.
We can also define a phase for an optical wave in three-dimensional space. The simplest is a wave where the wavefronts are equally spaced planes in 3-D space; such a wave is known as a plane wave (here shown moving to the left):
A more complicated wave, such as light that has been distorted by passing through a piece of curved glass, will in general have curved wavefronts. For instance, a top-down view of the behavior of the wavefronts on passing through a lens is illustrated crudely below:
Light is slowed down inside the glass of the lens, and the part of the wave that travels the farthest in the glass is delayed the longest, resulting in a net curvature of the wavefronts.
So what do the wavefronts look like around a generic zero line? There are loosely two possibilities, depending on the orientation of the zero line with respect to the overall direction of the wave.
Let us first consider the case where the zero line lies in the same direction the wave is traveling. If one looks at the wavefronts near the zero line, they actually form a helix around the central line:
If we were to make a movie of the motion of the wavefronts, we would find that they twist upwards in a right-handed sense, like a drill bit or the tip of a screw as it is screwed in. This structure is referred to as a screw dislocation.
We can also consider the case where the zero line is perpendicular to the direction of motion. The wavefronts near the zero line have the following (rough) appearance:
If we look at a cross-section of this wave, we find that the zero line marks the beginning of a new wavefront which is wedged between two other wavefronts:
The blue dot indicates the location of the zero line. This sort of structure is referred to as an edge dislocation.
There also can exist mixed dislocations, which are partly like a screw dislocation and partly like an edge dislocation; such structures occur when the zero line is intermediate between parallel and perpendicular to the direction of wave propagation.
The term dislocation is taken from the physics of crystal structures; the edge and screw dislocations of light are analogous to the types of defects that can occur in crystals. An edge dislocation in a crystal is the case when an extra row of atoms is "wedged" into an otherwise regular crystal structure (source):
A screw dislocation in a crystal appears when the crystal is "torn" and the rows are shifted along the line of the tear. The picture below shows an edge dislocation in the top row and a screw dislocation in the bottom row (source):
This analogy between dislocations in crystals and the structure of the phase near zero lines of light intensity was first noticed by Nye and Berry in their seminal 1974 paper [1] on the subject, "Dislocations in wave trains."
In addition to studying the structure of the wavefronts, we may look at the behavior of the phase of the field near the zero lines. Let us plot the phase in a plane perpendicular to the zero line; it turns out the image is the same for an edge or a screw dislocation:
The phase increases continuously as one traces a counterclockwise path around the zero point in the center. All possible values of the phase come together at that central point, however, implying that there is no unique value of the phase along the zero line. For this reason, the zero line is referred to as a phase singularity, and the general study of such objects is referred to as singular optics.
So the zero line can be referred to as a dislocation line, or a phase singularity: let's give it one more name! Referring again to the screw dislocation, we noted that, as time progresses, the wavefronts circulate or swirl around the zero line: in a very real sense, the light wave is "going in circles" around that central axis, very much like water swirling around a drain. For this reason, these structures are also referred to as optical vortices.
These optical vortices are the generic, or typical, features of a wave interference pattern. For instance, the following figure shows the intensity and phase of a speckle pattern produced by the interference of four randomly-oriented plane waves:
We can see numerous points of zero intensity, some of which are circled for clarity. We can see the corresponding circles on the phase plot encircle structures that match the picture of the single vortex whose phase structure was depicted earlier.
Note the difference between the vortices in the solid circles and the vortices in the dashed circles. If one follows a counterclockwise path along the dashed circle, the phase increases by 360°, but if one follows a counterclockwise path along a solid circle, the phase decreases by 360°. Evidently there are two types of vortices; we dub the former right-handed vortices, and the latter left-handed vortices. It is also possible to create vortices of higher order, where the phase increases or decreases by an integer multiple of 360° as one traces a counterclockwise path; these vortices are non-generic, and do not appear "naturally".
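The four-plane-wave speckle field just described is easy to generate numerically, and the vortices can be located by checking the winding of the phase around each grid plaquette; a minimal sketch (grid size, random seed, and wave parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512
x = np.linspace(-10.0, 10.0, N)
X, Y = np.meshgrid(x, x)

# superpose four unit-amplitude plane waves with random directions and phases
field = np.zeros((N, N), dtype=complex)
for _ in range(4):
    theta = rng.uniform(0, 2 * np.pi)
    field += np.exp(1j * (np.cos(theta) * X + np.sin(theta) * Y + rng.uniform(0, 2 * np.pi)))

phase = np.angle(field)

def wrap(a):
    """Wrap phase differences into [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

# net phase change around each elementary square of the grid: +/- 2*pi marks a vortex
circulation = (wrap(np.diff(phase, axis=1)[:-1, :])     # one edge of the square
               + wrap(np.diff(phase, axis=0)[:, 1:])    # next edge
               - wrap(np.diff(phase, axis=1)[1:, :])    # opposite edge, reversed
               - wrap(np.diff(phase, axis=0)[:, :-1]))  # last edge, reversed
charge = np.rint(circulation / (2 * np.pi)).astype(int)

print("vortices of charge +1:", int(np.sum(charge == 1)))
print("vortices of charge -1:", int(np.sum(charge == -1)))
```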
These vortices have a number of remarkable properties. First, it is to be noted that the generic vortices are stable under perturbations of the wavefield; that is, small modifications of the wavefield do not make vortices "disappear", or change their handedness. Such modifications might involve transmission or reflection of the wavefield through a slightly distorted piece of glass, or the increase/decrease of the aperture through which a light wave is transmitted [2]. The location of the vortex in the wavefield and its orientation relative to the direction of motion might change, but the vortex itself will be fundamentally unchanged.
Second, we may associate a discrete "charge", referred to as the topological charge, with any particular vortex. This topological charge is simply the number of times the phase increases by 360° as one circles the vortex line; it can be a positive or negative value.
Third, related to the stability of vortices, this topological charge is conserved: the only way that vortices can disappear from a wavefield is if two vortices of opposite charge come together and annihilate. Similarly, the only way new vortices can appear in a wavefield is in a "creation event" involving oppositely charged pairs.
This pair creation/annihilation is strikingly analogous to the creation/annihilation of fundamental particles in high-energy physics. Like vortices, fundamental particles can only be annihilated if they are brought together with their corresponding antiparticle, e.g. an electron is annihilated when brought together with a positron. No modern scientists have looked at vortex behavior as the basis of a unified theory of particle physics, but the idea has some history associated with it [3].
For experimental studies and practical applications, to be discussed momentarily, it is relatively straightforward to "imprint" a laser beam with a single optical vortex at its center. We have already noted above how a lens bends wavefronts by selectively slowing different parts of the wavefront; a vortex beam can be created by passing a laser beam through a spiral phase plate, an example of which is shown below (source):
This particular plate is designed to create a vortex in electromagnetic waves with a millimeter-scale wavelength; for optical waves, the plate must be much smaller (and more transparent), and typically can only be fabricated crudely with a "spiral staircase" shape. If designed with the appropriate thickness, the wave passing through the thickest part of the plate ends up a full 360° out of phase with the wave that passes through the thinnest part. The result is a so-called "donut" laser mode, which has a head-on intensity and phase pattern as follows:
The phase should look familiar; it is simply the phase pattern of a single vortex, as depicted earlier.
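As a quick numerical sketch of these ideas (an illustrative toy model, not any of the experimental fields discussed here): take u(x,y) = (x + iy)·exp(−(x² + y²)/w²) as a minimal "donut" field, with w an assumed beam width. The intensity vanishes on the axis, and the topological charge can be read off by accumulating the phase change around a small counterclockwise loop:

```python
import numpy as np

w = 1.0  # assumed beam width, arbitrary units
u = lambda x, y: (x + 1j * y) * np.exp(-(x**2 + y**2) / w**2)

print(abs(u(0.0, 0.0)))  # 0.0 -- the intensity is zero on the axis

# Accumulate the phase change along a small counterclockwise circle around the axis.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
ph = np.angle(u(0.5 * np.cos(t), 0.5 * np.sin(t)))
steps = np.angle(np.exp(1j * np.diff(np.append(ph, ph[0]))))  # wrap each step into (-pi, pi]
charge = np.sum(steps) / (2.0 * np.pi)
print(round(charge))  # 1 -- the phase increases by 360 degrees once around the axis
```

Replacing (x + iy) with (x − iy) gives −1, the oppositely handed vortex; higher powers give the higher-order, non-generic vortices mentioned above.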
So why are we interested in such vortex laser beams, and optical vortices in general? One of the most fascinating observations is that vortex beams with a net nonzero topological charge possess orbital angular momentum.
In a recent and very nice post, Chad at Uncertain Principles discussed an early experiment demonstrating that circularly polarized light possesses "spin" angular momentum. This angular momentum can actually be used to spin objects -- the light imparts a "twist" to the object. As we have seen, an optical vortex also has a "twist" associated with its phase, and it can be shown that it possesses orbital angular momentum. The difference between "spin" and "orbital" angular momentum can be understood by considering the motion of the Earth around the sun: the Earth's rotation on its axis is its "spin" angular momentum and its orbit around the Sun is its "orbital" angular momentum.
This orbital angular momentum can also be used to rotate objects, in particular microscopic particles. In a supplemental technique to optical tweezing, in which a focused light beam is used to trap and move particles, a vortex beam can be used as an optical spanner to rotate the particles. The most clever application of this effect that I've seen to date was presented in a 2004 paper in Optics Express entitled, "Microoptomechanical pumps assembled and driven by holographic optical vortex arrays"4. (The paper should be open access, and includes experimental videos.) The paper describes a microscopic liquid pump driven by six vortices. The vortex beams trap microscopic beads and force them to move in a circular path; the motion of the beads, in turn, forces liquid through the channel between the vortices:
I haven't seen any follow-ups to this work in recent years, but the possibility of making microscopic light driven machines is clearly an intriguing one.
An understanding of vortices is also important for so-called "phase retrieval" problems. The phase of a light field contains much important information for imaging and sensing applications, but visible light oscillates too rapidly for this phase to be directly measured. Numerous computational techniques have been devised to estimate the phase from measurements of the intensity of the field, but these techniques are often thwarted by the presence of optical vortices (the phase structure of a field with many vortices ends up looking like a very confusing multilevel parking garage).
The discreteness of topological charge and the stability of optical vortices under perturbation has made a number of researchers suggest that vortices might be used as digital "bits" in a free-space optical communications system. In principle, the use of vortices of different order would allow one to send multiple bits of information simultaneously in a communications channel, and the stability of the vortices could be used to transmit information without significant loss through atmospheric turbulence. (I've done some work on optical vortices through turbulence with this application in mind5.)
Finally, we note that the study of singular optics provides a unifying framework for understanding many aspects of wave phenomena beyond the optical phase. For instance, it has been known since the 1950s that vortices may also appear in the power flow of electromagnetic waves. An example of this is from another paper of mine6 which studies the power flow of light in the immediate neighborhood of a slit in a metal plate:
The arrows indicate the direction of power flow, and the color indicates the amount of power flow. At the points a, b, c and d it can clearly be seen that the power flow has a vortex behavior, and the center is a zero of power flow.
In optics, phase singularities can also be found in correlation functions and even in the description of the polarization of a light wave. Singularities can also be found in other types of waves, such as acoustic waves, quantum mechanical wave functions, and even the periodic oscillations of the tides! An early example of this was a paper by Proudman and Doodson, "The principal constituent of the tides of the North Sea," Phil. Trans. Roy. Soc. A 224 (1924), 185-219, which shows clear vortices in the daily tides of the North Sea! A modern illustration of these tides, showing a single vortex in the upper right corner where the tidal (phase) lines converge, is below (source):
Overall, singular optics gives researchers another way to look at and think about fields, understanding and characterizing them by the points where there is the least amount of light!
**************************************
1J.F. Nye and M.V. Berry, "Dislocations in wave trains," Proc. R. Soc. Lond. A 336 (1974), 165-190.
2 A classic paper demonstrating the creation/annihilation of vortices examined the structure of a focused wavefield as the aperture of the lens was changed; see G. P. Karman, M. W. Beijersbergen, A. van Duijl, and J. P. Woerdman, "Creation and annihilation of phase singularities in a focal field," Opt. Lett. 22 (1997), 1503-1505.
3 In the mid-1800s, before there was any understanding of atomic structure, Lord Kelvin suggested that atoms might in fact be vortices in the light-carrying aether! He was inspired by observations of vortices of smoke rings, and he would experimentally demonstrate their remarkable stability when he talked on the subject. It is another fascinating "what-if" story to wonder what the history of atomic physics might have looked like if antiparticles had been discovered while the "vortex atom" model was still seriously considered.
4 Kosta Ladavac and David Grier, "Microoptomechanical pumps assembled and driven by holographic optical vortex arrays," Opt. Exp. 12 (2004), 1144-1149.
5 G. Gbur and R.K. Tyson, "Vortex beam propagation through atmospheric turbulence and topological charge conservation," J. Opt. Soc. Am. A 25 (2008), 225.
6 H.F. Schouten, T.D. Visser, G. Gbur, D. Lenstra and H. Blok, "Creation and annihilation of phase singularities near a sub-wavelength slit", Optics Express 11 (2003), 371.
9 responses so far
• Thony C. says:
Fascinating post Dr Skyskull, thank you.
• skullsinthestars says:
Thanks! (I was starting to wonder if anyone had read the post besides the large body of spammers that seem to have recently besieged my blog.)
• agm says:
I'd say at least two people read this, besides yourself.
• skullsinthestars says:
Yay! I'll call it a well-read post.
• Thony C. says:
Might it be that the spam bots were triggered by the expression "chasing tail"?
• skullsinthestars says:
Shhh!!! Are you trying to bring them back??!!1!
• Blogs says:
The Weekly Smörgåsbord #10...
The dearth of history of science post continues. So this week’s Smöråsbord is a bit short. Nonetheless, here are few interesting posts from the past week: Singular Optics: Light Chasing its Own Tail — If you have any interest in o...
• [...] Stars: Mythbusters were scooped — by 130 years! (Archimedes death ray) Skulls in the Stars: Singular Optics: Light chasing its own tail Skulls in the Stars: Shocking: Michael Faraday does biology! (1839) Skulls in the Stars: [...]
• hope says:
Very good post.
Sometimes I think that our universe (just 3-D) is just one portion of energy.
On one side, the energy is shrunk into a small hole, and then diffracted from that hole.
So the focus of the diffraction is the so-called center of the universe, and the sub-foci are different 'suns'. The least-energy points are the so-called dark holes.
So where there is a focus, there is a dark hole.
Both of them exist in a balance.
On the other side, all the energy goes back together into the small hole.
Then diffraction.
Very much like light chasing its own tail.
^-^
Thank you again.
• Scientopia Blogs
http://math.stackexchange.com/questions/170009/proving-fracabb2b1aba5b1-fracbcc2c1bcb5c1-f
# Proving $\frac{(ab+b)(2b+1)}{(ab+a)(5b+1)}+\frac{(bc+c)(2c+1)}{(bc+b)(5c+1)}+\frac{(ca+a)(2a+1)}{(ca+c)(5a+1)}\ge\frac{3}{2}$
Let $a,b,c>0$ be real numbers such that $$\frac{1}{a^2}+\frac{1}{b^2}+\frac{1}{c^2}\ge3$$
How to prove that :
$$\frac{(ab+b)(2b+1)}{(ab+a)(5b+1)}+\frac{(bc+c)(2c+1)}{(bc+b)(5c+1)}+\frac{(ca+a)(2a+1)}{(ca+c)(5a+1)}\ge\frac{3}{2}$$
-
I think there is some problem with first and second term. Match them with the pattern of third term. – Avatar Jul 12 '12 at 17:11
I don't see any problem. – Aneesh Karthik C Jul 12 '12 at 17:14
The title didn't match the statement in the body. I'll edit. – Robert Israel Jul 12 '12 at 17:14
now it's fine :) – Avatar Jul 12 '12 at 17:19
– Unoqualunque Sep 2 '12 at 11:06
## 2 Answers
Let $a=\frac{1}{x} ,b=\frac{1}{y} ,c=\frac{1}{z}$
$\frac{(x+1)(y+2)}{(y+1)(y+5)} + \frac{(y+1)(z+2)}{(z+1)(z+5)}+\frac{(z+1)(x+2)}{(x+1)(x+5)} \geq \frac{3}{2}$
note that $\frac{y+2}{(y+1)(y+5)}\ge \frac{3}{4(y+2)}$
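(For completeness, since this step is stated without proof: both sides are positive for $y\ge 0$, and clearing denominators the claim is equivalent to $$4(y+2)^2\ge 3(y+1)(y+5)\iff y^2-2y+1=(y-1)^2\ge 0,$$ which always holds. Applying it to each term bounds the left-hand side below by $\frac{3}{4}\left(\frac{x+1}{y+2}+\frac{y+1}{z+2}+\frac{z+1}{x+2}\right)$.)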
then it is enough to prove that $\frac{x+1}{y+2} + \frac{y+1}{z+2} + \frac{z+1}{x+2} \geq 2$
by Cauchy-Schwarz
$\frac{x+1}{y+2} + \frac{y+1}{z+2} + \frac{z+1}{x+2} =$
$\frac{(x+1)^2}{ (x+1)(y+2) }+\frac{(y+1)^2}{(y+1)(z+2)}+\frac{(z+1)^2}{(z+1)(x+2)}\geq \frac{((x+1)+(y+1)+(z+1))^2}{(x+1)(y+2) +(y+1)(z+2) +(z+1)(x+2) } \geq 2$
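To spell out the last inequality (this is where the hypothesis enters): after the substitution the condition $\frac{1}{a^2}+\frac{1}{b^2}+\frac{1}{c^2}\ge 3$ reads $x^2+y^2+z^2\ge 3$, and a direct expansion gives

$$\big((x+1)+(y+1)+(z+1)\big)^2-2\big[(x+1)(y+2)+(y+1)(z+2)+(z+1)(x+2)\big]=x^2+y^2+z^2-3\ge 0.$$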
-
+1, elegant and correct. – Qmechanic Jul 20 '12 at 10:11
I) Let us define for $x\geq 0$ the rational function
$$\tag{1} f(x)~:=~ \frac{(x+1)(x+5)}{x+2}~=~ x+4 - \frac{3}{x+2}~>~0, \qquad x\geq 0.$$
It is not hard to see that $f$ is positive, monotonically increasing
$$\tag{2} f'(x)~=~ 1 + \frac{3}{(x+2)^2}~>~0, \qquad x\geq 0,$$
and concave for $x\geq 0$. The tangent at $x=1$ is above the concave function,
$$\tag{3} f(x) ~\leq~ f(1) +(x-1) f'(1)~=~\frac{4}{3}(x+2), \qquad x\geq 0.$$
II) By going to reciprocal variables OP's inequality (v7) can be written as
$$\tag{4} \forall x,y,z\geq 0:\quad x^2 +y^2 +z^2 \geq 3 \quad\Rightarrow\quad \frac{x+1}{f(y)} + \frac{y+1}{f(z)} + \frac{z+1}{f(x)} ~\stackrel{?}{\geq}~ \frac{3}{2}.$$
III) If we could prove that
$$\tag{5} \forall x,y,z\geq 0:\quad x^2 +y^2 +z^2 \geq 3 \quad\Rightarrow\quad \frac{x+1}{y+2} + \frac{y+1}{z+2} + \frac{z+1}{x+2} ~\stackrel{?}{\geq}~ 2,$$
then eq. (4) would follow from eqs. (3) and (5).
IV) Equation (5) can be rewritten as
$$\tag{6} \forall x,y,z\geq 0:\quad x^2 +y^2 +z^2 \geq 3 \quad\Rightarrow \quad g(x,y,z)~\stackrel{?}{\geq}~ 0,$$
where
$$g(x,y,z) ~:=~-2(x+2)(y+2)(z+2)+ \sum_{{\rm cycl.}~ x,y,z}(x+1)(x+2)(y+2)$$ $$~=~ -2xyz -4 +\sum_{{\rm cycl.}~ x,y,z}\left(x^2y+2x^2-xy\right)$$ $$\tag{7} ~=~ 2\underbrace{\left(-xyz+\frac{1}{3}\sum_{{\rm cycl.}~ x,y,z}x^2y\right)}_{\geq 0 ~\text{because of (AG)}.} +\underbrace{\left(-3+\sum_{{\rm cycl.}~ x,y,z}x^2\right)}_{\geq 0}+h(x,y,z),$$
where
$$\tag{8} h(x,y,z) ~:=~ -1+\sum_{{\rm cycl.}~ x,y,z}\left(\frac{1}{3}x^2y+x^2-xy\right).$$
In equation (7) we have used the inequality of arithmetic and geometric means (AG).
V) If we could prove that
$$\tag{9} \forall x,y,z\geq 0:\quad x^2 +y^2 +z^2 \geq 3 \quad\Rightarrow \quad h(x,y,z)~\stackrel{?}{\geq}~ 0,$$
then eq. (6) would follow from eqs. (7) and (9).
VI) The function $h$ has no internal stationary points because the radial derivative is strictly positive:
$$r \frac{\partial h(x,y,z)}{\partial r} ~=~x \frac{\partial h(x,y,z)}{\partial x}+y \frac{\partial h(x,y,z)}{\partial y}+z \frac{\partial h(x,y,z)}{\partial z}$$ $$\tag{10}~=~\sum_{{\rm cycl.}~ x,y,z}\left(x^2y+2x^2-2xy\right) ~=~\sum_{{\rm cycl.}~ x,y,z}\left(x^2y+(x-y)^2\right)~>0.$$
Thus we may restrict to the sphere $x^2+y^2+z^2=3$ from now on. In other words, if we could prove that $$\tag{11} \forall x,y,z\geq 0:\quad x^2 +y^2 +z^2 = 3 \quad\Rightarrow \quad h(x,y,z)~\stackrel{?}{\geq}~ 0,$$
then eq. (9) would follow.
VII) Let us rewrite
$$\tag{12} h(x,y,z) ~=~ 2 + \underbrace{\left(-3+\sum_{{\rm cycl.}~ x,y,z}x^2\right)}_{= 0} - j(x,y,z),$$
where
$$\tag{13} j(x,y,z) ~:=~ \sum_{{\rm cycl.}~ x,y,z}k(x)y,$$
and where the concave function $k$ is
$$\tag{14} k(x) ~:=~ x(1-\frac{x}{3})$$
with derivative $k'(x) = 1-\frac{2x}{3}$. The tangent at $x=1$ is above the concave function,
$$\tag{15} k(x) ~\leq~ k(1) +(x-1) k'(1)~=~\frac{x+1}{3}.$$
If we could prove that
$$\tag{16} \forall x,y,z\geq 0:\quad x^2 +y^2 +z^2 = 3 \quad\Rightarrow \quad j(x,y,z)~\stackrel{?}{\leq}~ 2,$$
then eq. (11) would follow.
VIII) Proof of eq. (16):
$$\tag{17} j(x,y,z) ~\leq~\frac{1}{3}\sum_{{\rm cycl.}~ x,y,z}(x+1)y ~\leq~2.$$
The first inequality follows from eqs. (13) and (15). The second inequality follows because
$$\tag{18} \sum_{{\rm cycl.}~ x,y,z}xy ~\leq~ \sum_{{\rm cycl.}~ x,y,z}x^2 ~=~3,$$
and because $(x,y,z)=(1,1,1)$ is the maximum point for the function $(x,y,z)\mapsto x+y+z$ on the sphere $x^2+y^2+z^2=3$.
IX) Working backwards through the chain of arguments, we have proven OP's inequality.
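As a quick numerical spot check of the original statement (a sketch, not part of the proof above; it just samples admissible points at random):

```python
import random

def lhs(a, b, c):
    # left-hand side of OP's inequality
    t1 = (a*b + b) * (2*b + 1) / ((a*b + a) * (5*b + 1))
    t2 = (b*c + c) * (2*c + 1) / ((b*c + b) * (5*c + 1))
    t3 = (c*a + a) * (2*a + 1) / ((c*a + c) * (5*a + 1))
    return t1 + t2 + t3

random.seed(0)
worst = float("inf")
for _ in range(200000):
    x, y, z = (random.uniform(0.01, 10.0) for _ in range(3))
    if x*x + y*y + z*z < 3:   # enforce 1/a^2 + 1/b^2 + 1/c^2 >= 3 via a = 1/x, ...
        continue
    worst = min(worst, lhs(1.0/x, 1.0/y, 1.0/z))
print(worst)   # the sampled minimum stays >= 1.5; equality holds at a = b = c = 1
```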
-
This is not the shortest proof, but at least it is elementary and correct. – Qmechanic Jul 20 '12 at 10:08
http://unapologetic.wordpress.com/2009/04/17/inner-products-and-angles/?like=1&_wpnonce=d19428bcb3
# The Unapologetic Mathematician
## Inner Products and Angles
We again consider a real vector space $V$ with an inner product. We’re going to use the Cauchy-Schwarz inequality to give geometric meaning to this structure.
First of all, we can rewrite the inequality as
$\displaystyle\frac{\langle v,w\rangle^2}{\langle v,v\rangle\langle w,w\rangle}\leq1$
Since the inner product is positive definite, we know that this quantity will be nonnegative. And so we can take its square root to find
$\displaystyle-1\leq\frac{\lvert\langle v,w\rangle\rvert}{\langle v,v\rangle^{1/2}\langle w,w\rangle^{1/2}}\leq1$
This range is exactly that of the cosine function. Let’s consider the cosine restricted to the interval $\left[0,\pi\right]$, where it’s injective. Here we can define an inverse function, the “arccosine”. Using the geometric view on the cosine, the inverse takes a value between $-1$ and ${1}$ and considers the point with that $x$-coordinate on the upper half of the unit circle. The arccosine is then the angle made between the positive $x$-axis and the ray through this point, as a number between ${0}$ and $\pi$.
So let’s take this arccosine function and apply it to the value above. We define the angle $\theta$ between vectors $v$ and $w$ by
$\displaystyle\cos(\theta)=\frac{\lvert\langle v,w\rangle\rvert}{\langle v,v\rangle^{1/2}\langle w,w\rangle^{1/2}}$
Some immediate consequences show that this definition makes sense. First of all, what’s the angle between $v$ and itself? We find
$\displaystyle\cos(\theta)=\frac{\lvert\langle v,v\rangle\rvert}{\langle v,v\rangle^{1/2}\langle v,v\rangle^{1/2}}=1$
and so $\theta=0$. A vector makes no angle with itself. Secondly, what if we take two vectors from an orthonormal basis $\left\{e_i\right\}$? We calculate
$\displaystyle\cos(\theta_{ij})=\frac{\lvert\langle e_i,e_j\rangle\rvert}{\langle e_i,e_i\rangle^{1/2}\langle e_j,e_j\rangle^{1/2}}=\delta_{ij}$
If we pick the same vector twice, we already know we get $\theta_{ii}=0$, but if we pick two different vectors we find that $\cos(\theta_{ij})=0$, and thus $\theta_{ij}=\frac{\pi}{2}$. That is, two different vectors in an orthonormal basis are perpendicular, or “orthogonal”.
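For concreteness, here is the definition spelled out numerically (an illustrative sketch, using the standard dot product on $\mathbb{R}^3$ as the inner product):

```python
import numpy as np

def angle(v, w):
    # cos(theta) = |<v,w>| / (<v,v>^(1/2) <w,w>^(1/2)), exactly as defined above
    c = abs(np.dot(v, w)) / (np.sqrt(np.dot(v, v)) * np.sqrt(np.dot(w, w)))
    return np.arccos(np.clip(c, -1.0, 1.0))   # clip only guards against rounding error

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

print(angle(e1, e1))       # 0.0       -- a vector makes no angle with itself
print(angle(e1, e2))       # 1.5707... -- pi/2: different orthonormal basis vectors are perpendicular
print(angle(e1, e1 + e2))  # 0.7853... -- pi/4
```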
Posted by John Armstrong | Algebra, Geometry, Linear Algebra
## 15 Comments »
1. [...] vector space with an inner product. We used the Cauchy-Schwarz inequality to define a notion of angle between two [...]
Pingback by | April 21, 2009 | Reply
2. [...] now we do get a notion of length, defined by setting as before. What about angle? That will depend directly on the Cauchy-Schwarz inequality, assuming it holds. We’ll check [...]
Pingback by | April 22, 2009 | Reply
3. [...] Now that we have a real or complex inner product, we have notions of length and angle. This lets us define what it means for a collection of vectors to be “orthonormal”: [...]
Pingback by | April 28, 2009 | Reply
4. Dear John,
The law of cosines argument is also nice (especially since you now have the polarization identities): ||u-v||^2 = ||u||^2 + ||v||^2 + 2||u|| ||v|| cos @. Then 2=||u-v||^2 – ||u||^2 – ||v||^2 = 2||u|| ||v|| cos @, and you divide both sides by 2||u|| ||v||.
There is an interesting way to define cosine in terms of an inner product (though it’s not completely germane to this topic). Since the group of plane rotations SO_2(R) is an infinite group with subgroups of arbitrary finite order, it makes a good definition for the group of angles. Usually angles are written additively, so we define the group of angles A to be SO_2(R) written additively. Obviously A and SO_2(R) are isomorphic. If T is the image of an angle @, then we define cos @ := , where x is a fixed unit vector. If x and y are vectors, then there is a rotation T (in the plane spanned by x and y) such that y = Tx, and so /(||x|| ||y||) is the cosine of the angle corresponding to T.
-Brendan
Comment by Brendan Murphy | April 29, 2009 | Reply
5. Sorry, it didn’t like the symbols I used for inner product.
The third line should read: “2(u,v)=||u-v||^2 – ||u||^2 – ||v||^2 = 2||u|| ||v|| cos @”.
The second paragraph should end with: “If T is the image of an angle @, then we define cos @ :=(Tx,x) , where x is a fixed unit vector. If x and y are vectors, then there is a rotation T (in the plane spanned by x and y) such that y = Tx, and so (x,y)/(||x|| ||y||) is the cosine of the angle corresponding to T.”
Comment by Brendan Murphy | April 29, 2009 | Reply
6. Brendan, that’s a good observation about $\mathrm{SO}_2(\mathbb{R})$, but since I haven’t yet defined the group…
Comment by | April 29, 2009 | Reply
7. [...] be orthonormal, we get a real inner product on complex numbers, which in turn gives us lengths and angles. In fact, this notion of length is exactly that which we used to define the absolute value of a [...]
Pingback by | May 26, 2009 | Reply
8. [...] geometrically. We use the inner product to define a notion of (squared-)length and a notion of (the cosine of) angle . So let’s transform the space by and see what happens to our inner product, and thus to [...]
Pingback by | July 27, 2009 | Reply
9. [...] First of all, we’re going to assume our space comes with a positive-definite inner product, but it doesn’t really matter which one. We’re choosing a positive-definite form with signature instead of a form with some negative-definite or even degenerate portion — where we’d get s or s along the diagonal in an orthonormal basis — because we want every direction to behave the same as every other direction. More general signatures will come up when we talk about more general spaces. But we do want to be able to talk in terms of lengths and angles. [...]
Pingback by | September 22, 2009 | Reply
10. [...] Thus we also find that . And we can interpret this inner product in terms of the length of and the angle between and [...]
Pingback by | October 5, 2009 | Reply
11. [...] ) times the sine of the angle between the two vectors. To calculate this angle we again use the inner product to find that its cosine is , and so its sine is . Multiplying these all together we find a height [...]
Pingback by | November 5, 2009 | Reply
12. [...] do we mean by “perpendicular”? It’s not just in terms of the “angle” defined by the inner product. Indeed, in that sense the parallelograms and are perpendicular. No, we want [...]
Pingback by | November 9, 2009 | Reply
13. [...] know a lot about the relation between the inner product and the lengths of vectors and the angle between them. Specifically, we can [...]
Pingback by | January 28, 2010 | Reply
14. [...] inner product gives us a notion of length and angle. Invariance now tells us that these notions are unaffected by the action of . That is, the vectors [...]
Pingback by | September 27, 2010 | Reply
15. [...] us measure things. Specifically, since is an inner product it gives us notions of the length and angle for tangent vectors at . We must be careful here; we do not yet have a way of measuring distances [...]
Pingback by | September 20, 2011 | Reply
http://unapologetic.wordpress.com/2010/11/15/tensors-over-the-group-algebra-are-invariants/?like=1&source=post_flair&_wpnonce=a832ba55c5
# The Unapologetic Mathematician
## Tensors Over the Group Algebra are Invariants
It turns out that we can view the space of tensors over a group algebra as a subspace of invariants of the space of all tensors. That is, if $V_G$ is a right $G$-module and ${}_GW$ is a left $G$-module, then $V\otimes_G W$ is a subspace of $V\otimes W$.
To see this, first we’ll want to turn $V$ into a left $G$-module by defining
$\displaystyle g\cdot v=vg^{-1}$
We can check that this is a left action:
$\displaystyle\begin{aligned}g\cdot(h\cdot v)&=g\cdot(vh^{-1})\\&=vh^{-1}g^{-1}\\&=v(gh)^{-1}\\&=(gh)\cdot v\end{aligned}$
The trick is that moving from a right to a left action reverses the order of composition, and changing from a group element to its inverse reverses the order again.
So now that we have two left actions by $G$, we can take the outer tensor product, which carries an action by $G\times G$. Then we pass to the inner tensor product, acting on each tensorand by the same group element. To be more explicit:
$g\cdot(v\otimes w)=(vg^{-1})\otimes(gw)$
Now, I say that being invariant under this action of $G$ is equivalent to the new relation that holds for tensors over a group algebra. Indeed, if $(vg)\otimes w$ is invariant, then
$\displaystyle(vg)\otimes w=(vgg^{-1})\otimes(gw)=v\otimes(gw)$
Similarly, if we apply this action to a tensor product over the group algebra we find
$\displaystyle g\cdot(v\otimes w)=(vg^{-1})\otimes(gw)=v\otimes(g^{-1}gw)=v\otimes w$
so this action is trivial.
Now, we’ve been playing it sort of fast and loose here. We originally got the space $V\otimes_GW$ by adding new relations to the space $V\otimes W$, and normally adding new relations to an algebraic object gives a quotient object. But when it comes to vector spaces and modules over finite groups, we’ve seen that quotient objects and subobjects are the same thing.
We can get a more explicit description to verify this equivalence by projecting onto the invariants. Given a tensor $v\otimes w\in V\otimes_GW$, we consider it instead as a tensor in $V\otimes W$. Now, this is far from unique, since many equivalent tensors over the group algebra correspond to different tensors in $V\otimes W$. But next we project to the invariant
$\displaystyle\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}(vg^{-1})\otimes(gw)$
Now I say that any two equivalent tensors in $V\otimes_GW$ are sent to the same invariant tensor in $(V\otimes W)^G$. We check the images of $(vg)\otimes w$ and $v\otimes(gw)$:
$\displaystyle\begin{aligned}\frac{1}{\lvert G\rvert}\sum\limits_{h\in G}((vg)h^{-1})\otimes(hw)&=\frac{1}{\lvert G\rvert}\sum\limits_{h\in G}(v(gh^{-1}))\otimes((hg^{-1}g)w)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{k\in G}(vk^{-1})\otimes(k(gw))\end{aligned}$
To invert this process, we just consider an invariant tensor $v\otimes w$ as a tensor in $V\otimes_GW$. The “fast and loose” proof above will suffice to show that this is a well defined map $(V\otimes W)^G\to V\otimes_GW$. To see it’s an inverse, take the forward image and apply the relation we get from moving it back to $V\otimes_GW$:
$\displaystyle\begin{aligned}\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}(vg^{-1})\otimes(gw)&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}v\otimes(g^{-1}gw)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}v\otimes w\\&=v\otimes w\end{aligned}$
And so we’ve established the isomorphism $V\otimes_GW\cong(V\otimes W)^G$, as desired.
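For a concrete sanity check (an illustrative computation, not part of the argument above): take $G=\mathbb{Z}/2$ and let $V$ and $W$ both be the regular representation, so that $V\otimes_GW$ is $2$-dimensional. The averaging projector onto $(V\otimes W)^G$ should then have rank $2$:

```python
import numpy as np

# G = Z/2 = {e, s}; in the regular representation s acts by the swap matrix P.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
I = np.eye(2)

# s acts on v (x) w by (v s^{-1}) (x) (s w); since s is its own inverse and P is
# symmetric, this is just P (x) P on the 4-dimensional space V (x) W.
action = [np.kron(I, I), np.kron(P, P)]

# Project onto the invariants by averaging over the group, then count dimensions.
proj = sum(action) / len(action)
print(np.linalg.matrix_rank(proj))   # 2, matching dim(V tensor_G W) = dim k[G] = 2
```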
## 1 Comment »
1. [...] algebra and take a more solid pass at calculating its dimension. Key to this approach will be the isomorphism [...]
Pingback by | November 16, 2010 | Reply
http://mathoverflow.net/questions/115430?sort=oldest
## What is the geometry of an undecidable diophantine equation?
As an arithmetic algebraic geometer of the highest moral fiber, I am trained to look at Diophantine equations in terms of the geometry of the corresponding scheme. For instance, if the Diophantine equation comes from a curve, I know that I should compute the genus, and do various things depending on if the genus is $0$, $1$, or $\geq 2$.
But I know very little about what the limits of this geometric approach are. I only know that there exist undecidable Diophantine equations, or families of Diophantine equations. I do not know what their geometry is like!
Do undecidable Diophantine equations, or families of equations, have interesting geometric properties? Can we compute basic geometric invariants like the Hodge diamond, Kodaira dimension, etc.? Are they pathological in every way, or do some of them have properties that might give a naive geometer hope about finding solutions? What would Noam Elkies try to do if he were asked to solve them and did not know they were undecidable, and why would he be stymied?
-
11
Title should be "The geometry of undecidable diophantine equations: WWNED?" – R Hahn Dec 4 at 18:32
3
Maybe we need a Noam Elkies tag as well? :-) – Todd Trimble Dec 4 at 19:27
2
If I go add ask-noam to 50 questions where it would be appropriate, do I get the taxonomist badge? – Will Sawin Dec 4 at 22:39
2
I never checked this myself, but Mazur told a number of us many years ago that he had looked at Matiyasevich's equations, and found that they all had plenty of rational solutions. Someone should ask him about this. Alternatively, you could look at the equations yourself. This doesn't answer your question, but it's a start. – Minhyong Kim Dec 5 at 0:17
1
And Minhyong, while you're here, I remember reading in one of your survey something that I liked, and that seems related to the question: the hope that the question of which algebraic curves over $\mathbb Q$ has a rational solution might be decidable -- even if for general variety, it is probably not (though we don't know it for sure yet). – Joël Dec 5 at 2:13
## 3 Answers
You have a typical recursively enumerable set S of integers, and a set X of lattice points cut out by a multivariate polynomial. We are talking about S being the projection (onto one axis) of X. Given that S can be "pretty bad", can one say anything except that X is "presumably worse"? None of the interest lies in any finite segment of S.
To put it another way, possibly more interesting to geometers, the analogy with constructible sets fails here. There is some hint in the history that Hilbert was misled by elimination theory into thinking that Gödel's incompleteness theorem (or suchlike) couldn't be the case. (I'm not putting this very well, and it is disrespectful in a way to Hilbert.) Anyway algebraic geometers "know" that projection doesn't really innovate that much in the way of geometry, and logicians "know" the precise opposite in terms of logic. So at the very least the intuitions are in tension.
(Maybe the mistake of thinking that "geometry of undecidable diophantine systems" was a kind of impossible object, and therefore all r.e. sets would turn out to be recursive, is a mistake intelligent enough to attribute to Hilbert. As a kind of Fundamental Theorem of Proof Theory.)
-
I'm not interested in the set $X$ of lattice points, but in the whole set of complex points. This should hopefully be a more manageable object, due to the first-order decidability of complex stuff. – Will Sawin Dec 5 at 6:42
Though I don't have a full answer to your question, the following remarks may help.
Let's distinguish between (1) explicit examples of systems of Diophantine equations that are known to be undecidable, and (2) a system of Diophantine equations that has the property of being undecidable, whether we know it or not.
Regarding number (1), Jones has written down some explicit examples. The main practical difficulty with computing any invariants of these examples is their size. The basic example in the paper has 28 variables and degree $5^{60}$, which can be rewritten as a system in 58 variables and degree 2. My guess is that any Groebner basis algorithm would crash on this example, not because of any particular pathology, but because of its size. However, maybe I'm wrong. The example is written out explicitly so you could try playing with it yourself.
Regarding number (2), although I'm not aware of any precise theorems to this effect, I think that the intuition of most logicians and computability theorists is that "most" systems of Diophantine equations are undecidable. (This is related to the intuition that "most" computably enumerable sets are not computable.) If this intuition is correct, then undecidable Diophantine equations aren't "special"; it's just the reverse—the decidable ones are the ones that are special. Then your question is not very different from the question, what is the geometry of a "random" set of Diophantine equations? So it becomes more of a question in computational commutative algebra than a question in computability theory.
-
This reminds me of the tone of Weinberger's book "Computers, Rigidity, and Moduli": books.google.com/books?id=CtvmQiuOKSEC – Steve Huntsman Dec 4 at 20:46
Number of variables and degree is a clumsy way of describing geometry. I know that people in this field like to write systems $f_1=f_2=\cdots=f_N=0$ as a single equation $f_1^2+f_2^2+\cdots + f_N^2=0$. If we undo all such obvious tricks to get back to a system of equations, then what is the dimension of the complex solution space? – David Speyer Dec 4 at 21:30
@David: Jones's system has 28 variables and 18 equations in its original formulation. Of course this doesn't necessarily mean that the dimension is 10. I don't know how to determine the dimension without first computing a Groebner basis. – Timothy Chow Dec 4 at 22:35
I don't think this accurately reflects my question. There are many ways in which an algebraic variety can be special: low dimension, low Kodaira dimension, etc. I would like to know about algebraic varieties which are "general" in that their diophantine problems are undecidable, but "special" in that they have some other nice property. There are two ways of getting at the existence of these things: showing that varieties with a certain set of nice properties have decidable Diophantine problems (I'm aware of some conjectures to this effect), and the reverse: finding varieties with certain – Will Sawin Dec 4 at 22:44
sets of nice properties that do not have decidable Diophantine problems. I am interested in the second approach. I am disheartened by the dimension difficulty. That might entirely kill this question, but hopefully there is something interesting that can still be said. – Will Sawin Dec 4 at 22:46
This is just a long comment, rather than an answer to Will's question. There are "standard" algorithmically unsolvable problems in combinatorial group theory: Word problem, conjugacy problem, triviality problem for the group, isomorphism problem for a pair of groups, etc. The most fruitful line of arguments relating logic to geometry in group theory is to find geometric conditions on finitely-presented group(s) which are sufficient for solvability of these problems. It turned out that, appropriately defined hyperbolicity is a geometric condition implying algorithmic solvability of all the "standard" problems. Furthermore, it turned out that hyperbolicity is, probabilistically speaking, "generic". Moreover, one of the oldest algorithms for solving word problem (Dehn's algorithm/linear isoperimetric inequality) is one of many equivalent definitions of hyperbolicity. In view of this, I would be looking for algebro-geometric features which are sufficient for algorithmic solvability of Diophantine equations, rather than the other way around. However, since I am neither a number-theorist nor a logician, I do not know what they could be.
-
I think you're right that the other way makes more sense, but I think there is a lot of work of this nature, even if it doesn't make explicit references to computation. Methods for solving Diophantine equations are a huge, huge field, and I already know many key ideas. That's why I'm interested in the reverse. – Will Sawin Dec 5 at 4:15
1
@Will: Could you add some (interesting) examples of geometric conditions implying solvability of Diophantine equations to your question? I think, it would be quite valuable (for people like me). One thing I find surprising is Timothy Chow's comment that unsolvability is expected to be generic: This contradicts my intuition coming from group theory. – Misha Dec 5 at 4:55
I don't know much about explicit decidability results. Here is an interesting discussion of a conjectural one: terrytao.wordpress.com/2007/05/04/… – Will Sawin Dec 5 at 6:35
I think the Hasse principle is very strongly related to decidability, but I'm not sure if it always implies decidability. – Will Sawin Dec 5 at 6:38
@Misha: The content of the MRDP theorem is that every computably enumerable set is Diophantine. This is a lot stronger than the "mere" statement that it's undecidable whether a Diophantine system has a solution. Now, it's true that converting a computably enumerable set to a Diophantine set is a somewhat delicate process, so it's not ruled out that there is still some sense in which "generic" Diophantine sets are decidable. I think it would be pretty surprising, though. – Timothy Chow Dec 5 at 16:44
http://math.stackexchange.com/questions/81042/a-congruence-problem
# A congruence problem
Prove that if $\gcd(a,b)=d$ and $d$ divides $f$, then there is an integer $k$ such that $a\cdot k \equiv f\pmod b$.
-
7
you haven't said what k is. – user16697 Nov 11 '11 at 4:49
## 1 Answer
If $\gcd(a,b)=d$, then you can find integers $r$ and $s$ such that $ar+bs=d$. Since $d$ divides $f$, there exists $t$ such that $dt=f$. Therefore, $$f = dt = (ar+bs)t = a(rt) + b(st) \equiv a(rt)\pmod{b}.$$ So setting $k=rt$ shows there exists a $k$ with $ak\equiv f\pmod{b}$.
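For a constructive version (an illustrative script, not part of the argument above), the same Bézout coefficients can be computed with the extended Euclidean algorithm:

```python
def ext_gcd(a, b):
    """Return (d, r, s) with a*r + b*s = d = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    d, r, s = ext_gcd(b, a % b)
    return d, s, r - (a // b) * s

def solve_congruence(a, b, f):
    """Return k with a*k == f (mod b), assuming gcd(a, b) divides f."""
    d, r, _ = ext_gcd(a, b)
    assert f % d == 0, "no solution: gcd(a, b) does not divide f"
    return (r * (f // d)) % b

# example: a = 6, b = 9, f = 15; gcd(6, 9) = 3 divides 15
k = solve_congruence(6, 9, 15)
print(k, (6 * k - 15) % 9)   # 4 0
```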
-
it has to get like at-f==(mod b), from the new variable from f=dt eqn and – Jeffry Nov 11 '11 at 5:30
1
@Jeffry: I have no idea what that means. If you have specific requirements in the problem, then I can't know them unless you tell us. The government doesn't let me read minds without a warrant. – Arturo Magidin Nov 11 '11 at 5:31
1
@Jeffry: "It has to get like"; What has to get like what? What "new variable". I'm sorry, but that sentence just doesn't parse. I don't understand what you are trying to say, or what the problem is. – Arturo Magidin Nov 11 '11 at 5:59
http://www.physicsforums.com/showthread.php?t=283080
Physics Forums
## Minimum Work needed?
1. The problem statement, all variables and given/known data
What is the minimum work needed to push a 1000 kg car 300 m up a 17.5° incline?
Part a. Ignore friction
2. Relevant equations
So we are allowed to use the general equation
W=FdCos $$\theta$$
3. The attempt at a solution
So I thought you would just do:
W= (1000)(9.8)(300)(cos17.5)
W=2803927Joules
However according to my teacher we should be getting
W = 8.8 × 10^5 Joules
Does anyone know what I am doing wrong?
Recognitions: Homework Help Science Advisor Draw a picture. Are you sure cos is the right trig function to be using? The force is aligned with the direction of the car's motion so W=F*d. How does the angle of the slope affect F? Split the force into components.
Oh, so you would have to use sin instead of cos. That makes more sense. I didnt know that you could interchange the trig function in the equation. Thanks so much for your help!
Recognitions: Homework Help Science Advisor
Quote by Littlemin5 Oh, so you would have to use sin instead of cos. That makes more sense. I didnt know that you could interchange the trig function in the equation. Thanks so much for your help!
You can't interchange the trig functions in the equation!! What happens when you are pushing the car up the incline is that the only force you have to overcome is the component of m*g that is tangent to the road. That's F_g=m*g*sin(incline angle). In the W=F*d*cos(theta) the theta is 0, since we are pushing in the same direction the car is moving. Those are two DIFFERENT angles. You aren't just substituting 'sin' for 'cos'. Try to be clear on this.
But I do have one more question if I was told that in the next part there was an effective coefficent of friction of .25 , wuld I just multiple my answer for Part 1 by .25?
Recognitions: Homework Help Science Advisor No. Not at all. Now you have to figure out the normal force before you can compute the frictional force. Can you do that?
Wouldn't you do 9800Cos17.5=9346.26N so your answer there would be the Normal Force?
Recognitions: Homework Help Science Advisor Right. So now get the frictional force. The total force you have to push up the hill is then the tangential force (as in the first problem) PLUS the frictional force.
so frictional force is 2336.565 and then I add that to 9800 which equals 12136.565. From there I would do (12136.565)(300)(sin17.5) Right? And the answer I get would be my answer?
Recognitions: Homework Help Science Advisor No again. F_total=F_tangential+F_friction. Ok, F_friction is 2336N. The gravitational force component you are opposing is m*g*sin(17.5). Now take F_total*d
Wait I don't really understand the last comment you made. Could you please explain it in a bit more detail?
Here is another perspective. The displacement occurs at 17.5 degrees from the horizontal. The force, that is gravitational force, is acting down. We ignore the normal work because it has no component of force in the displacement (cos90=0). So, since mg is acting down, and the displacement is at 17.5 above horiztonal, we use the FDcosO. We know the force, mg; we know the displacement, 300m, and we know the angle between them, 17.5 + 90. So.. it should be 1000kg * 300m * 9.8 * cos (17.5 + 90). I think this is right. This is assuming no friction. Also, the work is negative but magnitude is positive.
Recognitions: Homework Help Science Advisor
Quote by razored Here is another perspective. The displacement occurs at 17.5 degrees from the horizontal. The force, that is gravitational force, is acting down. We ignore the normal work because it has no component of force in the displacement (cos90=0). So, since mg is acting down, and the displacement is at 17.5 above horiztonal, we use the FDcosO. We know the force, mg; we know the displacement, 300m, and we know the angle between them, 17.5 + 90. So.. it should be 1000kg * 300m * 9.8 * cos (17.5 + 90). I think this is right. This is assuming no friction. Also, the work is negative but magnitude is positive.
You could certainly do it that way. But it's pretty usual with inclined plane problems to split the m*g force into tangential and normal components. Going up the plane you only do work against the tangential component of m*g and the friction force. Add them and multiply by the distance.
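For anyone checking the numbers, here is the arithmetic the thread settles on, in a short script (a summary added for convenience, not a post from the thread):

```python
import math

m, g, d = 1000.0, 9.8, 300.0            # kg, m/s^2, m
theta = math.radians(17.5)
mu = 0.25                               # effective coefficient of friction, part (b)

W_no_friction = m * g * d * math.sin(theta)
print(W_no_friction)                    # about 8.8e5 J -- the teacher's answer

N = m * g * math.cos(theta)             # normal force, about 9.35e3 N
F_push = m * g * math.sin(theta) + mu * N
print(F_push * d)                       # about 1.6e6 J with friction included
```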
http://electronics.stackexchange.com/questions/28058/design-considerations-for-powering-microcontrollers-with-a-100-m-cable
# Design considerations for powering microcontrollers with a 100 m cable
I'm currently doing some research for a project where I'd have a number of microcontrollers (say 5 ATtinys or, if I manage to bring power consumption down, maybe even ATmegas) which I want to power from a single category 5e cable. I don't think that I'd need more than 100 meters of cable length (and as little as 50 m may be enough), but I'd like to plan for the worst case. I don't think that maximum power consumption is going to be more than 100 mA for all the devices on the cable. The idea is that each microcontroller will check an optotransistor for the light level and, based on that, turn a low-power LED on or off.
I picked the category 5e cable because it's relatively small (and the diameter of the cable is a factor here, since I'm trying to fit in with already existing infrastructure), has 4 pairs of wires (which gives me some flexibility) and is commonly available.
So here are some of my thoughts on this topic:
According to Wikipedia, I calculated that I'd have a resistance of $8.422 \mbox{ } \Omega$ one way on the cable. So the total loop resistance of one pair would be $16.844 \mbox{ } \Omega$. This would give me a voltage drop of about 1.7 V at the end of the cable, assuming a 100 mA load. So I plan to use a 5.5 V input voltage, which gives me about 3.8 V at the end and is within the operational range of the microcontroller family I'm planning to use. Now, since I'm using cat5e cable, I could use two pairs of wires for lower resistance and get the voltage drop as low as 0.84 V. This would leave me with two free pairs.
I'm also considering adding "tank" capacitors in the $1000 \mbox{ } \mu F$ range near each microcontroller to provide some localized power sources in addition to standard decoupling capacitors I'd use. The size of the capacitors isn't such a problem, so I could make it even larger to somewhat mitigate the voltage drop caused by the cable.
With that, I think I've covered the basics of the power problem. The part which is a bit fuzzy to me is how to deal with the switching noise of the microcontrollers. I believe that the larger capacitors acting as tanks, plus a few more decoupling capacitors, could help here. There's also Atmel's recommendation to put a small coil in series with the Vcc pin before the first decoupling capacitor, which should force the microcontroller to draw more of its current from the capacitors and suppress the noise on the line a little. What I'm not so sure about is whether I should look into more complicated filter designs for the power line. The environment I'll be using this in is basically a residential house, so I don't expect too much outside interference.
A problem with interference could be the two other pairs. Some of the adjacent microcontrollers may need to communicate with each-other, so I plan to reserve the two pairs for that, but I don't expect the data rates to be too high since I'd probably be using 2400 b/s.
So any comments, improvement ideas or suggestions?
-
## 1 Answer
For a more general system I would do this differently. Maybe it's overkill in your case, but this isn't that hard. I've done something similar, although the power requirements were higher.
In my system I used CAN as a multi-drop bus to all the nodes. In the common configuration it uses differential signalling, which is a good idea for long distances where you could pick up common mode noise. I reserved one pair for the CAN lines. The other three pair were used for power, with a power and ground on each pair. That way the total common mode current of each twisted pair will still be close to zero. In my case I used 48 V because that is the maximum where you generally don't have to worry about safety issues much. There are plenty of microcontrollers with CAN built in, and the silicon deals with collisions and retries automatically.
In your case the power requirements are less, so 24 V might be a good choice for the power. Lots of transistors and buck regulator chips work up to 30 V. 28 V would be better if you want to push the most power and still use the cheapest power supplies at the ends, but I said 24 V because you're not pushing the limit and that's a very commonly available off the shelf power supply voltage.
At each node, put a small buck regulator. These are small and cheap nowadays. The MCP16301 can handle up to 30 V in, costs under \$1, includes the switch, and comes in a nice and small SOT-23 package. Make sure to put a decent ceramic cap right at the input to the buck regulator though. The 24 V needs to be low impedance at high frequencies for the buck switcher to work. You probably want something like a 10 µF 30 V cap.
The advantage of this scheme is that you can tolerate a fairly wide power voltage at each node, but due to the higher voltage and therefore lower current, you won't have much drop. There will also be less heat at each node since the buck switcher will be more efficient than a linear regulator after you raise the power voltage enough to cover all the worst case conditions with headroom.
Another significant advantage is that there will be less ground offset between nodes, again due to the lower power current at the higher voltage.
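To put rough numbers on that trade-off (back-of-the-envelope figures assuming the ~8.4 Ω per 100 m conductor from the question and a total load of about 0.5 W; converter losses ignored):

```python
R_loop = 2 * 8.4        # ohms, out and back on one pair over 100 m
P_load = 0.5            # watts, roughly 100 mA at 5 V

# Option 1: distribute 5.5 V directly on one pair
I1 = 0.1                                 # amps
print(I1 * R_loop, 5.5 - I1 * R_loop)    # ~1.7 V drop, ~3.8 V left at the far end

# Option 2: distribute 24 V and buck-convert at each node
I2 = P_load / 24.0                       # ~21 mA
print(I2 * R_loop, 24.0 - I2 * R_loop)   # ~0.35 V drop, ~23.6 V at the far end
```

The higher bus voltage cuts the current, and hence the IR drop and the ground offset, by roughly the ratio of the voltages, which is the point made above.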
-
http://crypto.stackexchange.com/questions/2622/real-life-collision-when-only-using-truncated-hash/2626
# Real life collision when only using truncated hash
For MD5 two different inputs are known that produce the same 128 bit hash value. However, these inputs are artificially created for this specific purpose.
For normal, real life inputs I believe no such collision is known?
When you only consider the first n bits of the hash however certainly such collisions are known for small n.
My question: what is the largest n bits truncation for which an accidental collision is known?
(of course answers need not be limited to MD5 but can also be about others hashes)
-
## 2 Answers
Accidental collisions are interesting for certain applications, and one would expect accidental collisions to occur less frequently in a system than malicious collisions.
So, if you are not worried about malicious collisions, only accidental, it is easy to compute how many digests you would need to compute before seeing an accidental collision. If the output of the hash (either truncated or not) is $n$ bits long, you would expect to see an accidental collision once there are about $2^{n/2}$ digests in your database.
As for what is publicly known for accidental collisions, I haven't come across any data. Probably because accidental collisions are not very interesting. We know exactly how many digests to compute before expecting to see one, so why waste the electricity to experimentally validate what we already know mathematically?
Looking at some numbers for SHA-1 on a GPU, if you can perform $1,746,000,000\approx 2^{30}$ sha-1 operations per second, and we would expect a collision after $2^{80}$ operations, it would take $2^{50}$ seconds (or about $35702051$ years) to see an accidental collision (ignoring future increases in computation power).
On the other hand, the same website lists MD5 at about $5,570,000,000\approx 2^{32}$ MD5 computations per second. A collision would be expected after about $2^{64}$ computations. That equates to $2^{32}$ seconds (or about 136 years).
You can follow the math then to see how long for various truncated versions of both MD5 and SHA1.
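For instance, here is a quick sketch of that math for several truncation lengths, assuming the same ballpark rate of $2^{32}$ hashes per second (the numbers below are just the formula, not measurements):

```python
rate = 2**32                      # hashes per second, rough GPU ballpark from above
year = 365 * 24 * 3600            # seconds per year

for n in (32, 48, 64, 80, 128, 160):
    work = 2**(n // 2)            # digests expected before an accidental collision
    print(n, work, work / rate / year)   # bits, work, years at this rate
```

This reproduces the figures above: a full 128-bit MD5 needs about $2^{64}$ digests (~136 years at this rate), while truncating to 64 bits brings the expected accidental collision down to about $2^{32}$ digests, i.e. roughly one second of computation.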
-
Don't neglect the memory requirements for the birthday paradox though, you would need an insane amount of storage to store all that ($2^{64} \times 128$ is a lot of bits). – Thomas May 17 '12 at 8:06
Yes, there have been real life inputs with collisions working on the full length of MD5. There is a pair of X.509 certificates that share an MD5 hash. There is also a pair of PostScript documents that are an MD5 collision. There are also two binary strings a mere 6 bits different that are a collision. The whole length of MD5 has been broken.
Attacks that use what you call "artificial" input are valid real life attacks. A key here is that the data we hash often has structure that allows for bogus information to be modified or added without changing the final "human consumed" data in the file. For example, you can take a PNG image file and append data to it without affecting the image data. This allows an attacker to manipulate that extra information intelligently and find an ad hoc value that produces a collision. He doesn't need to accidentally trip over one; he can specifically craft it.
This attack naturally lends itself to hashes that use the Merkle-Damgård construction, which the currently popular ones (MD5 and the entire SHA family) do. The MD construction basically works by maintaining an internal state and using the hash's compression function to combine the input with the state block by block. To attack it, first hash the original data, then take the internal state after the last input block and use it as the initial state for creating a block that is a collision. If you can generate an ad-hoc collision, you may be able to extend the way you generate that collision to work with the initial state that you got from hashing the original document. (This also works for creating header data with collisions, then extending it by appending identical documents. The collision would be maintained, but the documents would have different headers.)
The bottom line is that we don't need collisions to be what you call "accidental". Specifically crafted, ad-hoc collisions are valid security breaches and have the potential to be extended to collisions on real life data. This is not always the case, but it is not uncommon.
-
http://www.physicsforums.com/showthread.php?t=381056 | Physics Forums
## Lagrangians and conserved quantities
Hi,
I have a relatively straightforward question. If we have a Lagrangian that depends only on time and the position coordinate (and its derivative), how can I decide whether angular momentum is conserved?
That is, if the Lagrangian specifically does not have theta or phi dependence, does that mean that angular momentum is always conserved?
I think this is a really good question! I hadn't thought about it before now. It would be a little unusual to be interested in a situation with angular symmetry and not use $\theta$ or $\phi$ and their derivatives as your generalized coordinates, but I guess it's possible. Here is my best stab at it, and I'm fairly sure of the strength of this statement: any time there is a conservation law, there is a corresponding symmetry in one of the coordinates (the three spatial coordinates or time). Conservation laws are geometrically based, so look at your system, and if there is a symmetry in one of the coordinates, then something is conserved. I hope this helps...
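To see this concretely for a standard textbook example (a free particle on a sphere, chosen here only for illustration), a short SymPy check shows that when the Lagrangian has no explicit $\phi$ dependence, the momentum conjugate to $\phi$ (the angular momentum about that axis) is conserved by the Euler-Lagrange equations:

```python
import sympy as sp

m, l = sp.symbols('m l', positive=True)
theta, theta_dot, phi, phi_dot = sp.symbols('theta theta_dot phi phi_dot')

# Free particle on a sphere of radius l: L depends on theta and the velocities,
# but has no explicit phi dependence, so phi is a cyclic coordinate.
L = sp.Rational(1, 2) * m * l**2 * (theta_dot**2 + sp.sin(theta)**2 * phi_dot**2)

p_phi = sp.diff(L, phi_dot)     # momentum conjugate to phi (angular momentum about the z-axis)
dL_dphi = sp.diff(L, phi)       # 0, because L does not contain phi

print(p_phi)                    # m*l**2*phi_dot*sin(theta)**2
print(dL_dphi)                  # 0  -> Euler-Lagrange: d(p_phi)/dt = 0, so p_phi is conserved
```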
http://mathhelpforum.com/discrete-math/149199-prove-equivalence-relation.html | # Thread:
1. ## Prove equivalence relation
Please critique the following proof and also let me know if I have defined the equivalence classes correctly.
Let $f:A\longrightarrow B$. Define a relation $\equiv$ on A by $a_{1}\equiv a_{2}$ iff $f(a_{1})=f(a_{2})$. Give a quick proof that this is an equivalence relation. What are the equivalence classes? Explain intuitively.
Proof: Test reflexive for $\equiv$. If $a\in A$ then $(a,a)\in A$: Given f is a function then $f(a_{1})=f(a_{1})$ implies $a_{1}\equiv a_{1}$ because two different values in the domain cannot be mapped to the same value in the range. Therefore $(a_{1},a_{1})\in A$.
Test symmetric: Let $a_{1}\equiv a_{2}$ where $(a_{1},a_{2})\in A$. Then we know $f(a_{1})=f(a_{2})$. But because = is symmetric we know $f(a_{2})=f(a_{1})$ and since $\equiv$ is defined with iff, we know $a_{2}\equiv a_{1}$ which implies $(a_{2},a_{1})\in A$.
Test transitive: Let $a_{1}\equiv a_{2}$ and $a_{2}\equiv a_{3}$ where $(a_{1},a_{2})(a_{2},a_{3})\in A$. Then we know $f(a_{1})=f(a_{2})$ and $f(a_{2})=f(a_{3})$. Because = is transitive we know $f(a_{1})=f(a_{3})$, but because $\equiv$ is defined with iff, we can conclude that $a_{1}\equiv a_{3}$. So $(a_{1},a_{3})\in A$.QED
The equivalence classes in A would be the sets $\{(a_{i},a_{j})\in A$ such that $a_{i}=a_{j})$.
2. I agree with everything except your intuitive characterization of the equivalence classes.
3. Originally Posted by oldguynewstudent
Proof: Test reflexive for $\equiv$. If $a\in A$ then $(a,a)\in A$
I think you mean ' $\equiv$' , not 'A', as the last symbol there? I.e., this is what we want to prove.
Originally Posted by oldguynewstudent
Given f is a function then $f(a_{1})=f(a_{1})$ implies $a_{1}\equiv a_{1}$ because two different values in the domain cannot be mapped to the same value in the range.
No, that is backwards (as well as, though it is irrelevant to the actual proof, you can't assume f is an injection as you have).
All you need to say is that if a=a then f(a)=f(a) so a $\equiv$a.
Your symmetry and transitivity arguments looked okay, as I glanced over them.
Originally Posted by oldguynewstudent
The equivalence classes in A would be the sets $\{(a_{i},a_{j})\in A$ such that $a_{i}=a_{j})$.
Wrong. The equivalence classes are not sets of ordered pairs of members of A. Rather, the equivalence classes are certain subsets of A.
E is an equivalence class (per $\equiv$) iff there exists an a in A such that E = {x | x $\equiv$a}.
I.e., E is an equivalence class (per $\equiv$) iff there exists an a in A such that E = {x | f(x)=f(a)}.
I.e., the equivalence classes (per $\equiv$) are the non-empty (assuming A is nonempty) sets each made up of, for some given a in A, all and only those members of A that map, under the function f, to f(a).
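To make the "fibers of $f$" picture concrete, here is a minimal Python sketch; the set and the map below are made up purely for illustration:

```python
from collections import defaultdict

def equivalence_classes(A, f):
    """Group the elements of A by their image under f: each class is the fiber f^{-1}({b})."""
    classes = defaultdict(set)
    for a in A:
        classes[f(a)].add(a)
    return list(classes.values())

# Example: A = {0,...,9}, f(a) = a mod 3.  The classes are the three residue classes mod 3.
print(equivalence_classes(range(10), lambda a: a % 3))
```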
4. Yes, thank you very very much. I have trouble with proofs and have requested a meeting with my professor. The professor I had for Discrete knew his stuff but didn't know how to impart that knowledge very well. This professor for Combinatorics has been great so far. This is a great help. Thanks again!
5. In my view, very likely you're not at fault, but rather standard curricula are at fault. In my view, it should be standard for math (and science, and social sciences, and even history and other humanities) students to take a course that teaches them how to work in the predicate calculus (both strictly symbolically and intuitively). Then, for math students, before even calculus, about the first half of a set theory course (through the basics, basic axiom of choice and Zorn's lemma, the naturals, and construction of the reals as a complete ordered field; but don't need the second half that gets into more about transfinite cardinalities, etc.).
http://www.impan.pl/cgi-bin/dict?although | ## although
[see also: though]
Although [1] deals mainly with the unit disc, most proofs are so constructed that they apply to more general situations.
Although these proofs run along similar lines, there are subtle adjustments necessary to fit the argument to each new situation.
Although the definition may seem artificial, it is actually very much in the spirit of Darbo's old argument in [5].
Now $f$ is independent of the choice of $\gamma$ (although the integral itself is not).
Thus, although we follow the general pattern of proof of Theorem A, we must also introduce new ideas to deal with the lack of product structure.
Although standard, the notion of a virtual vector bundle is not particularly well known.
http://electronics.stackexchange.com/questions/35293/calculating-an-r-rc-circuit | # Calculating an R-RC circuit
This is not homework. I have this circuit, and I want to calculate V2. I know it is equal to V1 at t=0, and equal to $V1 \cdot \frac{R2}{R1+R2}$ at t=$\infty$, but I don't know how to calculate the charging of the capacitor.
All I find on Google is charging of an RC, without the parallel resistor.
-
## 2 Answers
The answer is in Thévenin, like Alfred and jippie also suggested. Thévenin claims that any 1-port network consisting of voltage sources and resistors can be replaced by a voltage source and a series resistor across that port, and who am I not to believe him?
Let's consider your circuit without the capacitor and assign its connections as the circuit's port.
First we look for $V_{th}$, which we do by leaving the output open-circuit, so that $R_{th}$ can't cause a voltage drop. Then R1 and R2 form a voltage divider with $V_{AB}$ = V1 $\times$ R1/(R1 + R2) = 3 V. (I'm using actual values for voltage and resistors to make it more graphic.) That's $V_{th}$. Fine.
Next we have to find $R_{th}$. You can do that by shorting all voltage sources and measure the resistance between A and B. But let's do it the alternative way: short-circuit A to B, and measure the current through that point. That should be $V_{th}/R_{th}$. Both methods give the same result, and it depends on the kind of circuit which way is best.
So shorting A-B we get I = V1/R2 = 12 V/ 12 Ω = 1 A. (What a coincidence! :-)) Then $R_{th}$ = 3 V/ 1 A = 3 Ω. If we now reconnect our load we have the typical RC circuit where C1 is charged via a series resistor (let's say C1 is 1 F):
$V_C(t) = V_\infty + (V_0 - V_\infty) e^{\dfrac{-t}{RC}}$
$V_\infty$ is $V_{th}$ because after C1 is charged there won't be a voltage drop across $R_{th}$. And $V_0$ is 0, we start with an uncharged capacitor. Then
$V_C(t) = 3 V + (0 V - 3 V) e^{\dfrac{-t}{3 s}} =3 V (1 - e^{\dfrac{-t}{3 s}})$
And that's the well-known charging equation.
The blue curve is the voltage between A and B, the purple curve is the voltage at B with respect to ground.
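If you want a numerical cross-check of that formula (using the same assumed values, $V_{th}$ = 3 V, $R_{th}$ = 3 Ω, C = 1 F), a rough sketch is:

```python
import numpy as np

V_th, R_th, C = 3.0, 3.0, 1.0            # Thevenin voltage [V], resistance [ohm], capacitance [F]
tau = R_th * C

t = np.linspace(0.0, 15.0, 1501)
v_analytic = V_th * (1.0 - np.exp(-t / tau))

# Crude forward-Euler integration of C * dv/dt = (V_th - v) / R_th as a cross-check.
v = np.zeros_like(t)
dt = t[1] - t[0]
for k in range(len(t) - 1):
    v[k + 1] = v[k] + dt * (V_th - v[k]) / (R_th * C)

print(np.max(np.abs(v - v_analytic)))    # prints a small number: the two curves agree
```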
-
Haha! Nicely explained and was really funny. I got a little lost when you have posted both the thevenin circuit and the original circuit without the capacitor, but then I recovered quickly. – abdullah kahraman Jul 9 '12 at 21:40
1
@abdullah - Funny??? Not the exponential, I hope :-) – stevenvh Jul 10 '12 at 4:15
lol. what happened to the exponential ? :) – abdullah kahraman Jul 10 '12 at 6:44
There's always more than one approach to solving a circuit problem but the approach I generally find most useful in this type of problem is to find the Thevenin equivalent resistance $R_{TH}$ "seen" by the capacitor. This will allow you to find the time constant, $\tau = R_{TH}C$.
To find the Thevenin resistance, remove the capacitor and zero the voltage source (replace with wire). Now, find the resistance between the terminals where the capacitor connects; that resistance is $R_{TH}$
If you've already found the voltage across the capacitor at t = 0 and t = $\infty$, just "connect them together" with the exponential function:
$v_C(t) = [v_C(\infty) - v_C(0)](1 - e^{-t/\tau}) + v_C(0)$
For $v_C(0) = 0$, this simplifies to:
$v_C(t) = v_C(\infty)(1 - e^{-t/\tau})$
Now that you have $v_C(t)$, you have $v_2(t) = V_1 - v_C(t)$
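For convenience, the same formula as a tiny helper function, written in the algebraically equivalent form $v_\infty + (v_0 - v_\infty)e^{-t/\tau}$ (values in the example call are just the ones used above):

```python
import math

def v_c(t, v_inf, v_0, tau):
    """First-order response: start at v_0, settle exponentially to v_inf with time constant tau."""
    return v_inf + (v_0 - v_inf) * math.exp(-t / tau)

# Example: v_c(3.0, v_inf=3.0, v_0=0.0, tau=3.0) ~ 3*(1 - 1/e) ~ 1.9 V
```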
-
http://math.stackexchange.com/questions/tagged/trees?sort=unanswered | # Tagged Questions
A tree is a graph that is connected but contains no cycles.
2answers
50 views
### How many vertices of degree 1 in a tree?
How many vertices of degree 1 are there in a tree with no vertices of degree more than 4? The only thing that I have right now is that the number of edges in a tree is n-1 where n is the number of ...
1answer
132 views
### Height of a full binary tree
A full binary tree seems to be a binary tree in which every node is either a leaf or has 2 children. I have been trying to prove that its height is O(logn) unsuccessfully. Here is my work so far: I ...
1answer
85 views
### Is the graceful labeling conjecture still unsolved?
From the Wikipedia article on graceful labeling: ... A major unproven conjecture in graph theory is the Ringel–Kotzig conjecture, named after Gerhard Ringel and Anton Kotzig, which hypothesizes that ...
1answer
42 views
### Number of upper sets of size $n$ in a finite tree
Consider a finite tree $T = (V, <)$, where $y < x$ means that $y$ is the parent of $x$. We assume that $T$ has a unique root $r$ that has no parent. An upper set of $T$ is a subset $S$ of $V$ ...
1answer
140 views
### No of labeled trees with n nodes such that certain pairs of labels are not adjacent.
Moderator Note: This is a current contest question on codechef.com. What is the number of trees possible with $n$ nodes where the $i$th and $(i+1)$th node are not adjacent to each other for \$i ...
1answer
70 views
### How can I tell how many non-isomorphic unrooted trees with 6 edges exists without drawing them all?
Typically my professor asks that we draw them all, but I would like to save some time to confirm how many I need.
1answer
127 views
### What is the fairest solution/formula for rewarding points in a hierarchical network?
Introduction The nature of this hierarchical network is based on the concept of Multi-Level Marketing strategy. Example 1 - Unfair Situation Ancestor receives 1 point for every descendant ...
1answer
80 views
### Finite Rooted Binary Trees
I am new to learning about finite rooted binary trees. This lemma below is from John Meiers book: Groups, Graphs and Trees. There is no aval proof in the book. I was just wondering is I could catch a ...
1answer
30 views
### Let T be a tree with sub-trees which each set has a vertex in common - hence T has a vertex in all of its sub-trees?
The question is: Let T be a tree with sub-trees $T_1,T_2,..,T_n$ such that every pair $T_i,T_j$ has a vertex in common - show that T has a vertex in all $T_i$. ...
1answer
131 views
### has deleting node in a binary search tree Displacement feature?
I am developing an academic project about graph and tree theory.I searched a lot but I didn't find a clear answer. In a part of project we want to delete some nodes from tree for example we want to ...
1answer
59 views
### Looking to generalize a binomial tree with some constraints.
I've got a set of sample data and I'm looking to see if it's possible to generalize a binomial formula to give a closed form solution to this. If not, would it be possible to write a program to do ...
1answer
122 views
### How many arguments are there in a Merkle tree?
I want to calculate the amount of elements in a Merkle tree given the number of leaf elements. The number of elements at a given level n is equal to number of elements at a level n+1, divided by two ...
0answers
215 views
### Certain permutations of the set of all Pythagorean triples
The fact that the set of all primitive Pythagorean triples naturally has the structure of a ternary rooted tree may have first been published in 1970: http://www.jstor.org/stable/3613860 I learned ...
0answers
53 views
### Free medial magmas
A medial magma is a set $M$ with a binary relation $*$ satisfying $(a*b)*(c*d) = (a*c)*(b*d)$. Medial magmas constitute an algebraic category $\mathsf{Med}$, therefore there is a functor \$\mathsf{Set} ...
0answers
87 views
### Identify this combinatorial construction
I am no combinateur, but I stumbled across the following construction when studying an operad arising from information theory (actually it's a special algebra of an A$_\infty$-operad). It looked ...
0answers
82 views
### Number of spanning arborescences
I am trying to prove the following result from my book: Let $G$ be a directed graph with vertices $x_1,x_2,\cdots x_n$ for which a directed Eulerian circuit exists. A spanning arborescence with root ...
0answers
94 views
### Are almost all rooted trees asymmetric?
It's well known that almost all graphs are asymmetric (have trivial automorphism group) and that almost all free trees are symmetric. By which argument do I see whether almost all rooted trees are ...
0answers
29 views
### Mathematical notation for formulas involving trees
I am working on document that requires me to write such things as "$T_1$ is a descendant of $T_0$", or "$N_1$ is an parent of $N_2$". For now, I've been highjacking set notation for use in formulas, ...
0answers
189 views
### Algorithm for generating homeomorphically irreducible trees of size n
In this video they talk about generating all the homeomorphically irreducible trees of size 10. I was wondering if there is a generating algorithm for generating all the homeomorphically irreducible ...
0answers
40 views
### Presentation of tree decompositions (and related concepts) in terms of continuous maps?
A tree decomposition of a graph $G$ is commonly defined in terms of a tree $T$ with the following structure: Each vertex $t \in V(T)$ is associated to a set $X_t \subseteq V(G)$; The union ...
0answers
37 views
### Groups acting on (regular) trees with finite quotient
Let $T$ be a regular tree, and suppose that $G \leq \mathrm{Aut}(T)$ has finite quotient graph, $T / G$. Is it true (in general) that $G$ will have trivial centralizer in the full automorphism group? ...
0answers
326 views
### Minimum Spanning Tree in a Complete Graph
We generate a complete euclidean graph by taking N random points from a limited (1.0 x 1.0 square) 2D space, connecting them all together (complete graph) and giving the edges weights proportional (or ...
0answers
78 views
### A few questions about a relationship between some integer sequences and infinite recursive trees
In his book Gödel, Escher, Bach Douglas Hofstadter defines the following two integer sequences: Hofstadter G-sequence: $a(n)=n-a(a(n-1))$ Hofstadter H-sequence: $a(n)=n-a(a(a(n-1)))$ He says ...
0answers
75 views
### How matroids can help me locating trees inside a graph?
Background I am working on a project at present involving graph analysis. I basically need to mathematically model trees inside my graph. How can this be done using Matroids? What I am looking for ...
0answers
57 views
### Embedding tree metric isometrically into $\ell_\infty$
I just started (independent) learning on metric embeddings from the Fall 2003 offering of the course at CMU. I have a limited mathematical background and alas, it made me stumble at the first exercise ...
0answers
25 views
### Finding number of homeomorphically irreducible trees of degree N
There is a scene in Goodwill Hunting where professor challenges students with task of finding all homeomorphically irreducible trees of degree 10. This is discussed in many places, such as here and is ...
0answers
36 views
### Red Black Binary Search Trees
Give an example of a Red-Black tree and a value, for which inserting the value, and then immediately deleting it yields a tree that is different from the tree before the insertion.
0answers
26 views
### Enumeration of symbols in grammatical expressions or vertices in tree graphs
I have expressions (type of a function) like e.g. $$f:(A\to B)\to C \to (D\to E)\to F.$$ (Where I understand $A\to B\to C$ as $A\to (B\to C)$, in case that is relevant.) There might be information ...
0answers
41 views
### What is the runing time of this algorithm involving length and depth?
I'm hoping that someone can shed some light on this running time. I have a "tree", for lack of a better description, that has a length $l$ and depth $d$. I want to maximize the tree size, which ...
0answers
47 views
### How can I prove this property of a $d$-ary tree?
I have the following homework (algorithms lecture): Every $d$-ary tree $G=(V,E)$ contains a vertex $v$ such that the size of the subtree with root $v$ is at least $\frac{1}{d+1} \vert V \vert$ and at ...
0answers
15 views
### Is there a fast tree balancing algorithm when addition > deletion
Is there a tree balancing algorithm / tree structure that is faster on addition of nodes than on deletion?
0answers
30 views
### How to formulate a best-search algorithm limited by a count of nodes visited?
The problem I'm doing a search by computer program. Each node takes about 5 minutes of wall time to get a result so I'm looking to carefully choose the nodes to inspect so as to find the best result ...
0answers
107 views
### A tree that does not satisfy: If $v$ and $w$ are vertices in $T$, there is a unique path from $v$ to $w$?
It is a strange question on a book. Give an example of a tree $T$ that does not satisfy the following property: If $v$ and $w$ are vertices in $T$, there is a unique path from $v$ to $w$. I ...
0answers
42 views
### Keeping consistency in subjective ranking
I'm doing some work on a computer program that aids in ranking items which don't have a way to objectively compare to each other. As it is now, it takes each item and pairs it up with each other ...
0answers
144 views
### Finding the number of spanning trees of a given height
I hope I can avoid being confusing, but here goes. I have a graph $(V, E)$, connected, undirected and with no loops. I also have an assignment of integer-valued weight to each edge of the graph. ...
0answers
77 views
### Concerning The 'Price-Collecting Steiner Tree'
I'm a Master student at the University of Leuven, Belgium. I have to make a report of a case concerning the 'Price-Collecting Steiner Tree'. We have our model and our restrictions. We are just looking ...
0answers
40 views
### What is the algorithm to sort 5 elements in 7 binary comparisons?
I'm tasked with finding the algo that sorts 5 elements in 7 binary comparisons. (The 7 is derived from ceilingFunction(log 5!), which our text states is the minimum number of comparisons required for ...
0answers
17 views
### 2-3-4 Tree: how to insert numbers to optimise the layers?
Let's imagine I have number in a list from 1 to 15, then the way to have 2 full rows is to insert them with this order into an empty 2-3-4 tree 2,4,6,8,10,12,14,1,3,5,7,9,11,13,14,15 (you can test ...
0answers
41 views
### Nilpotency of the adjacency matrix of a directed tree network
Say I have a directed network that is organized in a tree, with all connections going downstream (genealogically). By that I mean that there is one root node connected to $c_{00}$ child nodes, and ...
0answers
54 views
### Straight skeleton is a tree
Can anybody give me a hint on how to prove that the straight skeleton of every polygon is a tree. Here is the definition of the straight skeleton (taken from Wikipedia): The straight skeleton of a ...
0answers
66 views
### Spanning Tree question regarding fundamental cycle / cutset
Hello i have a question regarding this graph and i am not sure about the answer. Given the following graph (image in link), where the dotted lines are edges of a spanning tree, find the fundamental ...
0answers
43 views
### Constructing steiner tree
Given 4 nodes with edge values as stated below, is it possible to build a minimum spanning tree using Steiner tree? ...
0answers
71 views
### Bootstrap sampling
The usual way to create bootstraps is by sampling with replacement from the original data set. The resulting resampled bootstraps have the same length (N records/data points) as the original data ...
0answers
82 views
### Bounding the number of nodes in a sub-tree of a Red Black Tree
Let T be a Red Black tree with n nodes. Let v be a child of a child of the root. Find a tight asymptotic lower bound as a function of n to the number of nodes in the sub-tree of v (meaning the root of ...
0answers
49 views
### Upper bound for number of unlabelled trees?
Is there even a remotely good formula for the upper bound for the number of unlabelled trees. I know of Knuth's formula for asymptotic behaviour, but that is not always above the actual value. The ...
0answers
179 views
### Depth-first spanning tree?
I am going to identify tree edges and back edges in an undirected graph. The graph consists of $5$ nodes, the edges between these nodes are as shown below: Suppose starting with $v_1$, after a ...
0answers
44 views
### An MST-like problem with vertex selection
Consider a planar pointset in a rectangle, where every point has a color (an integer label). We need to select one point of every color, so as to minimize the cost of a planar MST of selected points ...
0answers
233 views
### Prove that in every tree, any two paths with maximum length have a node in common.
Prove that in every tree, any two paths with maximum length have a node in common. This is not true if we consider two maximal (i.e. non-extendable) paths. What does this even mean?
0answers
110 views
### Terminology, mapping a tree to a tree
I have stumbled upon a problem, unfortunately I do not know the proper terminology to be used which hinders me in thinking about the problem and explaining the problem. I am not even sure this is the ...
0answers
123 views
### Need help performing a tree method to test for satisfiability
For those who commented on my previous questions, sorry for the lack of information and explanation. Clearly I did not do a good job of explaining myself so I deleted the question and hope this one ...
http://math.stackexchange.com/questions/90232/neumann-series-in-an-incomplete-normed-algebra?answertab=active | # Neumann series in an incomplete normed algebra
Let $\mathcal{A} \equiv (A, \|\cdot\|_A)$ be a unital (associative) normed algebra over the real or complex field, and assume that $\mathcal{A}$ is not complete. Provided $\mathcal{B}_\mathcal{A}$ is the open unit ball of $\mathcal{A}$, define $N$ to be the set of all $a \in \mathcal{B}_\mathcal{A}$ such that the Neumann series $\sum_{n=0}^\infty a^n$ does not converge in $\mathcal{A}$.
Questions. 1) Is $N$ dense in $\mathcal{B}_\mathcal{A}$? 2) And what about $\mathcal{B}_\mathcal{A}\setminus N$?
Edit (11 Dec 2011). Following Davide's comment below, let $\mathcal{C}^0([0,1],\mathbb{R})$ be the usual Banach algebra (over the real field) of all continuous functions $[0,1] \to \mathbb{R}$ endowed with the uniform norm $\|\cdot\|_\infty$. Define $A$ to be the subalgebra of $\mathcal{C}^0([0,1],\mathbb{R})$ of all polynomial functions. For each $\phi \in A$ such that $\|\phi\|_\infty < 1$, the Neumann series $\sum_{n=0}^\infty \phi^n$ converges in $\mathcal{C}^0([0,1],\mathbb{R})$ to $(1 - \phi)^{-1}$, but it does not converge in $\mathcal{A} \equiv (A,\|\cdot\|_\infty)$ unless $\phi$ is constant. Thus, $N$ is dense in $\mathcal{B}_\mathcal{A}$, and indeed in the unit ball of $\mathcal{C}^0([0,1],\mathbb{R})$ (by the Stone-Weierstrass theorem).
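A quick numerical illustration of this example, with the arbitrary choice $\phi(x) = x/2$ (so $\|\phi\|_\infty = 1/2$): the partial sums converge uniformly to $1/(1-\phi)$, which is not a polynomial, so the limit lies outside $A$.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
phi = 0.5 * x                       # a polynomial with sup-norm 1/2 < 1

target = 1.0 / (1.0 - phi)          # (1 - phi)^{-1}, which is not a polynomial
partial = np.zeros_like(x)
term = np.ones_like(x)
for n in range(25):                 # partial sum of phi^0 + phi^1 + ... + phi^24
    partial += term
    term *= phi

print(np.max(np.abs(partial - target)))   # ~6e-8: uniform convergence in C[0,1], limit outside A
```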
-
As far as I can understand, there still exists no tag for normed algebras, and I'm not yet enabled to create a new one by myself. – Salvo Tringali Dec 10 '11 at 18:38
What example(s) of $\mathcal A$ make(s) you think that $N$ can be dense in $\mathcal B_{\mathcal A}$? – Davide Giraudo Dec 10 '11 at 23:10
2
I've retagged the question - in my opinion, it's better to think twice before creating a new tag whether it will be useful. But if you think that the new tag would be useful, feel free to retag the question again, of course. – Martin Sleziak Dec 11 '11 at 14:20
2
@Martin. I don't care much about labels but still think that normed ring/algebras would deserve their own tag. :) – Salvo Tringali Dec 11 '11 at 15:01
2
Speaking as a Banach algebraist, to act as if that area is subsumed by "operator-algebras" and "c-star-algebras" seems quite mistaken in my view. (Just try applying continuous functional calculus to self-adjoint elements in $\ell^1$-group algebras and watch your norm estimates blow up in your face...) – user16299 Jan 6 '12 at 6:52
http://www.physicsforums.com/showthread.php?p=4155741 | Physics Forums
## Maximizing heat absorption through Radiation
Hi All,
I am currently investigating a method of absorbing heat from radiation. This is being done to harness heat from skin.
I was wondering what was the best method to do this?
Is there a particular material that is best suited to do this? I know that having the emissivity value of the body that the radiation is falling on close to 1 is one thing that can be done.
The textbook answer is to use a dull, dark, matte surface. This, though, is a bit superficial (no pun intended) because surfaces which are good absorbers of visible light may not be good absorbers of far infrared (as emitted by skin). I'd guess, though, that you wouldn't go too far wrong with a soot-blackened surface.
Thanks for that. I want to conduct the heat away. Does this mean that I could coat the conductive material [e.g. silver (ideally) or copper (most likely)] with some form of matt black paint?
Yes, I would think so. The heat will flow through the metal only if you keep the other 'end' of the metal cooler than its black face. The black face, if I understand your set up, absorbs heat from skin, some distance away. Don't forget that the temperature of the black face won't reach anything like that of the skin (say 35°C).
Many thanks again. What would you estimate the range of heat absorption from skin radiation to be? i.e. how far away could the metal be from the skin before the heat absorbed from radiation becomes negligible? I know this has a lot of variables, so let's just say that the atmosphere is at 10°C and there is no wind. The heat radiated from the skin is at 35°C and the temperature that I wish the metal to be at is ~25°C.
Can't give you a straight answer on this, but here are one or two points. Seen from a point on the absorbing surface, the skin needs to occupy a decent fraction (say half) of the forward field of view. So if a large patch of skin (the mind boggles!) is available, it needn't be as close as a small patch. A rough calculation suggests that you'd need about 0.1 m2 of skin at a distance of 0.1 m from the black surface (assumed much smaller in area) in order to meet this rule-of-thumb criterion. Maybe the criterion is too stringent. If it's radiative transfer you're interested in, then the black surface needs to be below the skin (which needs to be in a roughly horizontal plane) in order to avoid transfer by convection. What's going to make it very difficult to get the black surface to 25°C is that you're conducting heat away from it via the metal. The rate of heat arrival at the black surface will be too low to sustain anything but a very small temperature gradient through the metal. It is possible to make an estimate of the net heat arriving per second at the black surface, using Stefan's law. What value have you in mind for the area of your absorbing surface?
I'm only looking at the radiative transfer. The absorbing face will be located approximately 10 - 25 mm away from the skin and in front of the face. The area of skin (the face) is quite a bit larger than the absorbing surface and located quite close for my application, so going by what you have said, it seems that radiation can play a role in the transfer of heat. The absorbing face will be about 0.004 m^2.
Let's make the very crude (and optimistic) assumptions that the absorbing surface is 100% absorbing at all wavelengths, and that it is effectively totally surrounded by the radiating skin, which can also be treated as a black body. In that case the net heat, P, arriving by radiation on the absorbing surface (area A) per second is given by $$P=\sigma A (T_S^4 - T_A^4)$$. Putting $\sigma = 5.67 \times 10^{-8} \text{W m}^{-2} \text{K}^{-4},\ T_S = 308 \text{K}, \ T_A = 293 \text{K}, \ A=0.004 \text{m}^2$, gives an absorbed power of 0.37 W. I suspect the real figure will be a lot less, maybe by a factor of 3 or 4.
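For reference, that estimate is easy to reproduce (same assumed temperatures and area as above):

```python
sigma = 5.67e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4
A = 0.004                        # m^2, area of the absorbing face
T_skin, T_abs = 308.0, 293.0     # K, the temperatures used in the post above

P = sigma * A * (T_skin**4 - T_abs**4)
print(P)                         # ~0.37 W, the figure quoted above
```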
This is very helpful, thanks. Just wondering though about the absorbing face: ''Let's make the very crude (and optimistic) assumptions that the absorbing surface is 100% absorbing at all wavelengths.'' How would I find out how absorbing the face is across the different wavelengths, but more specifically in the infrared (as skin emits radiation at those wavelengths)?
Sorry, can't help much here. With Leslie's cube (copper tank filled with hot water), I found that the dull dirty face and - curiously enough - the white-painted face radiated almost as well as the matt-black face. Good emitters are good absorbers (at a given temp and for a given wavelength), so I'd guess that absorption of far infrared from warm surfaces may not be too much of a problem. Beyond that, I can only suggest the internet...
Thanks for your help. You've given me plenty to look into.
http://mathoverflow.net/questions/42430/two-questions-about-finiteness-of-ideal-classes-in-abstract-number-rings/43031 | ## Two questions about finiteness of ideal classes in abstract number rings
Let us say that an abstract number ring is an integral domain $R$ which is not a field, and which has the "finite norms" property: for any nonzero ideal $I$ of $R$, the quotient $R/I$ is finite.
(I have taken to calling such rings abstract number rings and have some vague ambitions of extending the usual algebraic number theory to this class of rings. Note that they include the two basic rings $\mathbb{Z}$ and $\mathbb{F}_p[t]$ and are closed under: localization, passage to an overring -- i.e., a ring intermediate between $R$ and its field of fractions -- completion, and taking integral closure in a finite degree extension of the fraction field. In order to answer my questions affirmatively one would have to know something about abstract number rings which are not obtained from the two basic rings via any of the above processes -- if any!)
Note that such a ring is necessarily Noetherian of dimension one, so it is a Dedekind domain iff it is normal, and in any case its integral closure is a Dedekind abstract number ring.
Question 1: Does there exist an integrally closed abstract number ring with infinite Picard (= ideal class, here) group?
$\ \$
Question 2: Let $R$ be a not-necessarily integrally closed abstract number ring with integral closure $\tilde{R}$. Suppose that the ideal class group of $\tilde{R}$ is finite. Consider the ideal class monoid $\operatorname{ICM}(R)$ of $R$, i.e., the quotient of the monoid of nonzero ideals of $R$ by the submonoid of principal ideals. (Note that the group of units of $\operatorname{ICM}(R)$ is precisely the Picard group, but if $R$ is not integrally closed it will necessarily have non-invertible ideals so that $\operatorname{Pic}(R)$ will not be all of $\operatorname{ICM}(R)$.) Can it be that $\operatorname{ICM}(R)$ is infinite?
-
Pete, is there evidence that there exists an integrally closed example with infinitely many maximal ideals that is not obtained by localization of a ring of $S$-integers of a global field? (In other words, is there reason to believe that this is not "a theory of the empty set", so to speak?) Doesn't seem you can plug into either Artin-Whaples or Iwasawa (due to lack of "enough" valuations), but have you checked if any of the methods in Iwasawa's paper would be useful? – BCnrd Oct 16 2010 at 23:05
@B: Yes, as I have said above, a key issue is whether there are any such "exotic" guys other than the ones I listed above. No, I don't have any evidence either way. I don't see how it would follow from results of Artin and Whaples (but I haven't read their paper directly, only secondary sources such as Artin's book on valuation theory). As for Iwasawa, I don't know which paper you have in mind here: please clarify? – Pete L. Clark Oct 16 2010 at 23:10
P.S.: Not that it makes any difference of course, but I would be perfectly happy if this were a "theory of the empty set": it would then be clear that the class of rings I'm isolating is closely related to classical number theory! – Pete L. Clark Oct 16 2010 at 23:11
Dear Pete: Iwasawa gave an abstract characterization of the pairs $(k, \mathbf{A}_k)$ for a global field $k$ and its adele ring among all pairs $(K,A)$ consisting of an infinite field $K$ embedded discretely and cocompactly in a locally compact Hausdorff topological ring $A$. I never read his paper, but you can surely find it quickly via Google. In the spirit of the TV game show "Jeopardy!", Iwasawa's result seems like it may be the right answer to whatever question inspired you to think about "abstract number rings". :) – BCnrd Oct 17 2010 at 1:01
Iwasawa's paper appeared in Ann. of Math. in 1953. The title is something like "On Rings of Valuation Vectors". – KConrad Oct 17 2010 at 20:03
## 4 Answers
Here is an answer to your "question 0" - an example of an "exotic" number ring. (This should be a comment but it is too long.)
Construct a sequence of number rings $\mathbb{Z}=R_0\subset R_1\subset \dots$ with the following properties:
• there is to be exactly one prime $P_{n,i}$ of $R_n$ lying over the $i$-th rational prime $p_i$, for $1\le i \le n$
• $e(P_{n+1,i}/P_{n,i}) = f(P_{n+1,i}/P_{n,i})=1$
For example, take a quadratic extension of Frac($R_n$) in which all $P_{n,i}$ split; take the integral closure of $R_n$ in it; then invert, for each $1\le i\le n$, one of the two primes over $P_{n,i}$, and all but one of the primes over $p_{n+1}$.
Then the inductive limit $R$ of $(R_n)$ is an abstract number ring (obviously integrally closed) - since for any $x\in R$ we have $x\in R_m$ for some $m$, and the sequence of quotients $R_n/(x)$ is ultimately stationary.
I can't see at once what Pic($R$) is going to be, but because of the all the localisation maybe it ends up being trivial. (Perhaps a Minkowski bound-type argument will show that.)
-
3
In this sort of example the class group will be a torsion group, being the direct limit of the finite groups $Pic(R_n)$. Which raises the question, could you ever get an ideal class of infinite order in an "abstract number ring"? – Tom Goodwillie Oct 17 2010 at 13:30
To answer Question 1: Yes, there do exist integrally closed abstract number rings with infinite class group.
By factorization of ideals, for $R$ to be an abstract number ring it is enough that it is a Dedekind domain with finite residue field $R/\mathfrak{p}$ at each prime $\mathfrak{p}$. Theorem B of the paper mentioned by Hagen Knaf in his answer actually gives what you ask for (R. C. HEITMANN, PID’S WITH SPECIFIED RESIDUE FIELDS, Duke Math. J. Volume 41, Number 3 (1974), 565-582).
Theorem B: Let G be a countable abelian torsion group. Then there is a countable Dedekind domain of characteristic 0 whose class group is G, and whose residue fields are those of the integers (i.e. one copy of $\mathbb{Z}/p\mathbb{Z}$ for each prime $p$).
As such rings have finite residue fields, this gives an integrally closed abstract number ring with class group any countable torsion group you like. We can do much better than this though. After thinking about your question for a bit, I see how we can construct the following, so that all countable abelian groups occur as the class group of such rings.
Let G be a countable abelian group. Then, there is a Dedekind domain $R$ with finite residue fields such that $\mathbb{Z}[X]\subseteq R\subseteq\mathbb{Q}(X)$ and ${\rm Cl}(R)\cong G$.
I see some surprise mentioned in the comments below that it is enough to look at over-rings of $\mathbb{Z}[X]$ to find Dedekind domains with any countable class group. In fact, over-rings of $\mathbb{Z}[X]$ are very general in terms of prime ideal factorization, and can show the following. I'll use ${\rm Id}(R)$ for the group of fractional ideals of $R$ and $R_{\mathfrak{p}}$ for the localization at a prime $\mathfrak{p}$, with $\bar R_{\mathfrak{p}}$ representing its completion (which is a compact discrete valuation ring (DVR) in this case).
Let $R$ be a characteristic zero Dedekind domain with finite residue fields. Then, there is a Dedekind domain $R^\prime$ with $\mathbb{Z}[X]\subseteq R^\prime\subseteq\mathbb{Q}(X)$ and a bijection $\pi\colon {\rm Id}(R)\to{\rm Id}(R^\prime)$ satisfying
1. $\pi(\mathfrak{ab})=\pi(\mathfrak{a})\pi(\mathfrak{b})$.
2. $\pi(\mathfrak{a})$ is prime if and only if $\mathfrak{a}$ is.
3. $\pi(\mathfrak{a})$ is principal if and only if $\mathfrak{a}$ is.
4. If $\mathfrak{p}\subseteq R$ is a nonzero prime then $\bar R_{\mathfrak{p}}\cong\bar R^\prime_{\pi(\mathfrak{p})}$.
In particular, the class groups are isomorphic, ${\rm Cl}(R)\cong{\rm Cl}(R^\prime)$.
The idea is that we can construct Dedekind domains in a field $k$ by first choosing a set $\{v_i\colon i\in I\}$ of discrete valuations on $k$ and, letting $k_v=\{x\in k\colon v(x)\ge0\}$ denote the valuation rings, we can take $R=\bigcap_ik_{v_i}$. Under some reasonably mild conditions, this will be a Dedekind domain with the valuations $v_i$ corresponding precisely to the $\mathfrak{p}$-adic valuations, for prime ideals $\mathfrak{p}$ of $R$. In this way, we can be quite flexible about constructing Dedekind domains with specified prime ideals (and, with a bit of work, specified principal ideals and class group). Constructing discrete valuations $v$ on $k=\mathbb{Q}(X)$ is particularly easy. Given a compact DVR $R$ of characteristic 0 and field of fractions $E$, every extension $\theta\colon k\to E$ gives us a valuation $v(f)=u(f(X))$ where $u$ is the valuation in $E$. To construct such an embedding only requires choosing $x\in E$ which is not algebraic over $\mathbb{Q}$ and, if we want the localization $k_v$ to have completion isomorphic to $R$, then we just need $\mathbb{Q}(x)$ to be dense in $E$. There's plenty of freedom to choose $x\in R$ like this. In fact, there's uncountably many $x$, as they form a co-meagre subset of $R$. So, we have many many valuations on $\mathbb{Q}(X)$ corresponding to any given compact DVR. In this way, we have a lot of flexibility in constructing Dedekind domains in over-rings of $\mathbb{Z}[X]$.
I've written out proofs of these statements. As it is much too long to fit here, I'll link to my write-up: Constructing Dedekind domains with prescribed prime factorizations and class groups. Hopefully there's no major errors. I'll also mention that this is an updated and hopefully rather clearer write-up than my initial link (which were very rough notes skipping over many steps).
I think also that my linked proof can be modified to show that you can simultaneously choose any prescribed unit group of the form $\{\pm1\}\times U$ where $U$ is a countable free abelian group.
-
George, this is very interesting. I will take a careful look at this when I get the chance, and I would certainly be interested to see the full details. – Pete L. Clark Dec 30 2010 at 23:47
@Pete: Also, the following paper constructs Dedekind domains with any finitely generated class group and lying between $\mathbb{Z}[X]$ and $\mathbb{Q}(X)$. I'd like to have a look at that and compare, but don't have access. I wonder what stopped them from also getting non-finitely generated abelian groups? jstor.org/pss/2038634 – George Lowther Dec 31 2010 at 0:01
Actually, this link is free access. It's not really the same. ams.org/journals/proc/1973-040-01/… – George Lowther Dec 31 2010 at 0:14
@Pete: I added more details to the sketch proof. It was getting a bit long to fit here, so I added a link to the pdf instead. – George Lowther Jan 2 2011 at 22:41
Hmm, for some reason I find it surprising that every countable abelian group can be obtained as the class group of an overring of $\mathbb{Z}[X]$: I think this would be the simplest construction of such Dedekind domains. I definitely plan to take a look at the details in the near future. – Pete L. Clark Jan 3 2011 at 0:05
In the article R. C. HEITMANN, PID’S WITH SPECIFIED RESIDUE FIELDS, Duke Math. J. Volume 41, Number 3 (1974), 565-582 the author shows (Thm A):
For every countable set $F$ of countable fields with the property that for every prime $p$ the set $F$ contains only finitely many fields of characteristic $p$, there exists a countable principal ideal domain $R$ of characteristic $0$ such that the set $F$ consists precisely of all residue fields (with respect to maximal ideals) of $R$.
The construction he uses to prove the theorem seems to give domains $R$ such that the extension $K/\mathbb{Q}$ of the field of fractions $K$ of $R$ over the rationals is finitely generated but not necessarily algebraic. (Unfortunately I do not have access to the complete article :c
H
-
Thanks, that's interesting. I'll check it out when I get the chance. – Pete L. Clark Oct 21 2010 at 14:26
1
I think Theorem B of this paper is most relevant to the original question (and I referenced this in my answer). – George Lowther Dec 30 2010 at 23:17
Here is an answer to Question 1, which was communicated to me by my colleague Dino Lorenzini.
In a 1964 paper, Oscar Goldman considers the class of Dedekind abstract number rings $R$ (note that he states the condition as all quotients by nonzero prime ideals to be finite; this easily implies that the quotient by any nonzero ideal is finite) which moreover have finitely generated unit group $R^{\times}$. He proves many interesting results in this short paper: the last one is the existence of a domain $R$ satisfying the above properties and for which $\operatorname{Pic}(R)$ is not even a torsion group.
The method of proof is interesting: first he establishes the following structural criterion for a Dedekind domain to have torsion Picard group: it is equivalent that every overring $S$ -- i.e., ring intermediate between $R$ and its fraction field -- be a localization. (This result is included in my notes on commutative algebra, but I had been following Larsen and McCarty, which gives a much more elaborate proof.) Then he constructs a proper overring which doesn't have any more units, so can't be a localization.
All in all, his paper is highly recommended.
-
Interesting. I'm thinking that $Pic(R)$ can be any countable group (and, maybe even that the unit group is just $\{\pm1\}$) and added a brief sketch in my answer. I wonder if this paper is related to my idea at all? – George Lowther Dec 30 2010 at 23:20
http://mathhelpforum.com/business-math/152543-economic-problem-based-constraint-scarce-product.html | # Thread:
1. ## Economic problem based on constraint of scarce product
Hi, so i'm trying to figure out how to do this question but I'm a bit lost.
I assume the answer to a) is 3P1 + P2 = K
after that, I'm not sure what to do.
Any help is greatly appreciated! Thanks in advance!!
p.s I'm not trying to make anyone do this for me, just a point in the right direction would be nice
Consider a two-product firm with a profit function:
$\pi(q_1,q_2) = 50 + 5q_1 - q_1^2 + 4q_2 - q_2^2 - q_1q_2$
where q1 and q2 are the output levels of products 1 and 2 respectively. The
manufacturing of the two products uses a scarce resource: one unit of product 1 uses 3 units of the resource and one unit of product 2 uses one unit of the resource. The firm has K units of this scarce resource.
a) Write down the firm's constraint that involves the use of this scarce product.
b) Assume the constraint is binding, i.e., that K is sufficiently small that all
of the scarce resource will be used. Solve the constrained maximization
problem of the rm using the substitution method.
c) What is the profit of the firm at the optimal values of q1 and q2?
Hint: the answer will be a function of K.
d) What is the marginal value of the scarce resource to the firm (in terms of
increased profit)?
e) For what value of K would the resource no longer be scarce? , i.e., how high
must K be for the firm to choose not to use all of this resource?
2. a) your answer is more or less right. It should be
$3q_1 + q_2 \leq K$
The inequality is there because the firm does not have to use all of its available resources. I don't understand why you wrote P instead of Q. Typo?
b)
Can you write the maximisation problem?
it looks like: Maximise ((profits)) subject to ((constraint))
You found the constraint in part a
Spoiler:
$\displaystyle \max_{q_1,q_2} ~~ 50 + 5q_1 - q_1^2 + 4q_2 - q_2^2 - q_1q_2 - \lambda \left( 3q_1 + q_2 - K \right)$
solve using the usual methods
(differentiate with respect to q1, q2 and lambda. Set all partial derivatives to zero)
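If you want to check your algebra afterwards, here is a small SymPy sketch of exactly those first-order conditions (the variable names are only illustrative):

```python
import sympy as sp

q1, q2, lam, K = sp.symbols('q1 q2 lam K')

profit = 50 + 5*q1 - q1**2 + 4*q2 - q2**2 - q1*q2
L = profit - lam*(3*q1 + q2 - K)                      # the Lagrangian from the spoiler above

foc = [sp.diff(L, v) for v in (q1, q2, lam)]          # first-order conditions
sol = sp.solve(foc, (q1, q2, lam), dict=True)[0]

print(sol[q1], sol[q2])                               # optimal outputs as functions of K
print(sp.expand(profit.subs(sol)))                    # maximized profit as a function of K (part c)
print(sol[lam])                                       # marginal value of the resource (part d)
print(sp.solve(sp.Eq(sol[lam], 0), K))                # K at which the constraint stops binding (part e)
```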
3. yar we're classmates
i have the same question, could you give some hints on what should be done for part e)?
I'm guessing maybe I'm supposed to sub the constraint into the profit function:
$\pi(q_1, K - 3q_1) = 50 + 5q_1 - q_1^2 + 4(K - 3q_1) - (K - 3q_1)^2 - q_1(K - 3q_1)$
Am I very far off? What would void the constraint for those two quantities? Does making it not scarce mean it's no longer constrained?
In that case.... that would just make the two quantities equal to themselves without the K and q2? I probably don't make sense.. sheesh
4. Part (d) is trying to give you a hint on how to find the answer to part (e). You want the value of K at which the marginal value of the scarce resource is 0.
An easier way to think about is this:
In part (c), you found the profit of the firm assuming it consumes exactly K resources $\pi(K)$
So, find the value of K at which profits no longer increase by using more resources $\frac{d \pi(K)}{dK} = 0$
Because this is the point at which profits fall by using more resources, it follows that any resources available after this point would not be consumed. So this is the point the question is looking for.
5. wooo! I actually did it like that afterwards
thank you!
http://mathoverflow.net/questions/19363?sort=newest | ## Background
Let $X$ and $S$ be simplicial sets, i.e. presheaves on $\Delta$, the so-called topologist's simplex category, which is the category of finite nonempty ordinals with morphisms given by order preserving maps.
How can we derive the structure of the face and degeneracy maps of the join from either of the two equivalent formulas for it below:
The Day Convolution, which extends the monoidal product to the presheaf category:
$$(X\star S)_{n}:=\int^{[c],[c^\prime] \in \Delta_a}X_{c}\times S_{c^\prime}\times Hom_{\Delta_a}([n],[c]\boxplus[c^\prime])$$
Where $\Delta_a$ is the augmented simplex category, and $\boxplus$ denotes the ordinal sum. The augmented simplex category is the category of all finite ordinals (note that this includes the empty ordinal, written $[-1]:=\emptyset$).
The join formula (for $J$ a finite nonempty linearly ordered set):
$$(X\star S)(J)=\coprod_{I\cup I'=J}X(I) \times S(I')$$ where $\forall (i \in I \text{ and } i' \in I'),$ $i < i'$, which implies that $I$ and $I'$ are disjoint.
Then we would like to derive the following formulas for the face maps (and implicitly the degeneracy maps):
The $i$-th face map `$d_i : (S\star T)_n \to (S\star T)_{n-1}$` is defined on $S_n$ and $T_n$ using the $i$-th face map on $S$ and $T$. Given $\sigma \in S_j\text{ and }\tau\in T_k$ , we have:
$$d_i (\sigma, \tau) = (d_i \sigma,\tau)\text{ if } i \leq j, j \neq 0.$$ $$d_i (\sigma, \tau) = (\sigma,d_{i-j-1} \tau) \text{ if } i > j, k \neq 0.$$ $$d_0(\sigma, \tau) = \tau \in T_{n-1} \subseteq (S\star T)_{n-1} \text{ if } j = 0$$ $$d_n(\sigma, \tau) = \sigma \in S_{n-1} \subset (S\star T)_{n-1}\text{ if } k = 0$$
We note that the special cases at the bottom come directly from the inclusion of augmentation in the formula for the join.
Edit: Another note here: I got these formulas from a different source, so the indexing may be off by a factor of -1.
## Question
How can we derive the concrete formulas for the face and degeneracy maps from the definition of the join (I don't want a geometric explanation. There should be a precise algebraic or combinatorial reason why this is the case.)?
Less importantly, how can we show that the two definitions of the join are in fact equivalent?
Edit:
Ideally, an answer would show how to induce one of the maps by a universal property.
Note also that in the second formula, we allow $I$ or $I'$ to be empty, and we extend the definition of a simplicial set to an augmented simplicial set such that $X([-1])=*$, i.e. the set with one element.
A further note about the first formula for the join: $\boxplus$ denotes the ordinal sum. That is, $[n]\boxplus [m]\cong [n+m+1]$. However, it is important to notice that there is no natural isomorphism $[n]\boxplus [m]\to [m]\boxplus [n]$. That is, there is no way to construct this morphism in a way that is natural in both coordinates of the bifunctor. This is important to note, because without it, it's not clear that the ordinal sum is asymmetrical.
-
I am somewhat stymied by the main question. I would not want to give an answer that consists of, e.g., writing down the actual definition of d_i, in terms of being the functor applied to a certain map, and then observing that when [n] is decomposed into two halves there are two possibilities for how d_i might occur. Where are you finding your definition for d_i and can you be more specific about why applying this to the definition of the join is problematic? – Tyler Lawson Mar 26 2010 at 2:46
Let me put it this way, I have looked through the literature extensively and have not been able to find a real proof over the course of five months, since I asked the first of this series of questions. If you can explain it with an informal combinatorial argument rather than an informal geometric one, it might be sufficient. A geometric argument is not sufficient because it does not connect the symbolic representation above with the geometry. – Harry Gindi Mar 26 2010 at 3:30
My problem with the above definitions of those maps is that there's no obvious way to derive them from the definition. The whole point of the join being a monoidal product is that we should be able to derive the structure maps formally from the definition of the underlying product and the structure maps of the two presheaves. – Harry Gindi Mar 26 2010 at 3:37
All right. I don't have time now, but I'll try to write something down tomorrow if you don't have a satisfactory answer. – Tyler Lawson Mar 26 2010 at 4:25
Thanks! After waiting this long, another day won't hurt. Let's just hope I don't get hit by a bus! =) – Harry Gindi Mar 26 2010 at 4:33
## 1 Answer
(A note: I am going to regard simplicial sets as also defined on the empty ordinal as well, with $X(\emptyset) = *$, which is required for the join formula. This is implicit in your first definition and will remove the need for two extra cases for $d_i$ at the end.)
Regarding the "minor" question. The short explanation is that this follows by decomposing the Hom-set according to the preimage of $c$ and $c'$ in $n$, and observing that each decomposition of $n$ provides an initial choice.
In more category-theoretic language, one way to rewrite the convolution is using the "over" category: ```$$
(X \star S)_n = \int^{[n] \to [c] \boxplus [c']} X_c \times S_{c'}
$$``` where now the coend is taken over the comma category $n \downarrow \boxplus$ whose objects are triples $([c],[c'],f)$ of a pair of objects of $\Delta$ and a morphism from $[n]$ to their ordinal sum. We note that this comma category decomposes as a disjoint union of categories: each $([c],[c'],f)$ determines a decomposition $[n] = f^{-1} [c] \cup f^{-1} [c']$ into a disjoint union, and morphisms preserve such a decomposition. Therefore, ```$$
[n] \downarrow \boxplus \simeq \coprod_{[n] = I \cup I'} (I \downarrow \Delta) \times (I' \downarrow \Delta)
$$``` This makes the coend decompose: ```$$
(X \star S)_n = \coprod_{[n] = I \cup I'} \int^{I \to [c], I' \to [c']} X_c \times S_{c'}
$$``` However, the comma category $(I \downarrow \Delta)\times(I' \downarrow \Delta)$ has an initial object: $I \times I'$ itself. Thus, the coend degenerates down to simply being the value: ```$$
(X \star S)_n = \coprod_{[n] = I \cup I'} X_{|I|} \times S_{|I'|}
$$``` This is slightly different notation for the second definition of the join that you gave.
Now, as for the boundary formulas.
The definition of `$d_i$` is as follows. For each $0 \leq i \leq n$, there is a unique map `$d^i:[n-1] \to [n]$` in $\Delta$ whose image is `$[n] \setminus \{i\}$`: $d^i(x) = x$ for $x < i$, and $d^i(x) = x+1$ for $x \geq i$. The induced map `$(X \star S)_n \to (X \star S)_{n-1}$` is the map induced by applying the contravariant functor to $d^i$.
Since `$(X \star S)_n$` is a disjoint union of sets, it suffices to show that the formula is correct on $X(I) \times S(I')$ for all decompositions of $[n]$ into $I \cup I'$, where $|I| = j+1$ and $|I'| = k+1$. There are two possibilities: either $i \in I$ when $0 \leq i \leq j$, or $i \in I'$ when $j < i \leq n$.
In either case, the map $[n-1] \to [n] = I \cup I'$ induces, by taking preimages, a unique ordered decomposition $[n-1] = J \cup J'$ of $[n-1]$. If $i \in I$, then $J$ has size $|I| - 1$ and $J'$ is mapped isomorphically to $I'$ by $d^i$. In this case, the map $d^i$ is isomorphic to the map $d^i \boxplus id$ on $[j-1] \boxplus [k] \to [j] \boxplus [k]$. If $i \in I'$, we have the reverse possibility, with $d^i$ isomorphic to $id \boxplus d^{i-j-1}$ (the upper index necessary because inserting the identity at the beginning adds $j+1$ elements to the ordered set at the beginning).
In the case $0 \leq i \leq j$, the induced map ```$$
d_i: X(I) \times S(I') \to \coprod_{[n-1] = K \cup K'} X(K) \times S(K')
$$``` is therefore the map $X(d^i) \times id: X(I) \times S(I') \to X(J) \times S(J')$, followed by the inclusion into the coproduct. In the case $j < i \leq n$, the map is $id \times S(d^{i-j-1})$ followed by inclusion.
This recovers the formula for $d_i$ that you have written down, up to inserting copies of a point $*$ as in the remark at the beginning.
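For what it is worth, the index bookkeeping behind the case split can be checked mechanically. Here is a small Python sketch (my own illustration, with simplices modelled naively as vertex lists and $d_i$ as deletion of the $i$-th vertex of the concatenated list):
````
def face(vertices, i):
    # delete the i-th vertex
    return vertices[:i] + vertices[i+1:]

def face_of_join(sigma, tau, i):
    # d_i on a pair (sigma, tau), following the case split above
    j = len(sigma) - 1
    if i <= j:
        return face(sigma, i) + tau          # (d_i sigma, tau)
    return sigma + face(tau, i - j - 1)      # (sigma, d_{i-j-1} tau)

sigma, tau = ['a0', 'a1', 'a2'], ['b0', 'b1']    # j = 2, k = 1, so n = 4
for i in range(len(sigma) + len(tau)):
    assert face(sigma + tau, i) == face_of_join(sigma, tau, i)
````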
-
Thank you so much, I really appreciate it. This is an awesome answer. – Harry Gindi Mar 26 2010 at 13:45
http://mathoverflow.net/questions/97579?sort=votes | ## upper bounds on a certain matrix norm
Is there some simple upper bound on $||(B^{-1}+A^{-1})^{-1}||$, where $A,B$ are $n \times n$ symmetric matrices?
-
What norm are you denoting by $||\cdot ||$? – Noah Stein May 21 2012 at 18:22
Of course, for posdef matrices such bounds follow trivially using the operator harmonic, geometric, arithmetic mean inequalities. So I guess the broader class of symmetric or Hermitian matrices in your question is deliberate.... – S. Sra May 21 2012 at 18:41
My conjecture is that for any symmetric norm, your lhs $\le \| |a^{-1}b^{-1}|^{-1/2} \|$, where $|x|$ denotes the operator absolute value. Corresponding to this, one can then derive (i guess) an arithmetic-mean-style upper bound. – S. Sra May 21 2012 at 18:51
(I threw in a factor of $2$ into your lhs to replace it by the more symmetric harmonic mean, while writing out the above inequality) – S. Sra May 21 2012 at 18:51
## 3 Answers
You can use the surprising identity $(A^{-1}+B^{-1})^{-1}=A(A+B)^{-1}B$, and take the norms of the three factors separately.
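A quick numerical sanity check of the identity (a snippet of my own; it assumes $A$, $B$ and $A+B$ are invertible, which holds with probability one for random matrices):
````
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A = A + A.T    # random symmetric matrices
B = rng.standard_normal((n, n)); B = B + B.T

lhs = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))
rhs = A @ np.linalg.inv(A + B) @ B
print(np.allclose(lhs, rhs))                    # True, up to rounding
````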
-
Depending on the norm you are using, this might not give a very tight bound. Suppose $A=B$ and has one eigenvalue $\epsilon$ and the rest $1$. Then the spectral norm of $A$ is $1$, the spectral norm of $(A+B)^{-1}$ is $1/(2\epsilon)$, and the spectral norm of $B$ is $1$, while the actual spectral norm of $(A^{-1}+B^{-1})^{-1}$ is $1/2$. But if $\epsilon \ll 1$, then $1/(2\epsilon) \gg 1/2$. – Will Sawin May 21 2012 at 18:45
I'm afraid this throws me back to the Woodbury formula, which I was coming from, but it's still a nice answer! – Felix Goldberg May 21 2012 at 19:41
To expand on my first comment: suppose $A, B > 0$ are symmetric positive definite matrices. Then it is known that
$$\left(\frac{A^{-1}+B^{-1}}{2}\right)^{-1} \le A\sharp B \le \frac{A+B}{2},$$ where the inequalities are in the Löwner partial order, and $A\sharp B := A^{1/2}(A^{-1/2}BA^{-1/2})^{1/2}A^{1/2}$ denotes the matrix geometric mean.
These operator inequalities are of course, stronger than corresponding norm inequalities (based on unitarily invariant norms).
For the case where you don't have positive matrices, I think the conjecture mentioned in my second comment can be expanded into a proof --- maybe if I get time, I'll try to expand that.
-
Thanks!......... – Felix Goldberg May 21 2012 at 19:41
Probably not unless $A$ and $B$ are positive-definite, since if $B$ is very close to $-A$ then $B^{-1}+A^{-1}$ is very small and so its inverse is very large. In fact, depending on the norm, they probably need to be close only on one shared or almost-shared eigenvector.
For the spectral norm of positive-definite matrices, we have a nice answer. The highest eigenvalue of $(A^{-1}+B^{-1})^{-1}$ is the reciprocal of the lowest eigenvalue of $A^{-1}+B^{-1}$, which one can find by minimizing $x^T(A^{-1}+B^{-1})x$ subject to $x^Tx=1$. But the minimum for $A^{-1}$ is its lowest eigenvalue, $1/||A||$, and the minimum for $B^{-1}$ is similarly $1/||B||$. Thus:
$x^T( A^{-1}+B^{-1}) x= x^T A^{-1} x+ x^T B^{-1} x\geq 1/||A||+1/||B||$
So the spectral norm of the harmonic sum is bounded by the harmonic sum of the spectral norms!
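A numerical sanity check of this bound for random symmetric positive definite matrices (my own sketch):
````
import numpy as np

rng = np.random.default_rng(1)
n = 6
for _ in range(1000):
    X = rng.standard_normal((n, n)); A = X @ X.T + 1e-3 * np.eye(n)   # SPD
    Y = rng.standard_normal((n, n)); B = Y @ Y.T + 1e-3 * np.eye(n)
    lhs = np.linalg.norm(np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B)), 2)
    rhs = 1.0 / (1.0 / np.linalg.norm(A, 2) + 1.0 / np.linalg.norm(B, 2))
    assert lhs <= rhs + 1e-9
````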
-
"So the spectral norm of the harmonic sum is bounded by the harmonic sum of the spectral norms!" Cosmic harmony... :) – Felix Goldberg May 21 2012 at 19:39
http://math.stackexchange.com/questions/278895/how-to-communicate-mathematics-by-computer-and-in-the-internet | # How to communicate mathematics by computer and in the internet?
I would like to know how I can exchange mathematical ideas with symbolic representations by computer, by e-mail, or when asking questions with formulas and mathematical symbols on the internet. For letters we have the keyboard, but mathematics is different because of the complexity of its notation. I have heard about LaTeX but I am not sure if it is the best solution for what I want... I also don't understand very well what macros are, or the difference between TeX and LaTeX.
-
For sure, LaTeX is the best way to do that. I suggest you to learn it. – Sigur Jan 14 at 22:27
And, of course, this is the place for asking questions and sharing mathematical ideas using $\LaTeX$. – Zango Lotino Jan 14 at 22:28
Was this a homework question? In what sort of course? – Asaf Karagila Jan 14 at 22:39
To answer your last question: LaTeX is a language built on the language called TeX. In particular, the LaTeX compiler that I have interprets the code I have written, translates it to TeX, compiles it, and then gives me back the pdf. (I am pretty sure this is how it always works, but I am not 100% sure). You should not attempt to learn TeX before LaTeX; it's unnecessary and much less human-readable. – Eric Stucky Jan 14 at 23:06
– andybenji Jan 15 at 0:42
## 1 Answer
Hint.
1) Stack Exchange
2) Where do I start LaTeX programming?
3) http://en.support.wordpress.com/latex/
4) http://www.mathjax.org/
How to Write Your First Paper by Steven G. Krantz, and A Mathematician's Survival Guide by Peter G. Casazza
5) Funny notes in A Guide to Writing Mathematics.
6) Reference Book: How to Write Mathematics
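As a tiny taste of what LaTeX source looks like (an illustrative example of my own, not taken from the references above):
````
% minimal LaTeX document
\documentclass{article}
\begin{document}
The roots of $ax^2 + bx + c = 0$ are
\[
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\]
\end{document}
````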
-
http://math.stackexchange.com/questions/52365/euler-method-on-differential-equation-problem?answertab=active | # Euler method on differential equation problem
I have this problem to solve. I want to compute the inclination of a plane $\theta(t)$ at every frame of a simulation given the following rule for its angular speed of rotation $\omega(t)$
$$\omega(t) = - \frac{\mathbf{V}\cdot\hat{\mathbf{n}}}{(||\mathbf{P}-\mathbf{P}'||)}\frac{1}{\tan(\theta(t))}$$
given that the starting angle $\theta(t=0)=45°$
The quantity $$\frac{\mathbf{V}\cdot\hat{\mathbf{n}}}{(||\mathbf{P}-\mathbf{P}'||)}$$ is the ratio between the velocity of the observer along his line of sight ($\mathbf{V}\cdot\hat{\mathbf{n}}$) and the distance of the observer $\mathbf{P}$ from a point $\mathbf{P}'$ and is computed externally at every frame.
I'm unable to figure out how to solve this equation in Mathematica, is it possible analytically?
If this is not possible, how can I implement an iterative method that can deal with the imprecision of the computation of the $\frac{1}{\tan(\theta(t))}$ term?
In a first implementation I chose:
````double omegaz= -speedAlongLineOfSight/((cyclopeanEye-projPoint).norm() *tan( toRadians(theta) ));
theta = theta - (omegaz)*(deltaT);
````
but I think this continuously accumulates errors, creating instability in the solution.
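One way to reduce the accumulated error of the plain forward-Euler update, sketched below in Python under the assumption that the ratio $c(t) = \mathbf{V}\cdot\hat{\mathbf{n}}/\|\mathbf{P}-\mathbf{P}'\|$ can be evaluated at intermediate times within a frame (the function names are mine), is a classical Runge–Kutta step:
````
import math

def omega(theta, c):
    # d(theta)/dt = -c / tan(theta), theta in radians
    return -c / math.tan(theta)

def rk4_step(theta, t, dt, c_of_t):
    # one classical RK4 step for theta' = omega(theta, c(t))
    k1 = omega(theta, c_of_t(t))
    k2 = omega(theta + 0.5 * dt * k1, c_of_t(t + 0.5 * dt))
    k3 = omega(theta + 0.5 * dt * k2, c_of_t(t + 0.5 * dt))
    k4 = omega(theta + dt * k3, c_of_t(t + dt))
    return theta + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
````
Since the equation is separable ($\tan\theta\,d\theta = -c\,dt$), over a frame in which $c$ is roughly constant one can even update $\cos\theta$ in closed form, $\cos\theta_{\text{new}} = \cos\theta\, e^{c\,\Delta t}$, which avoids accumulating truncation error altogether.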
-
http://math.stackexchange.com/questions/222000/how-many-different-passwords-of-length-6-can-be-formed?answertab=votes | # How many different passwords of length 6 can be formed…
Each user on a computer system has a password, which is six characters long, where each character is an uppercase letter or digit. Each password must contain at least one digit. How many possible passwords are there?
Why am I incorrect in reasoning that the answer is $10*36^5$? If every password must contain a digit, then there are only $10$ ways to choose one character in the password and $36^5$ ways to choose the other 5 characters.
-
## 2 Answers
You’re not taking into account that the digit can occur in any of the six positions. There are $10\cdot36^5$ passwords that begin with a digit. There are also $10\cdot36^5$ passwords that end with a digit; some of these also begin with a digit and have already been counted, but some do not, so your figure of $10\cdot36^5$ is necessarily too small.
The easiest way to count the acceptable passwords is to note that there are $36^6$ six-character strings made up of upper-case letters and digits, and $26^6$ of them are made up entirely of letters, so there are $36^6-26^6$ that include at least one digit.
It is possible to count them directly, but the counting is more complicated. For each of the $6$ positions in the password there are $10\cdot36^5$ passwords having a digit in that position, so to a first approximation there are $6\cdot10\cdot36^5$ acceptable passwords. However, as noted in the first paragraph, this counts some passwords more than once. For each pair of positions in the password there are $10^2\cdot36^4$ passwords having digits in both of those positions, and all of these passwords have been counted twice. Since there are $\binom62$ pairs of positions, we must subtract $\binom62\cdot10^2\cdot36^4$ to get rid of the double-counting. Unfortunately, this overcompensates, and there are further corrections to be made. The net result, given by the inclusion-exclusion principle, is
$$\sum_{k=1}^6(-1)^{k+1}\binom6k10^k36^{6-k}\;.$$
Either way, the result is $1,867,866,560$.
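Both counts are easy to confirm by machine; a couple of lines of Python (my own check):
````
from math import comb

direct = 36**6 - 26**6
incl_excl = sum((-1)**(k + 1) * comb(6, k) * 10**k * 36**(6 - k) for k in range(1, 7))
print(direct, incl_excl)   # both print 1867866560
````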
-
Why does $36^6-26^6$ represent ones that include at least one digit? I'm sorry, it is just confusing to me. – user1038665 Oct 27 '12 at 10:16
@user1038665: There are $36^6$ altogether. There are $26^6$ that have no digit. $36^6-26^6$ is what’s left after you throw away the ones that have no digit, and those are precisely the ones that have at least one digit. – Brian M. Scott Oct 27 '12 at 10:21
How many passwords are there, ignoring the "at least one digit" requirement? How many passwords are there with no digits all? Now subtract.
Your answer is incorrect because not only are there 10 ways to choose the digit, but there are 6 places where this digit can go. If you multiplied your answer by 6, it would now be double counting. Example:
1. Choose the digit 9 to go into position 1, and choose 11119 for the rest of the string.
2. Choose the digit 9 to go into position 6, and choose 91111 for the first 5 characters.
Answers: $36^6$, $26^6$, $36^6-26^6$.
-
http://nrich.maths.org/224/note?nomenu=1 | ## Fair Exchange
In your bank, you have three types of coins. The number of spots shows how much they are worth.
(Coin images: the three coins in the bank are worth 1, 2 and 5 spots.)
Can you choose coins to exchange with the groups below to make the same total?
Can you find another way to do each one?
### Why do this problem?
This problem gives opportunities for children to practise numbers bonds in the context of a game. Children can try out different options to find sets with equal numbers of spots in them.
You could focus on encouraging learners to work systematically to find all possibilities.
### Possible Approach
Introduce the activity to the class on an interactive whiteboard and ask them to choose a number to match from the three choices $5$, $8$ and $11$. The 'coins' in the game have the same number of spots as the number they represent which makes them easier for children to work with than either toy coins or real money. It would be possible to use real coins or toy money as an alternative.
Ask the children what each of the target numbers is in turn: $5$, $8$ and $11$. Then see if they can suggest different sets of coins that have the same value and try them out using the interactivity.
The children could then go on to create their own equivalent sets of coins, using coins cut out from this doc and pdf card showing coins worth $1$, $2$ and $5$.
### Key questions
What is the total you've got to make?
How many more do you need?
Can you do it in a different way?
What is the largest coin you could use?
Could you make that amount with just twos?
### Possible extension
Children can choose their own target numbers and see how many different equivalent sets they can make using coins worth $1$, $2$ and $5$. They could use real coins instead of the printed version and even move on to higher denominations such as $10$p or $20$p.
Learners could try Weighted Numbers which uses more numbers.
### Possible support
Plenty of practice with exchanging small collections of coins may be needed by some children. Understanding that $5$ penny pieces are worth the same as one $5$p piece is tricky and may take time to establish with young children.
http://physics.stackexchange.com/questions/38404/how-is-gauss-law-integral-form-arrived-at-from-coulombs-law-and-how-is-the | # How is Gauss' Law (integral form) arrived at from Coulomb's Law, and how is the differential form arrived at from that?
On a similar note: when using Gauss' Law, do you even begin with Coulomb's law, or does one take it as given that flux is the surface integral of the Electric field in the direction of the normal to the surface at a point?
-
While it's healthy to know these derivations, you should keep in mind that Gauss's law is more general than Coulomb's law. Coulomb's law is only true if the charges are stationary, there are no changing magnetic fields, etc. But Gauss's law is true under all circumstances. So Gauss's law is more than just a consequence of Coulomb's law. – Steve B Nov 23 '12 at 13:31
## 1 Answer
Let us for simplicity consider $n$ point charges $q_1$, $\ldots$, $q_n$, at positions $\vec{r}_1$, $\ldots$, $\vec{r}_n$, in the electrostatic limit, with vacuum permittivity $\epsilon_0$.
Now let us sketch one possible strategy to prove Gauss' law from Coulomb's law:
1. Deduce from Coulomb's law that the electric field at position $\vec{r}$ is $$\tag{1} \vec{E}(\vec{r})~=~ \sum_{i=1}^n\frac{q_i }{4\pi\epsilon_0}\frac{\vec{r}-\vec{r}_i}{|\vec{r}-\vec{r}_i|^3} .$$
2. Deduce the charge density $$\tag{2} \rho(\vec{r})~=~\sum_{i=1}^n q_i\delta^3(\vec{r}-\vec{r}_i).$$
3. Recall the following mathematical identity $$\tag{3}\vec{\nabla}\cdot \frac{\vec{r}}{|\vec{r}|^3}~=~4\pi\delta^3(\vec{r}) .$$ (This Phys.SE answer may be useful in proving eq.(3), which may also be written as $\nabla^2\frac{1}{|\vec{r}|}=-4\pi\delta^3(\vec{r})$).
4. Use eqs. (1)-(3) to prove Gauss' law in differential form $$\tag{4} \vec{\nabla}\cdot \vec{E}~=~\frac{\rho}{\epsilon_0} .$$
5. Deduce Gauss' law in integral form via the divergence theorem.
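The purely mathematical identity in step 3 can be checked symbolically away from the origin, where the delta function vanishes. Here is a small sympy sketch of my own verifying that the divergence of $\vec{r}/|\vec{r}|^3$ is zero for $\vec{r}\neq 0$:
````
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
F = [v / (x**2 + y**2 + z**2)**sp.Rational(3, 2) for v in (x, y, z)]   # r / |r|^3

div = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
print(sp.simplify(div))   # 0  (the delta function in eq. (3) lives at the origin)
````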
-
Is 3 Poisson's equation, a generalisation of it or a subdivision of it? And thanks for the answer- not too unfathomable. – Alyosha Sep 26 '12 at 19:17
I updated the answer. Eq.(3) is a mathematical identity, while Poisson's equation has physical content. – Qmechanic♦ Sep 26 '12 at 19:35
– Qmechanic♦ Sep 30 '12 at 16:16
I intuit equation (2), but why cube $\delta(\mathbf{r}-\mathbf{r_i})$? – Alyosha Mar 12 at 13:17
@Alyosha: It's standard notation for the 3-dimensional delta function $\delta^3(\vec{r})~:=~\delta(x)\delta(y)\delta(z)$. – Qmechanic♦ Mar 12 at 14:27
http://mathoverflow.net/questions/93382/a-modified-divisor-game | ## A modified divisor game
Consider a two player game. There are N balls marked 1 to N. A move consists of removing a ball n and all other balls which are divisors of n (including 1). The players alternate the moves. The one who takes the last ball wins the game.
E.g. [1, 2, 3, 4, 5] --[Player 1 (4, 2, 1)]--> [3, 5] --[Player 2 (3)]--> [5] --[Player 1 (5)]--> []. Player 1 wins. The () contains the balls a player removes.
I tried solving the problem for N < 10 manually and was able to observe that it was always possible to force victory for the player who starts the game. I also know that this is always the case for all N from here. Can someone share the proof of this result and the playing strategy?
I tried to use the strategy stealing argument from the game of chomp but I am not sure if it is applicable here.
-
## 1 Answer
One can indeed prove that the first player has a winning strategy using the classical strategy stealing argument. If the second player had a winning strategy, the first player would just play "1" and then follow the second player's winning strategy. This works since any move by the second player would also remove the number "1", thus the state of the game after the second player's turn would be a valid state of the game after a single move.
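For small $N$ the claim is also easy to confirm by brute force; here is a short memoized search (my own sketch, not needed for the proof above):
````
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(state):
    # state: frozenset of balls still on the table; an empty state means the
    # previous player took the last ball, so the player to move has lost
    if not state:
        return False
    for n in state:
        after = frozenset(m for m in state if n % m != 0)   # remove n and all its divisors
        if not mover_wins(after):
            return True
    return False

for N in range(1, 11):
    print(N, mover_wins(frozenset(range(1, N + 1))))        # True for every N
````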
-
Well, now I understand that the crux of the strategy stealing is finding a "zero" move which is a subset of any other legal move. Though we can figure out the winning moves with the aid of a computer, is it possible to find a general strategy? Is there a mathematical formula, as in the case of the game of Nim, or is it proven that one can't be found? – forcebrute Apr 7 2012 at 6:12
I did not think about this particular game, but in chomp it is a notable open problem. – Ori Gurel-Gurevich Apr 7 2012 at 17:48
http://math.stackexchange.com/questions/9340/bounding-the-series-sum-m-leq-x-m-neq-n-frac1-logm-n | # Bounding the series $\sum_{m\leq x,m\neq n}\frac{1}{|\log(m/n)|}$
I am trying to reproduce the following bound:
$\sum_{1\leq m\leq x, m\neq n}\frac{1}{|\log(m/n)|}=O(x\log(x))$,
for $x\geq 2$ and some $n$, $1\leq n\leq x$ (the implicit constant shouldn't depend on $n$).
I have tried to bound each term by $\frac{1}{|\log(1\pm 1/x)|}$, but this doesn't seem to be good enough. Splitting the sum at $\log(x)$ also didn't help. Any ideas?
Thank you very much for your comments or hints.
-
I think you may need to split it at kn for some constant k. The sum up to kn will not depend on x, so is negligible as x gets very large (but will grow with n). Above kn you can change over to an integral. Hope this helps-I haven't checked that it works. – Ross Millikan Nov 7 '10 at 23:33
sorry, "fixed $n$", was imprecise, I should be able to choose e.g. $n=x$ and this bound should still hold. – Troy K. Nov 8 '10 at 0:42
## 2 Answers
This answer corresponds to the original question, where $n$ was assumed fixed.
Considering the corresponding integral $\int_a^x {\frac{{du}}{{\log (u/n)}}} = n\int_{a/n}^{x/n} {\frac{{du}}{{\log (u)}}}$ (where $a$ is some constant) and the fact that the logarithmic integral function li(x) is $O(x/\log(x))$ as $x \to \infty$, your bound should be $O(x/\log(x))$ (and not $O(x\log(x))$).
-
Thanks, sorry about the confusion with the "fixed $n$". – Troy K. Nov 8 '10 at 1:10
Assuming $1\leq n \leq x$, split the sum into two parts, $1\leq m <n$ and $n<m\leq x$.
To handle the first part, make the variable change $m\to n-k$, and notice that $$\sum_{1\leq m <n } \frac{1}{|\log\frac{m}{n}|} = - \sum_{1\leq k \leq n-1} \frac{1}{\log\frac{n-k}{n}}.$$ Since $-\log\frac{n-k}{n} > \frac{k}{n}$ for $1\leq k \leq n-1$, this last sum is $$< n \sum_{1\leq k \leq n-1} \frac{1}{k} = O(n\log n) = O(x\log x).$$
The second part can be handled similarly making the variable change $m\to n+k$.
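(A crude numerical illustration of the bound, my own sketch with arbitrary choices of $x$ and $n$: the ratio of the sum to $x\log x$ stays of moderate size.)
````
from math import log

def S(x, n):
    return sum(1.0 / abs(log(m / n)) for m in range(1, x + 1) if m != n)

for x, n in [(1000, 1), (1000, 500), (1000, 1000), (5000, 2500)]:
    print(x, n, S(x, n) / (x * log(x)))
````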
Remark: Out of curiosity, are you computing moments of the Riemann zeta-function?
-
– Troy K. Mar 6 '11 at 18:43
http://mathoverflow.net/questions/112102/computing-a-density-function-for-the-integral-of-a-stochastic-process-given-its | ## Computing a density function for the integral of a stochastic process, given its transition function
$P$ is a one-dimensional Markov stochastic process that runs on time interval $[0, t_f]$. I know its transition function: $P(0) = x_0$ and for any $0 \le t_a < t_b \le t_f$, the function $f(x_b | x_a, t_a, t_b)$ describes the probability that $P(t_b) = x_b$ given that $P(t_a) = x_a$ (so $f$ is a density function in its first parameter).
Now, let $I$ be the random variable described by $\int_0^{t_f} P(s) ds$ for a random realization of $P$. Is it possible to find a density function for $I$ in terms of $f$ and $x_0$?
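To make the object concrete: even without a formula, the law of $I$ can be approximated by Monte Carlo, sampling the chain on a time grid from the transition function and integrating numerically. A sketch of my own, with the Brownian transition density standing in for $f$ purely as a placeholder:
````
import numpy as np

rng = np.random.default_rng(0)

def sample_next(x_a, t_a, t_b):
    # placeholder transition sampler: Brownian motion, P(t_b) ~ N(x_a, t_b - t_a)
    return rng.normal(x_a, np.sqrt(t_b - t_a))

def sample_I(x0=0.0, t_f=1.0, steps=200):
    ts = np.linspace(0.0, t_f, steps + 1)
    xs = np.empty(steps + 1); xs[0] = x0
    for i in range(steps):
        xs[i + 1] = sample_next(xs[i], ts[i], ts[i + 1])
    return np.sum(0.5 * (xs[1:] + xs[:-1]) * np.diff(ts))   # trapezoidal rule

samples = np.array([sample_I() for _ in range(5000)])
print(samples.std())   # for this placeholder the exact std of I is sqrt(t_f**3 / 3) ~ 0.577
````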
-
Is it a Markov process? If not, you might need more than that to describe the process. – Robert Israel Nov 11 at 18:58
Ah, yes it is, thank you. I just edited that detail into the post. – GB Nov 11 at 19:10
The following paper might help you identify situations where there is an explicit expression for the moment generating function of the integral arxiv.org/abs/0710.1599 (Albanese and Lawi, 2007.) – Paul Tupper Nov 12 at 18:27
http://mathoverflow.net/revisions/39287/list
# How to extract the diagonal from a bivariate generating function
Let $F(s,t)= \sum_{i,j} f(i,j) s^i t^j$, which is a bivariate generating function of the numbers $f(i,j)$ for some enumeration problem. Sometimes we know $F(s,t)$, but what we really need is the number $f(i,i)$ with the generating function $G(x)= \sum_i f(i,i) x^i$, called the diagonal of $F$.
The question is how to obtain the diagonal $G(x)$ from $F(s,t)$? Furthermore, how to get the asymptotic formula for $f(i,i)$ from $F(s,t)$?
One way to do this is shown as follows: $G(x)$ is the constant term of $F(s,\frac{x}{s})$ regarded as a Laurent series in $s$ whose coefficients are power series in $x$. And we can use Cauchy's integral theorem and the Residue Theorem to compute it.
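As a toy illustration in a simple rational case (an example of my own, not the $F$ below): for $F(s,t)=\frac{1}{1-s-t}$ one has $f(i,j)=\binom{i+j}{i}$, so the diagonal consists of the central binomial coefficients and $G(x)=\frac{1}{\sqrt{1-4x}}$.
````
import sympy as sp

s, t = sp.symbols('s t')
N = 6
# truncated expansion of F(s,t) = 1/(1-s-t) = sum_k (s+t)^k
F_trunc = sp.expand(sum((s + t)**k for k in range(2 * N + 1)))
diag = [F_trunc.coeff(s, n).coeff(t, n) for n in range(N + 1)]
print(diag)   # [1, 2, 6, 20, 70, 252, 924], i.e. C(2n, n)
````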
This method works when $F(s,t)$ is rational. But when $F(s,t)$ is more complicated, it does not seem workable. An example is $F(s,t)=\frac{4 s t}{\sqrt{1-4 s^2}\sqrt{1-4 t^2}(\sqrt{1-4 s^2}\sqrt{1-4 t^2}-4 s t)}$.
http://mathoverflow.net/questions/45272/necessary-condition-for-a-graph-to-be-non-hamiltonian | ## Necessary condition for a graph to be Non-Hamiltonian
Let us denote the edges incident on vertices of valence 2 as "required", since these edges have to be covered by a Hamiltonian circuit, if one exists on that (undirected) graph. Given a graph on which a proper subset of the "required" edges along with two edges incident on a vertex of valency $\geq 3$ form a cycle, can anything related to the Hamiltonicity of the graph be claimed? A few basic rules for the existence of Hamiltonian cycles are listed here: http://www.mit.edu/~miforbes/ham_cycle.pdf Can rule (4) be extended in any way to answer this query?
-
This confuses me.. if there is a cycle of "required edges" then surely this is a connected component of the graph which is therefore disconnected and not hamiltonian. – gordon-royle Nov 8 2010 at 7:26
I share Gordon's confusion. Also, it seems like the title should read: Is this a sufficient condition for a graph to be non-Hamiltonian, and the answer should be yes. – Tony Huynh Nov 8 2010 at 9:33
Sorry.. the cycle is formed by all "required" edges except two. These two are the two edges incident to a vertex of valence $\geq 3$. – Esha Nov 8 2010 at 10:34
Simultaneously crossposted: cstheory.stackexchange.com/questions/2777/… – Tsuyoshi Ito Nov 8 2010 at 11:20
cstheory.stackexchange link: cstheory.stackexchange.com/questions/2777/… – Esha Nov 9 2010 at 2:25
http://nrich.maths.org/608/solution | ### N000ughty Thoughts
Factorial one hundred (written 100!) has 24 noughts when written in full, and 1000! has 249 noughts. Convince yourself that the above is true. Perhaps your methodology will help you find the number of noughts in 10 000! and 100 000! or even 1 000 000!
### Fac-finding
Lyndon chose this as one of his favourite problems. It is accessible but needs some careful analysis of what is included and what is not. A systematic approach is really helpful.
### Binomial Coefficients
An introduction to the binomial coefficient, and exploration of some of the formulae it satisfies.
# Factorial
##### Stage: 4 Challenge Level:
The problem here is to find the number of zeros at the end of the number which is the product of the first hundred positive integers. We call this '100 factorial' and write it 100!
For example 4! = 1x2x3x4 = 24 and 5! = 1x2x3x4x5 = 120.
Well done Xinxin of Tao Nan School, Singapore who sent in the solution below in record time and also the Key Stage 3 Maths Club at Strabane Grammar School, N. Ireland and James of Hethersett High School, Norfolk.
The Strabane group said "We started with a 10 x 10 number square and worked out 2!, 3!, 4! ... etc. We quickly realised that the number of zeros at the end of 100! depends on the number of tens appearing within the product, which in turn depends on the number of twos and fives", and Xinxin's solution is reproduced in full below.
This question basically asks about the number of zeros ending the number 100!.
Since 2x5 equals 10, the key to answering this question is finding out the number of matches of 2 and 5 occurring in the prime factors of 100!.
It is obvious that when 100! is factorised, there are more 2's than 5's. As a result, all the 5's will find matches. Counting the number of 5's gives the number of matches.
First, all the multiples of 5:
5=1x5
10=2x5
15=3x5
20=4x5
...
There are a total of 20 multiples of 5. As a result, we have already found 20 matches, and thus 20 zeros.
However, it is noted that four numbers contribute two 5's to the factors of 100!. They are:
25=5x5
50=2x5x5
75=3x5x5
100=2x2x5x5
As a result, there are, in fact, 24 5's in the factors of 100!.
Thus, 100! ends with 24 zeros.
Done on the 4th of November 1998 by:
Li Xinxin
Tao Nan School
Singapore.
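For readers who prefer to check such counts by machine, the 24 (and the 249 for 1000! quoted at the top of this page) drop out of a few lines of Python (a small addition of my own):
````
def trailing_zeros_of_factorial(n):
    # count the factors of 5 in n! (there are always enough 2's to pair them with)
    count, p = 0, 5
    while p <= n:
        count += n // p
        p *= 5
    return count

print(trailing_zeros_of_factorial(100))    # 24
print(trailing_zeros_of_factorial(1000))   # 249
````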
http://math.stackexchange.com/questions/122298/getting-the-angles-of-a-non-right-triangle-when-all-lengths-are-known/122303 | # Getting the angles of a non-right triangle when all lengths are known
I have a triangle for which I know the lengths of all the sides, but I need to know the angles. This needs to work with non-right triangles as well.
I know it is possible, and I could have easily done this years ago when I was in trig, but it has completely slipped my mind.
I'd prefer a solution that I can code into a function, or something that does not require constructing right triangles from it.
-
– Austin Mohr Mar 19 '12 at 23:36
Recall the cosine law: if $a,b,c$ are the lengths of the triangle and $\theta$ is the angle opposite $c$ then $c^2=a^2+b^2-2ab\cos\theta$ – you Mar 19 '12 at 23:37
## 1 Answer
As has been mentioned in comments, the formula that you're looking for is the Law of Cosines. The three formulations are: $$c^2 = a^2 + b^2 - 2ab \cos(C)$$ $$b^2 = a^2 + c^2 -2ac \cos(B)$$ $$a^2 = b^2 + c^2 - 2bc \cos(A)$$
You can use this law to find all three angles. Alternatively, once you have one of the angles, you can find the next using the Law of Sines, which states that: $${\sin{A} \over a} = {\sin{B} \over b} = {\sin{C} \over c}$$
And of course, once you have solved for two of the angles, you need only subtract their sum from $180$ degrees to get the measure of the third.
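Since the question asks for something that can be coded into a function, here is a minimal sketch (my own, based directly on the Law of Cosines above; it assumes the three lengths really do form a triangle):
````
from math import acos, degrees

def triangle_angles(a, b, c):
    # returns the angles (in degrees) opposite sides a, b, c
    A = degrees(acos((b*b + c*c - a*a) / (2*b*c)))
    B = degrees(acos((a*a + c*c - b*b) / (2*a*c)))
    C = 180.0 - A - B
    return A, B, C

print(triangle_angles(3, 4, 5))   # (36.87..., 53.13..., 90.0)
````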
-
@Théophile: (Now if we delete all these comments, no one will ever know...) – The Chaz 2.0 Mar 20 '12 at 15:55
http://math.stackexchange.com/questions/190880/number-of-divisors?answertab=active | # Number of divisors
How can I find the number of divisors of N which are not divisible by K?
($2 \leq N, K \leq 10^{15}$)
One of the easiest approaches I have thought of is to first calculate the total number of divisors of n using its prime factorization (via a Sieve of Eratosthenes) and then subtract from it the number of divisors of n that are also divisible by k.
In order to calculate the number of divisors of n that are also divisible by k, I will have to find the total number of divisors of n/k, which can also be done via its prime factorization.
My problem is that since n and k can be very large, doing prime factorization twice is very time consuming.
Please suggest another approach which requires me to do the prime factorization only once.
-
Which algorithm are you using for Factorizing the integer? – Quixotic Sep 4 '12 at 10:41
Sieve of Eratosthenes – Snehasish Sep 4 '12 at 11:25
## 2 Answers
Your idea looks fine. But for integer factorization you can implement Pollard's rho algorithm or the even faster Elliptic Curve Method.
You can test your algorithm at here and here.
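To complement the factorization step, here is a sketch of my own of the counting step described in the question, which needs only the factorization of n: divisors of n that are multiples of k correspond to divisors of n/k when k divides n, and the exponents of n/k can be read off by dividing n/k by the primes of n.
````
def divisors_not_multiple_of_k(n_factorization, n, k):
    # n_factorization: dict {prime: exponent} for n, e.g. {2: 3, 5: 1} for 40
    d_n = 1
    for e in n_factorization.values():
        d_n *= e + 1                      # d(n)
    if n % k != 0:
        return d_n                        # no divisor of n is a multiple of k
    m = n // k
    d_m = 1
    for p in n_factorization:
        e = 0
        while m % p == 0:                 # exponent of p in n/k
            m //= p
            e += 1
        d_m *= e + 1
    return d_n - d_m                      # d(n) - d(n/k)
````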
-
Here is some code in C:
````
/* Count the divisors of n by trial division up to sqrt(n).
   Divisors come in pairs (j, n/j); a perfect square contributes its
   square root only once, hence the p = p - 1 correction below.
   Comparing j*j <= n avoids calling sqrt() and any floating-point issues. */
int NUM_DIVISORS(int n)
{
    int j;
    int p = 0;
    int start = (n % 2 == 0) ? 2 : 3;   /* an odd n has only odd divisors,  */
    int step  = (n % 2 == 0) ? 1 : 2;   /* so skip even trial divisors then */

    if (n == 1)
        return 1;                       /* edge case: 1 has a single divisor */

    for (j = start; (long long)j * j <= n; j += step) {
        if (n % j == 0) {
            p = p + 2;                  /* count both j and n/j             */
            if (j * j == n)
                p = p - 1;              /* j == n/j was counted twice above */
        }
    }
    p = p + 2;                          /* the trivial divisors 1 and n     */
    return p;
}//end of function
````
You can also pre-initialize an array of prime numbers up to a certain bound, then use it to make your code run faster.
Also take a look at my answer
-
Your code is WRONG. I think you have given me the code of total number of divisors of a number !!! – Snehasish Sep 4 '12 at 15:51
@Snehasish What's wrong with it? I uploaded the code as a hint because you wanted "total number of divisors of 'n'" – PooyaM Sep 4 '12 at 16:01
I want to calculate total number of divisors of n which are also divisible by k without calculating prime factorization twice. – Snehasish Sep 4 '12 at 19:07
http://mathoverflow.net/questions/87877?sort=newest | ## Jacobi’s equality between complementary minors of inverse matrices
What's a quick way to prove the following fact about adjugates of an invertible matrix $A$ and its inverse?
Let $A[I,J]$ denote the submatrix of an $n \times n$ matrix $A$ obtained by keeping only the rows indexed by $I$ and columns indexed by $J$. Then
$$|\det A[I,J]| = | \det A \cdot \det A^{-1}[J^c,I^c]|,$$ where $I^c$ stands for $[n] \setminus I$, for $|I| = |J|$. It is trivial when $|I| = |J| = 1$ or $n-1$. This is apparently proved by Jacobi, but I couldn't find a proof anywhere in books or online. Horn and Johnson listed this as one of the advanced formulas in their preliminary chapter, but didn't give a proof. In general what's a reliable source to find proofs of all these little facts? I ran into this question while reading Macdonald's book on symmetric functions and Hall polynomials, in particular page 22 where he is explaining the determinantal relation between the elementary symmetric functions $e_\lambda$ and the complete symmetric functions $h_\lambda$.
I also spent 3 hours trying to crack this nut, but can only show it for diagonal matrices :(
Edit: It looks like Ferrar's book on Algebra subtitled determinant, matrices and algebraic forms, might carry a proof of this in chapter 5. Though the book seems to have a sexist bias.
-
Wow - I did not know that algebra proofs could have a "sexist bias". I am too curious to let it pass --- what do you mean exactly? – Federico Poloni Feb 8 2012 at 10:19
I was just referring to the preface, where he said the book is suitable for undergraduate students, or boys in their last years of school. Maybe the word "boy" has a gender neutral meaning back then? – John Jiang Feb 8 2012 at 19:39
## 2 Answers
The key word under which you will find this result in modern books is "Schur complement". Here is a self-contained proof. Assume $I$ and $J$ are $(1,2,\dots,k)$ for some $k$ without loss of generality (you may reorder rows/columns). Let the matrix be $$M=\begin{bmatrix}A & B\\ C & D\end{bmatrix},$$ where the blocks $A$ and $D$ are square. Assume for now that $A$ is invertible --- you may treat the general case with a continuity argument. Let $S=D-CA^{-1}B$ be the so-called Schur complement of $A$ in $M$.
You may verify the following identity ("magic wand Schur complement formula") $$\begin{bmatrix}A & B\\ C & D\end{bmatrix} = \begin{bmatrix}I & 0\\ CA^{-1} & I\end{bmatrix} \begin{bmatrix}A & 0\\ 0 & S\end{bmatrix} \begin{bmatrix}I & A^{-1}B\\ 0 & I\end{bmatrix}. \tag{1}$$ By taking determinants, $$\det M=\det A \det S. \tag{2}$$ Moreover, if you invert term-by-term the above formula you can see that the (2,2) block of $M^{-1}$ is $S^{-1}$. So your thesis is now (2).
Note that the "magic formula" (1) can be derived via block Gaussian elimination and is much less magic than it looks at first sight.
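A quick numerical check of the resulting minor identity, in the symmetric form $\det M[I,I] = \det M \cdot \det M^{-1}[I^c,I^c]$ that follows from (2) (my own snippet):
````
import numpy as np

rng = np.random.default_rng(0)
n, I = 6, [0, 1, 2]
Ic = [i for i in range(n) if i not in I]

M = rng.standard_normal((n, n))
Minv = np.linalg.inv(M)

lhs = np.linalg.det(M[np.ix_(I, I)])
rhs = np.linalg.det(M) * np.linalg.det(Minv[np.ix_(Ic, Ic)])
print(np.isclose(lhs, rhs))   # True
````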
-
I guess for any thesis involving minors the Schur complement formula would be among the first things to try. – John Jiang Feb 8 2012 at 10:30
This is nothing but the Schur complement formula. See my book Matrices; Theory and Applications, 2nd ed., Springer-Verlag GTM 216, page 41.
Up to some permutation of rows and columns, we may assume that $I=J=[1,p]$. Let us write blockwise $$A=\begin{pmatrix} W & X \\ Y & Z \end{pmatrix}.$$ On the one hand, we have (Schur complement formula) $$\det A=\det W\cdot\det(Z-YW^{-1}X).$$ On the other hand, we have $$A^{-1}=\begin{pmatrix} \cdot & \cdot \\ \cdot & (Z-YW^{-1}X)^{-1} \end{pmatrix},$$ which gives the desired result.
These formulas are obtained by factorizing $A$ into $LU$.
-
Your answer is equally valid. Thanks! – John Jiang Feb 8 2012 at 10:27
http://mathoverflow.net/questions/34726/ados-theorem-for-metric-lie-algebras | ## Ado’s theorem for metric Lie algebras?
### Background
Ado's Theorem states that every finite-dimensional Lie algebra over a field of zero characteristic admits a faithful representation.
More precisely, if $\mathfrak{g}$ is a finite-dimensional Lie algebra over a field $K$ of zero characteristic, then there is a Lie algebra monomorphism $\rho: \mathfrak{g} \to \operatorname{End}(K^N)$ for some $N$, where $\operatorname{End}(K^N)$ is the Lie algebra of endomorphisms of $K^N$ relative to the commutator.
Now let $\mathfrak{g}$ be a finite-dimensional Lie algebra over $K$ and let $\langle-,-\rangle$ be an ad-invariant symmetric inner product; that is, a nondegenerate symmetric $K$-bilinear form such that for all $x,y,z \in \mathfrak{g}$, `$$\langle [x,y], z\rangle = \langle x,[y,z]\rangle.$$` We call such a Lie algebra a metric Lie algebra.
For example, $\operatorname{End}(K^N)$ itself is a metric Lie algebra, relative to the inner product $$\langle X,Y \rangle := \operatorname{Tr}(XY),$$ for endomorphisms $X,Y$.
Semisimple and, more generally, reductive Lie algebras are metric, but there are others. The relevant structure theorem is due to Medina and Revoy (MathSciNet link).
A final definition, added after Victor's comment below, is the following. Given two metric Lie algebras, the orthogonal direct sum of the underlying vector spaces can again be given the structure of a metric Lie algebra, in which the original Lie algebras sit as orthogonal ideals. This gives rise to the notion of an indecomposable metric Lie algebra as one which is not isomorphic (as metric Lie algebra) to the direct product of orthogonal (nonzero) ideals.
The following question is motivated by trying to construct Chern-Simons forms for Lie groups admitting a bi-invariant metric. But this motivation aside, I think the question is natural.
### Question
Does every (finite-dimensional) indecomposable metric Lie algebra admit a faithful representation $$\rho: \mathfrak{g} \to \operatorname{End}(K^N),$$ for some $N$, such that for all $x,y\in\mathfrak{g}$, $$\langle x,y \rangle = c \operatorname{Tr}(\rho(x)\rho(y)),$$ for some nonzero $c \in K$?
-
Is there a particular class of metric Lie algebras that you are interested in? There is a silly counterexample for a metric Lie algebra of the form $g_1\oplus g_2,$ where the summands $g_i$ are split simple, the restriction of the metric to $g_i$ is the $c_i$-multiple of the Killing form on $g_i$, and the ratio $c_1/c_2$ is irrational. – Victor Protsak Aug 6 2010 at 4:12
Sorry, I don't understand what $L$ is. Is it $K$? Or is it just a formal symbol? – Pierre-Yves Gaillard Aug 6 2010 at 5:57
PYG: My guess is that, L being the first character in "Lie", it indicates that $\text{End}_L(K^N)$ is viewed as a Lie algebra. – Victor Protsak Aug 6 2010 at 6:03
@Pierre-Yves: As Victor pointed out, I was making a distinction between the Lie algebra and the underlying associative algebra by attaching the $L$ to $\operatorname{End}$. Given my choice of notation for the field, I can see how this was confusing. I will edit. – José Figueroa-O'Farrill Aug 6 2010 at 10:19
@Victor: Sorry, I forgot to add that the indecomposability condition! Thanks for pointing this out. I will edit. – José Figueroa-O'Farrill Aug 6 2010 at 10:20
## 1 Answer
The answer is negative if $\mathfrak g$ is solvable and non commutative. It follows from the "Critère de Cartan" (Bourbaki, algèbres de Lie, chapitre 1, par. 5).
-
Welcome, Michel! – Alain Valette Nov 12 2011 at 17:40
http://mathhelpforum.com/calculus/7385-story-drop-problems.html | # Thread:
1. ## Story drop Problems
How do I solve these?
A stone is dropped from the edge of a roof, and hits the ground with a velocity of -165 feet per second. How high (in feet) is the roof?
and
A particle is moving with acceleration a(t) = 24t+4. Its position at time t=0 is s(0) = 14 and its velocity at time t=0 is v(0) = 3 What is its position at time t=6? ________
2. Originally Posted by killasnake
A stone is dropped from the edge of a roof, and hits the ground with a velocity of -165 feet per second. How high (in feet) is the roof?
$v_i=0$ because it was from rest
$v_f=-165$ I suppose down is negative in this problem
$a=-9.8$ If you don't know why then you're in trouble, however that is in m/s so you have to convert to feet (which I forgot the conversion so you'll have to do it yourself)
$\Delta d=?$ This is what we're trying to find
Notice that they don't give time, so we'll use an equation without time.
The only one that comes to mind is: $v_f^2=v_i^2+2a\Delta d$
So then we switch things around to get: $\Delta d=\frac{v_f^2-v_i^2}{2a}$
So substitute to get the answer
3. Originally Posted by Quick
$v_i=0$ because it was from rest
$v_f=-165$ I suppose down is negative in this problem
$a=-9.8$ If you don't know why then you're in trouble, however that is in m/s so you have to convert to feet (which I forgot the conversion so you'll have to do it yourself)
$\Delta d=?$ This is what we're trying to find
Notice that they don't give time, so we'll use an equation without time.
The only one that comes to mind is: $v_f^2=v_i^2+2a\Delta d$
So then we switch things around to get: $\Delta d=\frac{v_f^2-v_i^2}{2a}$
So substitute to get the answer
That does fairly well for an answer Quick, except for one little problem:
Acceleration is in units of m/s^2, not m/s. (Or ft/s^2 if you prefer.)
Tsk, tsk! This is the second time I've had to tell you that!
-Dan
4. Originally Posted by Quick
$a=-9.8$ If you don't know why then you're in trouble, however that is in m/s so you have to convert to feet (which I forgot the conversion so you'll have to do it yourself)
s(t) = -16t^2 ft
v(t) = -32t ft/s
a(t) = -32 ft/s^2
5. Originally Posted by killasnake
A particle is moving with acceleration a(t) = 24t+4. Its position at time t=0 is s(0) = 14 and its velocity at time t=0 is v(0) = 3 What is its position at time t=6?
We've got the acceleration function, and we know s(0) and v(0) as initial conditions. So:
$v(t) - v(0) = \int_0^t \, a(t') \, dt' = \int_0^t \, (24t' + 4)dt' = (12t'^2 + 4t')|_0^t = 12t^2 + 4t$
So $v(t) = 12t^2 + 4t - 3$
Again:
$s(t) - s(0) = \int_0^t \, v(t') \, dt' = \int_0^t \, (12t'^2 + 4t' - 3)dt'$ $= (4t'^3 + 2t'^2 - 3t')|_0^t = 4t^3 + 2t^2 - 3t$
So $s(t) = 4t^3 + 2t^2 - 3t - 14$
Thus $s(6) = 4(6)^3 + 2(6)^2 - 3(6) - 14 = 904$ (in whatever units the problem is supposed to be written in.)
Piece of trivia for you: The coefficient of the t^3 term is constant. This means that the above equation for displacement is written in terms of a constant "jerk." (No kidding! The jerk, j, is defined as the first time derivative of the acceleration.)
-Dan
6. wow you guys are quick! thank you so much for the help!
7. Hello, killasnake!
1) A stone is dropped from the edge of a roof,
and hits the ground with a velocity of -165 ft/sec.
How high (in feet) is the roof?
You're expected to know the "free fall" formula: . $y(t) \:=\:h_o + v_ot - 16t^2$
. . where: . $\begin{Bmatrix}y(t) = & \text{height at time }t \\ h_o = & \text{initial height} \\ v_o = & \text{initial velocity} \\ t = & \text{time in seconds}\end{Bmatrix}$
In this problem, the stone is dropped: $v_o = 0$
. . Hence, the function is: . $y(t) \:=\:h_o - 16t^2$
and we want to determine $h_o.$
We are told that the stone hits the ground at -165 ft/sec.
When does this happen?
. . It happens when $y(t) = 0$
So we have: . $h_o - 16t^2\:=\:0\quad\Rightarrow\quad t = \frac{\sqrt{h_o}}{4}$ seconds.
Velocity is the derivative of position: . $v(t)\:=\:-32t$
At that time, we have: . $-32t \:=\:-165\quad\Rightarrow\quad-32\left(\frac{\sqrt{h_o}}{4}\right)\:=\:-165$
. . $-8\sqrt{h_o} \:=\:-165\quad\Rightarrow\quad \sqrt{h_o}\:=\:\frac{165}{8}\quad\Rightarrow\quad h_o \:=\:\left(\frac{165}{8}\right)^2$
Therefore: . $h_o\:=\:425.390625 \:\approx\:425.4$ feet.
8. Originally Posted by topsquark
We've got the acceleration function, and we know s(0) and v(0) as initial conditions. So:
$v(t) - v(0) = \int_0^t \, a(t') \, dt' = \int_0^t \, (24t' + 4)dt' = (12t'^2 + 4t')|_0^t = 12t^2 + 4t$
So $v(t) = 12t^2 + 4t - 3$
Again:
$s(t) - s(0) = \int_0^t \, v(t') \, dt' = \int_0^t \, (12t'^2 + 4t' - 3)dt'$ $= (4t'^3 + 2t'^2 - 3t')|_0^t = 4t^3 + 2t^2 - 3t$
So $s(t) = 4t^3 + 2t^2 - 3t - 14$
Thus $s(6) = 4(6)^3 + 2(6)^2 - 3(6) - 14 = 904$ (in whatever units the problem is supposed to be written in.)
-Dan
Slight correction on this problem.
v(t) is NOT $v(t) = 12t^2 + 4t - 3$
it is $v(t) = 12t^2 + 4t + 3$
and hence
s(t) is NOT $s(t) = 4t^3 + 2t^2 - 3t - 14$
it is $s(t) = 4t^3 + 2t^2 + 3t + 14$
so the answer is 968, not 904.
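As a quick check of the corrected result above, here is a short SymPy sketch (SymPy assumed available) that integrates a(t) = 24t + 4 with the initial conditions v(0) = 3 and s(0) = 14 and evaluates the position at t = 6:

```python
from sympy import symbols, integrate

t = symbols('t')

a = 24*t + 4                    # given acceleration
v = integrate(a, t) + 3         # velocity; the constant makes v(0) = 3
s = integrate(v, t) + 14        # position; the constant makes s(0) = 14

print(v)             # 12*t**2 + 4*t + 3
print(s)             # 4*t**3 + 2*t**2 + 3*t + 14
print(s.subs(t, 6))  # 968
```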
http://math.stackexchange.com/questions/42277/limit-of-the-derivative-of-a-function-as-x-goes-to-infinity | # Limit of the derivative of a function as x goes to infinity
I've been trying to solve the following problem:
Suppose that $f$ and $f'$ are continuous functions on $\mathbb{R}$, and that $\displaystyle\lim_{x\to\infty}f(x)$ and $\displaystyle\lim_{x\to\infty}f'(x)$ exist. Show that $\displaystyle\lim_{x\to\infty}f'(x) = 0$.
I'm not entirely sure what to do. Since there's not a lot of information given, I guess there isn't very much one can do. I tried using the definition of the derivative and showing that it went to $0$ as $x$ went to $\infty$ but that didn't really work out. Now I'm thinking I should assume $\displaystyle\lim_{x\to\infty}f'(x) = L \neq 0$ and try to get a contradiction, but I'm not sure where the contradiction would come from.
Could somebody point me in the right direction (e.g. a certain theorem or property I have to use?) Thanks
-
Hint: What is $\lim_{x \to \infty} \frac{f(x)}{x}$? – N. S. May 31 '11 at 5:14
## 5 Answers
Hint: If you assume $\lim _{x \to \infty } f'(x) = L \ne 0$, the contradiction would come from the mean value theorem (consider $f(x)-f(M)$ for a fixed but arbitrary large $M$, and let $x \to \infty$).
-
Ah ok, so $\displaystyle\lim_{x\to\infty}\frac{f(x) - f(M)}{x - M} = 0$, right? (Which I suppose is what user9176 was implying.) So just to make sure I'm clear on this, if we take $\frac{f(x) - f(M)}{x - M} = f'(c)$ for some $c \in (M, x)$ then as $x \to \infty$ the left-hand side goes to $0$. And since we take $M$ arbitrarily large does it follow that $c \to \infty$, and hence $\displaystyle\lim_{c \to \infty}f'(c) = 0$? – saurs May 31 '11 at 6:37
@sarus: I suggest proving the result by contradiction, that is by assuming $\lim _{x \to \infty } f'(x) = L \ne 0$ (as you originally tried). It may be comfortable for you to split into the cases $L>0$ and $L<0$. – Shai Covo May 31 '11 at 6:52
OK, I've got it now. Thanks for the help. – saurs May 31 '11 at 7:47
What's wrong with swapping the order of the limits? – Rhythmic Fistman May 31 '11 at 18:30
@Rhythmic Fistman: Can you please be more specific? – Shai Covo May 31 '11 at 19:03
It follows by a L'Hôpital slick trick: $\$ if $\rm\ f + f\:\:'\!\to L\$ as $\rm\ x\to\infty\$ then $\rm\ f\to L,\ f\:\:'\!\to 0\:,\$ since
$$\rm \lim_{x\to\infty}\ f(x)\ =\ \lim_{x\to\infty}\frac{e^x\ f(x)}{e^x}\ =\ \lim_{x\to\infty}\frac{e^x\ (\:f(x)+f\:\:'(x)\:)}{e^x}\ =\ \lim_{x\to\infty}\ (\:f(x)+f\:\:'(x)\:)$$
This L'Hospital rule trick achieved some notoriety due to the fact that the problem appeared in Hardy's classic calculus textbook A Course of Pure Mathematics, but with a less elegant solution. For example, see Landau; Jones: A Hardy Old Problem, Math. Magazine 56 (1983) 230-232.
-
You know that $\lim_{x \to \infty}f(x)$ and $\lim_{x \to \infty}f'(x)$ exist. Then by Lagrange's theorem there exists $c_n \in (n,n+1)$ such that $f(n+1)-f(n)=f'(c_n)$. Taking the limit as $n \to \infty$ you get that $\lim_{n \to \infty}f'(c_n)=0$. Since the limit exists, and there exists a sequence along which the limit of the function is $0$, it follows that $\lim_{x \to \infty}f'(x)=0$.
-
This is in response to an interesting observation made by Rhythmic Fistman in a comment below my (first) answer. We suppose that $\lim _{x \to \infty } f'(x) = L$ for some $L \in \mathbb{R}$. Then, from the definition of the derivative, $$L = \mathop {\lim }\limits_{x \to \infty } \mathop {\lim }\limits_{h \to 0} \frac{{f(x + h) - f(x)}}{h}.$$ As Rhythmic Fistman observed, naively changing the order of the limits gives rise to the equality $$L = \mathop {\lim }\limits_{h \to 0} \mathop {\lim }\limits_{x \to \infty } \frac{{f(x + h) - f(x)}}{h} = 0,$$ where the last equality follows from the assumption that $\lim _{x \to \infty } f(x)$ exists (finite). "Hence" the desired result $\lim _{x \to \infty } f'(x) = 0$. However, as the following counterexample shows, this procedure is not allowed in principle. Define a two-variable function $f$ by $f(x,h)=xe^{-|h|x}$. Analogously to the case in the original question (where the role of $f(x,h)$ is played by $\frac{{f(x + h) - f(x)}}{h}$), $$\mathop {\lim }\limits_{x \to \infty } f(x,h) = \mathop {\lim }\limits_{x \to \infty } xe^{ - |h|x} = 0,$$ for any $h \neq 0$. Hence, $$\mathop {\lim }\limits_{h \to 0} \mathop {\lim }\limits_{x \to \infty } f(x,h) = 0.$$ Also, for any $x \in \mathbb{R}$, $$\mathop {\lim }\limits_{h \to 0} f(x,h) = \mathop {\lim }\limits_{h \to 0} xe^{ - |h|x} = x$$ (this is analogous to the case in the original question, where $f'$ is assumed continuous on $\mathbb{R}$). Hence, $$\mathop {\lim }\limits_{x \to \infty } \mathop {\lim }\limits_{h \to 0} f(x,h) = \infty \neq 0 = \mathop {\lim }\limits_{h \to 0} \mathop {\lim }\limits_{x \to \infty } f(x,h).$$
-
Thanks, good counter example, I think I get it. So in general limits don't commute - are there any conditions under which they do? – Rhythmic Fistman Jun 8 '11 at 21:15
– Shai Covo Jun 9 '11 at 14:34
– Shai Covo Jun 9 '11 at 14:41
To expand a little on my comment, since $\lim_{x \to \infty} f(x) = L$, we get
$$\lim_{x \to \infty} \frac{f(x)}{x} =0 \,.$$
But also, since $\lim_{x \to \infty} f'(x)$ exists, by L'Hospital we have
$$\lim_{x \to \infty} \frac{f(x)}{x}= \lim_{x \to \infty} f'(x) \,.$$
Note that using the MVT is basically the same proof, since that's how one proves the L'H in this case....
P.S. I know that if $L \neq 0$ one cannot apply L'H to $\frac{f(x)}{x}$, but one can cheat in this case: apply L'H to $\frac{xf(x)}{x^2}$ ;)
-
This is a nice answer. As for the P.S., rather than your "cheat", it seems more natural to just replace $f(x)$ with $f(x) - L$, doesn't it? – Pete L. Clark May 31 '11 at 15:17
:) we all miss simple things, eh? – N. S. May 31 '11 at 19:35
well, I can't speak for "all", but I know I do... – Pete L. Clark Jun 1 '11 at 16:54
http://mathhelpforum.com/math-topics/58419-turning-point-parabola.html | Thread:
1. Turning Point of a Parabola
Hello,
How do I find the turning point of a parabola?
Thanks in advance
2. Originally Posted by J4553
Hello,
How do I find the turning point of a parabola?
Thanks in advance
Complete the square.
Alternatively, use calculus.
Post a more specific question (and state your mathematical background).
3. Originally Posted by J4553
Hello,
How do I find the turning point of a parabola?
Thanks in advance
The turning point of a parabola is the vertex of the parabola.
For the given equation of parabola, you can find the vertex by completing the square in the form
$y = a(x-h)^2+k$
where (h, k) is vertex.
If, suppose equation of a parabola is
$y = 3x^2-6x+5$
Then, complete the square, so, eqn becomes,
$y = 3(x-1)^2+2$
so, vertex = (1, 2)
so, turning point = (1, 2)
Did you get it now???
4. Shyam - no, I'm afraid that just confuses me even more.
I think my lecturer gave the formula: -b/2a to find the x-component of the vertex, then may have said to substitute that value into another equation to find the y-component.
Does this sound right or am I way off?
5. Originally Posted by J4553
Shyam - no, I'm afraid that just confuses me even more.
I think my lecturer gave the formula: -b/2a to find the x-component of the vertex, then may have said to substitute that value into another equation to find the y-component.
Does this sound right or am I way off?
that formula is correct, for example, take this function;
y=x^2+6x+8
b=6, a=1
-b/2a=-6/2(1)=-3
sub -3 into the function to get the value of y
y=(-3)^2+6(-3)+8=-1
therefore, the vertex for this example is (-3,-1)
Shyam is simply using another method than the one you are instructed to use.
6. Perfect - thanks!
7. Originally Posted by J4553
Shyam - no, I'm afraid that just confuses me even more.
I think my lecturer gave the formula: -b/2a to find the x-component of the vertex, then may have said to substitute that value into another equation to find the y-component.
Does this sound right or am I way off?
If, suppose equation of a parabola is
$y = 3x^2-6x+5$
$x = \frac{-b}{2a}=\frac{-(-6)}{(2)(3)}=1$
$y = 3(1)^2-6(1)+5$
y = 2
so, you have (1, 2) as the vertex or turning point of the parabola.
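For anyone who wants to automate the x = -b/(2a) recipe, here is a minimal plain-Python sketch (the function name is just illustrative) that returns the turning point of y = ax^2 + bx + c:

```python
def vertex(a, b, c):
    """Turning point (vertex) of y = a*x**2 + b*x + c, using x = -b/(2a)."""
    x = -b / (2 * a)
    y = a * x**2 + b * x + c
    return x, y

print(vertex(3, -6, 5))  # (1.0, 2.0), matching y = 3(x-1)^2 + 2
print(vertex(1, 6, 8))   # (-3.0, -1.0)
```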
http://mathoverflow.net/questions/56679?sort=votes | ## Dual of a Basis for a Hopf Algebra Contained in all Dually Paired Hopf Algebras
For an infinite dimensional Hopf algebra $H$, a non-degenerate dually pairing Hopf algebra $H'$, and a choice of basis $e_i$ of $H$, is the dual basis $e^i$ (defined of course by $e^i(e_j) = \delta_{ij}$) contained in $H'$?
I am interested in the specific case of $SU_q(N)$ and the dually paired Hopf algebra $\mathfrak{sl}_N$.
-
## 1 Answer
No. Let $k$ be a field of characteristic $0$. Consider the symmetric algebra on one generator $k[x]$, with comultiplication $x \mapsto x\otimes 1 + 1\otimes x$. It has a Hopf pairing with itself, given by $\langle x^m,x^n\rangle = n!\,\delta_{m=n}$. Then consider the basis of $k[x]$ given by expansion around $1$, i.e. $e_n = (x-1)^n$. The dual basis, if it exists, includes $e^0$ such that $\langle e^0, (x-1)^n\rangle = \delta_{0,n}$. So suppose that $e^0 = \sum a_n x^n$; then: $$\begin{aligned}
1 & = a_0 \\
0 & = a_1 - a_0 \\
0 & = 2a_2 - 2a_1 + a_0 \\
\dots & \phantom= \dots \\
0 & = \sum_{k=0}^n (-1)^{n-k} \binom{n}{k}\, k!\, a_k \\
\dots & \phantom= \dots \\
\end{aligned}$$ The solution is that $a_n = 1/n!$ for all $n$, i.e. $e^0 = \sum x^n/n! = \exp(x)$. But this is not a polynomial.
In the case you ask about, you will similarly have some bases with dual bases, and some without.
-
http://mathoverflow.net/revisions/3939/list | ## Return to Question
6 deleted 136 characters in body
Let me ask a more specific question.
Suppose $P(x)$ is a monic integer polynomial with roots $r_1, \ldots, r_n$ such that $p_k = r_1^k + \cdots + r_n^k$ is a non-negative integer for all positive integers $k$. Is $P(x)$ necessarily the characteristic polynomial of a non-negative integer matrix?
(The motivation here is that I want $r_1, \ldots, r_n$ to be the eigenvalues of a directed multigraph.)
Edit: If that condition isn't strong enough, how about the additional condition that $$\frac{1}{n} \sum_{d \mid n} \mu(d)\, p_{n/d}$$
is a non-negative integer for all $d$?
5 edited tags
4 Changed wording.
Let me ask a more specific question. Suppose P(x) is a monic integer polynomial with roots r_1, r_2, ..., r_n such that p_k = r_1^k + r_2^k + ... + r_n^k is a non-negative integer for all k. Is P(x) necessarily the characteristic polynomial of a non-negative integer matrix?
(The motivation here is that I want r_1, ..., r_n to be the eigenvalues of a directed multigraph.)
Edit: If that condition isn't strong enough, how about the additional condition that \sum_{d | n} μ(d) p_{n/d} is a non-negative integer for all d?
3 Fixed wording.
Let me ask a more specific question. Suppose P(x) is a monic integer polynomial with roots r_1, r_2, ..., r_n such that p_k = r_1^k + r_2^k + ... + r_n^k is a non-negative integer for all k. Is P(x) necessarily the characteristic polynomial of a non-negative integer matrix?
(The motivation here is that I want r_1, ..., r_n to be the eigenvalues of a graph.)
Edit: If that condition isn't strong enough, how about the additional condition that \sum_{d | n} μ(d) p_{n/d} is a non-negative integer for all d?
2 Added a condition.
Let me ask a more specific question. Suppose P(x) is a monic integer polynomial with roots r_1, r_2, ..., r_n such that p_k = r_1^k + r_2^k + ... + r_n^k is a non-negative integer for all k. Is P(x) necessarily the characteristic polynomial of a non-negative integer matrix?
(The motivation here is that I want r_1, ..., r_n to be the eigenvalues of a graph.)
Edit: If that condition is strong enough, how about the additional condition that \sum_{d | n} μ(d) p_{n/d} is a non-negative integer for all d?
1
# When is a monic integer polynomial the characteristic polynomial of a non-negative integer matrix?
Let me ask a more specific question. Suppose P(x) is a monic integer polynomial with roots r_1, r_2, ..., r_n such that r_1^k + r_2^k + ... + r_n^k is a non-negative integer for all k. Is P(x) necessarily the characteristic polynomial of a non-negative integer matrix?
(The motivation here is that I want r_1, ..., r_n to be the eigenvalues of a graph.)
http://scicomp.stackexchange.com/questions/4952/numeric-solution-of-simple-but-possibly-singular-linear-system | # Numeric solution of simple but possibly singular linear system
I have a simple (and small) linear homogeneous system $Ax=0$, where the entries of the $N\times M$ matrix $A$ are small integers. I do not need fancy methods which efficiently solve almost singular matrices and treat roundoff errors etc. But I need to know if a system is singular and also the general solution for underdetermined systems. It must be rock-solid. Any reference to such an algorithm is welcome. I am coding in C. Is SVD the way to go?
-
Are you looking for integer solutions as well? What is the size of $N$ and $M$? By "general solution", do you mean a basis of the null space of $A$? – Christian Clason Jan 4 at 22:30
Yes, essentially I mean a basis of the null space. The sizes of N and M can vary, where `N>M` and `M>N` are both possible. A typical size could be `N,M\sim 10`. I searched a bit further and found that 'reduced row echelon form' could be the solution. Since `A` has integer elements, only rationals would appear and I could compute exact results. Is anybody aware of an open implementation (any language)? – highsciguy Jan 4 at 22:38
I.e., I am searching for an implementation of Gaussian elimination for `NxM` matrices. I know the one from Numerical Recipes, but it is for `NxN`. I understand now that SVD would work, but it seems a bit of an overkill to me, with the additional drawback that it would use floats and would probably take too long to adapt to integers/rationals. – highsciguy Jan 4 at 23:00
## 3 Answers
The Integer Matrix Library is a C library that claims to be able to compute the null space of an integer matrix. See also this answer on MathOverflow to the exact same question, which gives a list of libraries (including PARI, which also can be called from C and is still being updated). You could also take a look at LinBox, even though it's written in C++.
-
The reference to IML was pretty useful to me. It does what I want. Also the question on MathOverflow is really the same. – highsciguy Jan 6 at 15:06
The general solution of your underdetermined system is that $x$ is a member of the nullspace of $A$.
To find the nullspace of $A$, the most numerically stable method is to use an SVD. See Null-space of a rectangular dense matrix for further details.
-
The SVD will both (1) tell you what the null space of your matrix is (2) give you the pseudo-inverse of the matrix in the event that it is singular. The SVD is, generally speaking, a very handy matrix factorization and for your stated purposes should definitely get the job done.
If you're determined to write the code yourself, Trefethen and Bau's book has enough details to work from, either implementing the really bad way of finding the eigendecomposition of A*A or the proper way using bidiagonalization.
However, if you just happen to be working in C and don't feel the need to do it yourself, the Wikipedia page for the SVD has a long list of libraries that will implement it for you, chief among which is the GNU Scientific Library. Or, a google search of "SVD c source code" turns up a couple results as well.
-
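To make the suggestions above concrete, here is a small Python sketch, assuming NumPy and SymPy are available: the SVD route gives a floating-point null-space basis, while SymPy's exact nullspace stays in rational arithmetic, which matches the wish to avoid floats. The matrix `A` is just a made-up example.

```python
import numpy as np
from sympy import Matrix

A = np.array([[1, 2, 3],
              [2, 4, 6]])        # rank 1, so the null space is 2-dimensional

# Route 1: SVD (floating point). Rows of Vt belonging to (near-)zero
# singular values span the null space of A.
U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s.max()
rank = int((s > tol).sum())
null_basis = Vt[rank:].T          # columns form an orthonormal basis of ker(A)
print(np.allclose(A @ null_basis, 0))   # expect True

# Route 2: exact arithmetic (Gaussian elimination over the rationals).
print(Matrix(A.tolist()).nullspace())
```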
http://math.stackexchange.com/questions/60428/what-is-the-most-accurate-method-to-get-intersection-point-in-3d | # What is the most accurate method to get intersection point in 3D?
I have been given 3D point data belonging to different planar segments. The points do not lie exactly on the planes, so I have fitted best-fit planes using least-squares solutions. Now, I want to find the intersection point where more than 2 planes intersect. I found there are two methods for that:
1. By computing intersection lines using pairs of planes and then finding the final point using some adjustment process.
2. By using the 3D planes themselves and finding the intersection point directly.
So, now I would like to know which method gives the most accurate result. I guess the two methods give two different results.
-
## 1 Answer
I think neither method will lead to the most accurate solution: if one first estimates the planes and then finds the intersection point, one does not directly enforce the constraint that all planes intersect exactly in a single point.
If I understand correctly, this is what we know:
• There is a set of $M$ planes (three or more) which intersect in a single point $\mathbf a$. (This is our parametric model.)
• The planes are represented by a set of $K$ points $\mathbf x^{(k)}$ affected by zero-mean Gaussian noise (our data).
• We assume that we know which point belong to which plane. Thus we have a mapping $I: \{1,...,K\} \rightarrow \mathbb N$ which maps a point index $k$ onto a plane index $I(k)$. (If we don't know the mapping $I$, we can find it using RANSAC or a similar method.)
Maximizing the likelihood of the model parameters is equivalent to minimizing the geometric error between the data and the model.
Our model might look as follows:
All planes intersect in a single point $\mathbf a = (a_1,a_2,a_3)^\top$. Each plane has a different orientation specified by a normal vector $\mathbf n^{(m)}$. Now our $M$ planes are represented by $(3+3M)$ parameters: $$\mathbf p = (a_1,a_2,a_3,n_1^{(1)},n_2^{(1)},n_3^{(1)} ,...,n_1^{(M)},n_2^{(M)},n_3^{(M)})^\top$$ (However, the problem has only $3+2M$ degrees of freedom since the lengths of the normals are insignificant. Thus, we have a gauge freedom of dimension $M$. If necessary, this free gauge can be removed by using a minimal/two-dimensional parametrisation of the normals. A good possibility is to restrict the normals to lie on a sphere as described in Hartley, Zisserman: "Multiple View Geometry", Second edition, A 6.9.3.)
Now the geometric error we wish to minimize is:
$$S = \sum_{k=1}^K [d(\mathbf x^{(k)}, \mathbf a,\mathbf n^{(I(k))})]^2$$
Here, $d(\mathbf x, \mathbf a,\mathbf n)$ is the distance between a point $\mathbf x$ and a plane $(\mathbf a,\mathbf n)$.
We can find the optimal plane parameters $\mathbf p$ by jointly minimizing $S$ with respect to $\mathbf p$.
-
Thank you for your good explanation. During the last couple of months I was using line intersection to get the intersection point of many planes. But now I have been directed back to getting the intersection point using plane intersection itself, so I need your comments again to achieve this. Suppose that we have already obtained good planes which fit the points via RANSAC, or suppose we have only the plane parameters; then I need to know the way of getting the intersection point where many planes meet. I would like to do this with least-squares solutions but I am poor in mathematics. Please help me. Thanks. – niro Dec 2 '11 at 11:39
If I will be supported step-wise on how this can be implemented in matrix form, then I hope I could implement it. Thanks, and sorry for disturbing you. – niro Dec 2 '11 at 11:44
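Since the follow-up comments ask for the matrix form of intersecting the fitted planes directly, here is a minimal NumPy sketch. It assumes each fitted plane m is given in Hesse normal form n^(m) . x = d^(m) (unit normal n, offset d); stacking one row per plane gives an overdetermined system in the three unknowns, solved by least squares. The three example planes are made up.

```python
import numpy as np

# One row per plane: (nx, ny, nz, d) for the plane n . x = d, with n a unit normal.
planes = np.array([
    [1.0, 0.0, 0.0, 1.0],   # x = 1
    [0.0, 1.0, 0.0, 2.0],   # y = 2
    [0.0, 0.0, 1.0, 3.0],   # z = 3
])

N = planes[:, :3]   # M x 3 matrix of normals
d = planes[:, 3]    # M-vector of offsets

# Least-squares intersection point: minimize sum_m (n_m . x - d_m)^2.
x, residuals, rank, sv = np.linalg.lstsq(N, d, rcond=None)
print(x)            # [1. 2. 3.]
```

If some planes are fitted from more points or cleaner data than others, the corresponding rows can be weighted before the least-squares solve.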
http://mathhelpforum.com/algebra/132898-multiplying-binomial-w-two-trinomials.html | # Thread:
1. ## multiplying binomial w/two trinomials
Sorry, I figured it out; I'd like to delete the thread, but I can't seem to delete it right now.
2. Originally Posted by greatsheelephant
Here is the expression I'm supposed to find the product of:
$(4y^3+5y^2-12y)(-12-2)(7y^2-9y+12)$
To multiply it, does it matter whether or not I multiply a binomial with one of the trinomials first, or should I multiply the two trinomials together first? I can't seem to get it right. First I tried multiplying one of the trinomials with the binomial:
$-12y(4y^3+5y^2-12y)-2(4y^3+5y^2-12y)$
which gave me $-48y^4+76y^3-10y^2+24$
I then multiplied that by the other trinomial,
$(7y^2-9y+12)$
which gave me $-336y^6+964y^5-1330y^4+1002y^3+48y^2-216y+288$
the correct answer is supposed to be $-336y^6-44y^5+974y^4-1854y^3+1392y^2+288y$
Thanks
$-12-2 = -14$ so that's the middle bracket sorted. Also you can take a factor of y from the first set.
$-14y(4y^2+5y-12)(7y^2-9y+12)$
Since this gives a polynomial of order 5, whereas the book says order 6, I suspect you've made a typo in the original question, probably in the middle bracket.
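If it helps to double-check products like this, a short SymPy sketch (SymPy assumed available) expands the simplified expression from the previous post; comparing the output with the book's degree-6 answer makes the suspected typo easy to spot.

```python
from sympy import symbols, expand

y = symbols('y')
print(expand(-14*y*(4*y**2 + 5*y - 12)*(7*y**2 - 9*y + 12)))  # a degree-5 polynomial
```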
http://mathoverflow.net/questions/24842?sort=votes | ## BSD conjecture and L functions with zeroes of order g
If the group of rational points of $E$, which is finitely generated by the Mordell-Weil Theorem, has $g$ generators of infinite order, then the Birch-Swinnerton-Dyer conjecture gives
$L_E(s)$ has a zero of order $g$ at $s=1$.
Assuming the BSD conjecture, is it possible (and if so, how) to construct such $L_E(s)$? Specifically, if we want $g=3$ or $4$?
-
## 1 Answer
I'm not entirely sure what you mean by your question. Here are two remarks:
1. If you assume BSD, then to "construct" $L_E$ you just need to give the curve $E$. There are (many) elliptic curves /$\mathbb{Q}$ whose ranks have been computed, and are (say) equal to 3 or 4.
2. If one wants an example without assuming BSD, then you are in trouble - for given $E$, you can compute $L^{(n)}(s)$ to any desired degree of accuracy, but proving that it vanishes computationally is impossible.
However, two things help you. If the sign in the functional equation is -1, then you have that the order of vanishing is also odd. The Gross-Zagier formula can be used to check the vanishing of the first derivative. For example, this is used in the following paper to exhibit an elliptic curve $E$ whose $L_E$ provably vanishes to order 3.
On the Conjecture of Birch and Swinnerton-Dyer for an Elliptic Curve of Rank 3 Author(s): Joe P. Buhler, Benedict H. Gross, Don B. Zagier Source: Mathematics of Computation, Vol. 44, No. 170 (Apr., 1985), pp. 473-481
-
I'm in case 1. I'm just curious where to find such curves E, and how to turn them into $L_E(s)$. – paarshad May 16 2010 at 7:32
In that case you're probably best off looking at Cremona's very detailed tables of elliptic curves of conductor < 130000: warwick.ac.uk/staff/J.E.Cremona/ftp/data They include the $a_p$, which will allow you to write down the $L$-function by hand. For computing values of the L-function, T. Dokchitser has written a nice program that can do this given the input of the $a_p$ and a functional equation. It has been implemented in sage, and linked up with the elliptic curve functionality - cf. the example here: sagemath.org/doc/reference/sage/lfunctions/… – JT May 16 2010 at 17:59
This is what I was looking for. Thank you. – paarshad May 17 2010 at 1:21
"you can compute L(n)(s) to any desired degree of accuracy, but proving that it vanishes computationally is impossible." Can't one discretize the L-value and check whether it is zero? – Idoneal May 19 2010 at 5:58
Sorry. That was sheer nonsense. – Idoneal May 19 2010 at 6:05
http://physics.stackexchange.com/questions/tagged/semiclassical+scattering | # Tagged Questions
### equation for the potential in terms of the phase shift for a rotational invariant potential
Let $V(r)$ be a potential in 3D that is invariant under rotations; in the WKB approximation the phase shifts are given by $\delta_{l}(k)= \int_{a}^{\infty}dr \ldots$
http://mathoverflow.net/questions/120829/an-infinite-group-such-that-every-proper-subgroup-is-finite | ## An infinite group such that every proper subgroup is finite?
While reading about the Burnside problem, I thought of the following question:
```` If every proper subgroup of G is finite, does it follow that G is also finite?
````
Despite extensive searching (and thinking), I am unable to find a solution. (I suspect that the answer is no)
-
A Google search for the exact quote "every proper subgroup is finite" helps. – Jonas Meyer Feb 5 at 4:38
For a finitely generated example see Tarski monster. – Benjamin Steinberg Feb 5 at 5:14
I'm fairly sure this question or some slight variation had been asked before with the same answers. – Benjamin Steinberg Feb 5 at 5:20
## 1 Answer
No. The direct limit of the cyclic groups of order $p^n$ is infinite, but every proper subgroup of it is finite.
-
I like it. It is the rotation group of the circle of length 1 by elements of Z[1/2]. – Matt Brin Feb 7 at 2:39
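A concrete way to play with this example, viewed as rotations of a circle of length 1 by dyadic rationals (that is, $\mathbb{Z}[1/2]/\mathbb{Z}$ under addition mod 1): the group is infinite, yet the subgroup generated by any single element is finite cyclic. Here is a small Python sketch using only the standard library.

```python
from fractions import Fraction

def cyclic_subgroup(g):
    """Subgroup of Z[1/2]/Z (addition mod 1) generated by one element g."""
    elems, x = set(), Fraction(0)
    while True:
        x = (x + g) % 1
        elems.add(x)
        if x == 0:
            return elems

print(len(cyclic_subgroup(Fraction(1, 8))))    # 8, a finite cyclic subgroup
print(len(cyclic_subgroup(Fraction(3, 16))))   # 16
# Taken together, 1/2, 1/4, 1/8, ... generate the whole (infinite) group.
```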
http://math.stackexchange.com/questions/273546/what-is-the-difference-between-open-ball-and-neighborhood-in-real-analysis/273548 | # What is the difference between open ball and neighborhood in real analysis?
I'm learning real analysis.
Open ball: The collection of points $x \in X$ satisfying $|x - x_{0}| < r$ is called the open ball of radius $r$ centered at $x_{0}$
Neighborhood: A neighborhood of $x_{0} \in X$ is an open ball of radius r > 0 in $X$ that is centered at $x_{0}$
I'm using Real and Complex Analysis written by Christopher Apelian and Steve Surace. In my mind, open ball = a collection of points satisfying a certain requirement = neighborhood. I cannot find any difference between an open ball and a neighborhood. Could anyone explain it? Thanks!
-
By those definitions neighborhood of $x$ and open ball centred at $x$ are indeed the same thing; that’s not the usual definition of neighborhood, however. – Brian M. Scott Jan 9 at 15:42
## 2 Answers
Let's discuss the 1-dimensional case. An open ball in $\Bbb R$ is a set given by $$B(x,r):=\{y\in \Bbb R:|y-x|<r\}.$$ These sets are very important as they allow us to define the topology on $\Bbb R$, i.e. it allows us to say which sets are open and which are not. Topology is one of the key structures to work with uncountable spaces, $\Bbb R$ in particular.
The neighborhood of a point $x\in \Bbb R$ is any subset $N_x\subseteq \Bbb R$ which contains some ball $B(x,r)$ around the point $x$. Note that in general one does not require neighborhoods to be open sets, but it depends on the author of the textbook you have in hand.
For example, if $x = 1$ then $(0.5,1.5)$ is a ball (of a radius $0.5$) around $x$, and $(0.5,1.5)\cup [2,4]$ is a neighborhood of $x$ which is not a ball (neither around $x$, nor around any other point).
As Brian has mentioned, indeed in your case these definitions are equivalent - but this is an unusual way to define neighborhoods.
-
An open ball about $x$ is a ball about the point $x$. A neighborhood of $x$ is commonly defined as a set containing $x$ in its interior.
A neighborhood need not be a ball or any other particular shape; it is just a set containing $x$ in its interior.
-
@BrianM.Scott true. I originally had the neighborhoods defined as an open set. I'll update it. – AvatarOfChronos Jan 9 at 15:40
http://mathoverflow.net/questions/106417/solving-lyapunov-like-equation | ## solving Lyapunov-like equation
The following matrix equation might be a Lyapunov-like equation, but it seems hard for me to find a simpler way to solve it. To limit the computational effort, I need some help with solving the special case of the following Lyapunov equation:
Let $X$ be an $n\times n$ symmetric matrix, and $I$ an identity matrix, and $A$ is a matrix whose entries are all between 0 and 1, and $A$ is invertible. I need to solve $X$ in the following equation:
$$AX+XA^T=I$$
Previously, I found some article discussing on using Krylov subspace to solve the following Lyapunov equation:
$$AX+XA^T=b \cdot b^T$$
where $b$ is a vector. Due to $b \cdot b^T$ being a rank-one matrix, the Krylov subspace approach is highly efficient. Now in my case it is the identity matrix $I$, but $X$ in my case is symmetric. I found that in my equation $AX$ and $XA^T$ are symmetric. So by letting $Y=AX$, my equation can be reduced to:
$$Y+Y^T=I \quad \textrm{with } \ \ Y=AX$$.
I don't know how to continue this.
Another common way is to use tensor product to rewrite my equation as:
$$(I \otimes A + A \otimes I) vec(X) = vec(I)$$
but the LHS of the above equation is of size $n^2 \times n^2$, which is too large to solve.
Is there any other efficient way to solve this? Any advice is warmly welcome!
-
How large is your $n$ in practice? (an order of magnitude will be enough) – Federico Poloni Sep 5 at 13:15
What exactly does "solve" mean for you in this contex? Do you want a numerical solution for a given $A$ or a closed-form formula that you can analyze? Or do you want to know when is the equation solvable? (That's an easy one, iff $A$ has "regular inertia", see the Ostrowski-Schneider theorem). Btw, it is a Lyapunov equation alright. – Felix Goldberg Sep 6 at 19:43
## 3 Answers
I hope the answer below is somewhat helpful.
Let me first summarize some basic facts.
It is known that the equation
\begin{equation*} AX + XA^T = B, \end{equation*}
has a unique solution if the matrix $A$ is positively stable (i.e., has spectrum in the right half plane). If $A$ is diagonal with entries $a_1,\ldots,a_n$, then the solution to the equation can be given in closed form
\begin{equation*} X = D \circ B, \end{equation*} where $D$ is the matrix with entries $1/(\bar{a}_i+a_j)$ and $\circ$ denotes the entrywise (Hadamard) product.
In the more general case, for positively stable $A$, the solution to the above equation can be represented as
\begin{equation*} X = \int_0^\infty e^{-tA}B(e^{-tA})^Tdt \end{equation*}
But that does not seem to be computationally that nice.
If $n$ is largish, one can still solve the linear system written using tensor products by using an iterative algorithm for solving the linear system, as long as the iterative algorithm (e.g., conjugate gradient, or other related methods) depends on just "matrix-vector" products. Because you would need to only compute $(A \otimes I + I \otimes A)x$ several times, and that can be done using matrix multiply without actually forming the tensor products.
-
In your closed-form solution, when $B=I$, can we compute $$\int_{0}^{\infty} e^{-tA} \cdot {({e^{-tA}})}^T dt$$ in an easier way ? – Hellen Sep 5 at 13:02
Let me underline that $X$ is symmetric whenever $B$ is (proof: clear from the integral formula). If I am interpreting it correctly, Hellen meant that $AX$ and $XA^T$ are one the transpose of the other when she wrote "$AX$ and $XA^T$ are symmetric". So what she thinks is a special case is in fact the general behaviour for symmetric $B$. – Federico Poloni Sep 5 at 21:07
As Federico Poloni pointed out, the Hessenberg-Schur algorithm, used by MATLAB's lyap.m function is a much better choice. It is a refined version of the older Bartels-Stewart algorithm (which also works pretty well). Here's the original paper for Hessenberg-Schur, by Golub-Nash-van Loan:
https://www.cs.cornell.edu/cv/ResearchPDF/Hessenberg.Schur.Method.pdf
In your case, since $A=B^T$, things are even a bit simpler, in that only one matrix needs to be decomposed.
-
Let $A$ be an invertible $n\times n$ matrix which is antisymmetric: $A^T=-A$, e.g. the symplectic matrix $$\begin{pmatrix} 0&1 \\ -1&0 \end{pmatrix}.$$ The equation $AX+XA^T=I$ cannot have a matrix solution $X$ since that would imply $$AX-XA=I,$$ which is impossible since $trace (AX-XA)=0$. Of course that does not contradict the previous answer, but shows that some further conditions should be imposed on $A$.
-
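To tie the answers together, here is a small NumPy sketch of the Kronecker-product formulation from the question. It is fine for modest n (the assembled system is n^2 x n^2); for larger problems a Bartels-Stewart or Hessenberg-Schur routine (for instance SciPy's solve_continuous_lyapunov, if your SciPy version provides it, or MATLAB's lyap) is the better tool. The random test matrix is only an illustration; the antisymmetric counterexample above shows that solvability can fail, and uniqueness requires that no two eigenvalues of A sum to zero, which holds for generic A.

```python
import numpy as np

def lyap_kron(A, Q):
    """Solve A X + X A^T = Q via (I kron A + A kron I) vec(X) = vec(Q).

    Only sensible for small n: the assembled system is n^2 x n^2.
    """
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    x = np.linalg.solve(K, Q.flatten(order='F'))   # column-stacking vec()
    return x.reshape(n, n, order='F')

rng = np.random.default_rng(0)
n = 5
A = rng.random((n, n))            # entries in (0, 1), as in the question
X = lyap_kron(A, np.eye(n))
print(np.allclose(A @ X + X @ A.T, np.eye(n)))   # expect True
print(np.allclose(X, X.T))                       # the solution is symmetric
```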
http://mathoverflow.net/questions/80323?sort=votes | ## Logarithm of a hypergeometric series
I am sorry if the answer to my question is well known. I am quite new to this topic, so it would also be nice to have a reference, if one exists.
I was wondering whether there exists a nice closed formula for the logarithm of an arbitrary hypergeometric series in terms of, say, a linear combination of some other hypergeometric series. The reason that makes me believe in the existence of such a formula is the following.
It is an easy exercise to show that the derivative of a hypergeometric series can be expressed as follows: $\frac{d}{dx} {}_nF_m (a_1,\ldots, a_n;b_1\ldots b_m; x) = \frac {a_1\cdots a_n}{b_1\cdots b_m} {}_nF_m (a_1+1,\ldots, a_n+1;b_1+1\ldots b_m+1; x)$. From the other hand, for an arbitrary function $G(x)$ we have $(\log G(x))' = \frac {G'(x)}{G(x)}$.
It follows that it suffices to find a ratio of two hypergeometric series to find a logarithmic derivative. In some cases this ratio is known to be a hypergeometric series again. So after the integration we'll obtain the desired result.
Thank you in advance for any help.
-
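As a numerical sanity check of the derivative formula quoted in the question, here is a short sketch using mpmath (assumed available) for the ${}_2F_1$ case: a finite-difference derivative is compared with $(ab/c)\,{}_2F_1(a+1,b+1;c+1;x)$. The parameter values are arbitrary.

```python
from mpmath import hyp2f1, diff, mp

mp.dps = 30                      # working precision in decimal digits
a, b, c, x = 0.5, 1.5, 2.5, 0.3

lhs = diff(lambda t: hyp2f1(a, b, c, t), x)           # numerical d/dx of 2F1
rhs = (a * b / c) * hyp2f1(a + 1, b + 1, c + 1, x)    # shifted-parameter form

print(lhs)
print(rhs)   # the two values should agree to high precision
```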
## 2 Answers
No. This is because all hypergeometrics are holonomic, and holonomic functions can only have a finite number of singularities, which themselves can only be of certain types. If the logarithm of all hypergeometrics could be so expressed, then you could have a holonomic function with a $\ln \ln (x)$ singularity, which is not possible.
I find the paper On the non-holonomic character of logarithms, powers, and the nth prime function by Flajolet, Gerhold and Salvy (The Electronic Journal of Combinatorics, 2005, vol. 11) to be a wonderful compendium of useful tools for disproving holonomicity. Searching through the literature to find these tools is tedious, and so these authors ought to be commended for assembling so many into one pleasant paper.
-
Ah, great. But, if I am not mistaken, this argument seems to work in the case n≥m+1. Is anything known for the case n≤m, when the corresponding hypergeometric series is an entire function? – Max Karev Nov 8 2011 at 21:35
@Max: I am not aware of any specific results in that area. And clearly there are hypergeometric functions whose logarithm is also hypergeometric/holonomic. I would try to figure out what conditions to impose on the ODE satisfied by G so that ln(G) also satisfies an ODE (with polynomial coefficients), and see where that leads. The Weierstrass and Hadamard factorization theorems are likely to be useful, as well as the Structure Theorem (from paper cited above). – Jacques Carette Nov 8 2011 at 22:20
Check out this very cool paper (in the Proceedings of the National Academy of the US, so freely available, if you care):
Classification of hypergeometric identities for π and other logarithms of algebraic numbers D. V. Chudnovsky* and G. V. Chudnovsky
I am not sure it answers your question, but it comes very close.
-
the link to the paper: pnas.org/content/95/6/2744.full – jc Nov 7 2011 at 21:48
Thank you for the link. – Max Karev Nov 8 2011 at 21:28
http://stats.stackexchange.com/questions/41692/generate-correlated-ar-process-for-given-correlation-between-demand-series | # Generate correlated AR process for given correlation between demand series
How can I generate two correlated $AR(1)$ data series with a given correlation $r_{12}$ between $d_{1,t}$ and $d_{2,t}$, where $\rho_{12}$ is the correlation between the two error series? $$d_{1,t}=\mu+\phi_1 d_{1,t-1}+e_{1,t}\qquad d_{2,t}=\mu+\phi_2 d_{2,t-1}+e_{2,t}$$
$e_1$ and $e_2$ are not the same, and $r_{12}$ is the desired correlation between $d_{1,t}$ and $d_{2,t}$. The relation between $r_{12}$ and $\rho_{12}$ is:
$$r_{12}=\frac{\rho_{12}\sqrt{(1-\phi_1^2)(1-\phi_2^2)}}{1-\phi_1\phi_2}$$
One example: $\phi_1=.3$, $\phi_2=-.8$ and $r_{12}=-.8$. The problem is that for this example $\rho_{12}$ is about $-1.7$, so how can I generate the error terms when the required correlation $\rho_{12}$ between the two error series is not between $-1$ and $+1$? Please help me if I'm wrong somewhere. Thank you.
-
You need to generate random numbers from $e_{1}$ and $e_{2}$ jointly, such as by a multivariate normal distribution (using the covariance between the error terms). Then using those values and the other known variables you can simply plug them in to $d_{1}$ and $d_{2}$. – John Nov 1 '12 at 20:06
Thanks for your answer. The problem is: I'd like to generate two correlated series with $r_{12}=-.8$ or $-.9$, but for these values $\rho_{12}=-1.7$, while $\rho_{12}$ must satisfy $-1<\rho_{12}<+1$. – Roji Nov 1 '12 at 21:54
To generate $e_1$ and $e_2$ jointly I need $\rho_{12}$; how can I calculate $\rho_{12}$? – Roji Nov 1 '12 at 21:55
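For completeness, here is a NumPy sketch of John's suggestion for a feasible target (by the formula above, with $\phi_1=.3$ and $\phi_2=-.8$ the attainable $|r_{12}|$ is at most about $0.46$): back out $\rho_{12}$ from the desired $r_{12}$, draw jointly normal errors, and run the two recursions. All numbers are illustrative.

```python
import numpy as np

phi1, phi2, mu = 0.3, -0.8, 0.0
r12_target = -0.4    # must satisfy |r12| <= sqrt((1-phi1^2)(1-phi2^2))/|1-phi1*phi2|

# Back out the required error correlation rho12 from the formula above.
rho12 = r12_target * (1 - phi1 * phi2) / np.sqrt((1 - phi1**2) * (1 - phi2**2))
assert abs(rho12) <= 1, "target series correlation is not attainable for these phis"

T = 100_000
rng = np.random.default_rng(1)
e = rng.multivariate_normal([0.0, 0.0], [[1.0, rho12], [rho12, 1.0]], size=T)

d1 = np.zeros(T)
d2 = np.zeros(T)
for t in range(1, T):
    d1[t] = mu + phi1 * d1[t - 1] + e[t, 0]
    d2[t] = mu + phi2 * d2[t - 1] + e[t, 1]

print(rho12)                         # about -0.87
print(np.corrcoef(d1, d2)[0, 1])     # close to -0.4, up to sampling error
```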
http://en.wikipedia.org/wiki/Cost-of-living_index | # Cost-of-living index
A cost-of-living index is a theoretical price index that measures relative cost of living over time or regions. It is an index that measures differences in the price of goods and services, and allows for substitutions to other items as prices vary.[1]
There are many different methodologies that have been developed to approximate cost-of-living indexes, including methods that allow for substitution among items as relative prices change.
A Konüs index is a type of cost-of-living index that uses an expenditure function such as one used in assessing expected compensating variation. The expected indirect utility is equated in both periods.
## Application to price index theory
The United States Consumer Price Index (CPI) is a price index that is based on the idea of a cost-of-living index. The US Department of Labor's Bureau of Labor Statistics (BLS) explains the difference:
"The CPI frequently is called a cost-of-living index, but it differs in important ways from a complete cost-of-living measure. BLS has for some time used a cost-of-living framework in making practical decisions about questions that arise in constructing the CPI. A cost-of-living index is a conceptual measurement goal, however, not a straightforward alternative to the CPI. A cost-of-living index would measure changes over time in the amount that consumers need to spend to reach a certain utility level or standard of living. Both the CPI and a cost-of-living index would reflect changes in the prices of goods and services, such as food and clothing that are directly purchased in the marketplace; but a complete cost-of-living index would go beyond this to also take into account changes in other governmental or environmental factors that affect consumers' well-being. It is very difficult to determine the proper treatment of public goods, such as safety and education, and other broad concerns, such as health, water quality, and crime that would constitute a complete cost-of-living framework."[2]
## Economic theory
The basis for the theory behind the cost of living index is attributed to Russian economist A.A. Konüs.[3] The economic theory behind the cost of living index assumes that consumers are optimizers and get as much utility as possible from the money that they have to spend. These assumptions can be shown to lead to a "consumer's cost function", C(u,p), the cost of achieving utility level u given a set of prices p.[4] Assuming that the cost function holds across time (i.e., people get the same amount of utility from one set of purchases in year as they would have buying the same set in a different year) leads to a "true cost of living index." The general form for Konüs's true cost of living index compares the consumer's cost function given the prices in one year with the consumer's cost function given the prices in a different year:
$P_K(p^0,p^1,u)=\frac{C(u,p^1)}{ C(u,p^0)}$
Since u can be defined as the utility received from a set of goods measured in quantity, q, u can be replaced with f(q) to produce a version of the true cost of living index that is based on price and quantities like most other price indices:
$P_K(p^0,p^1,q)=\frac{C(f(q),p^1)}{ C(f(q),p^0)}$[4]
In simpler terms, the true cost of living index is the cost of achieving a certain level of utility (or standard of living) in one year relative to the cost of achieving the same level the next year.
Utility is not directly measurable, so the true cost of living index only serves as a theoretical ideal, not a practical price index formula. However, more practical formulas can be evaluated based on their relationship to the true cost of living index. One of the most commonly used formulas for consumer price indices, the Laspeyres price index, compares the cost of what a consumer bought in one time period (q0) with how much it would have cost to buy the same set of goods and services in a later period. Since the utility from q0 in the first year should be equal to the utility from q0 in the next year, Laspeyres gives the upper bound for the true cost of living index.[4] Laspeyres only serves as an upper bound, because consumers could turn to substitute goods for those goods that have gotten more expensive and achieved the same level of utility from q0 for a lower cost. In contrast, a Paasche price index uses the cost of a set of goods purchased in one time period with the cost it would have taken to buy the same set of goods in an earlier time period. It can be shown that the Paasche is a lower bound for true cost of living index.[5] Since upper and lower bounds of the true cost of living index can be found, respectively, through the Laspeyres and Paasche indices, the geometric average of the two, known as the Fisher price index, is a close approximation of the true cost of living index if the upper and lower bounds are not too far apart.[6]
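As a small numerical illustration of the bounds just described, here is a sketch (the prices `p0, p1` and quantities `q0, q1` are made-up numbers, and NumPy is assumed; this is not part of the article):

```python
import numpy as np

# Hypothetical two-period data: prices and quantities for three goods.
p0 = np.array([2.0, 5.0, 1.0]); q0 = np.array([10.0, 4.0, 20.0])
p1 = np.array([2.5, 4.5, 1.5]); q1 = np.array([9.0, 5.0, 15.0])

laspeyres = (p1 @ q0) / (p0 @ q0)        # old basket at new prices: upper bound
paasche   = (p1 @ q1) / (p0 @ q1)        # new basket valued both ways: lower bound
fisher    = np.sqrt(laspeyres * paasche) # geometric mean: approximates the true index

print(laspeyres, paasche, fisher)
```

By construction the Fisher value always lies between the Paasche and Laspeyres values, which is why it is taken as a close approximation of the true cost-of-living index when the two bounds are not far apart.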
## References
1. "BLS Information". Glossary. U.S. Bureau of Labor Statistics Division of Information Services. February 28, 2008. Retrieved 2009-05-05.
2. ^ a b c
http://mathoverflow.net/questions/38795/borel-sets-on-rn

## Borel Sets on R^n
Define the Borel sigma-algebra on $\mathbb{R}^n$ as the smallest sigma-algebra containing all $n$-rectangles $(a_1, b_1) \times \cdots \times (a_n, b_n)$.

Is it true that the Borel sigma-algebra contains all sets of the form $A_1 \times \cdots \times A_n$, where each $A_i$ is some Borel set in $\mathbb{R}$?
I thought this would be trivially true, but I had a lot of trouble trying to prove it, and I'm not even sure it's true anymore.

If this is a well-known result, could you please refer me to a text where it has been (dis)proved?
-
## 1 Answer
A way to prove it:

1. Any set of the form $A_1 \times \mathbb R \times \ldots \times \mathbb R$, where $A_1$ is Borel, or more generally a "Borel rectangle" with only one slice not equal to the whole space, is in the Borel sigma-algebra (this is essentially a 1-dimensional Borel set, and those are generated by open intervals).
2. Any product $A_1 \times \ldots \times A_n$ (with each $A_i$ Borel) is a finite intersection of sets of the above form.
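To make step 2 completely explicit, the identity behind it is (my own restatement, not part of the original answer): $$A_1\times A_2\times\cdots\times A_n \;=\; \bigcap_{i=1}^{n}\bigl(\mathbb R^{\,i-1}\times A_i\times\mathbb R^{\,n-i}\bigr),$$ and each set in the intersection is Borel by step 1, so the product is Borel as a finite intersection of Borel sets.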
Not sure I should have answered this, it may be a homework problem... I'd have just written a comment but I'm not reputable enough to do so :)
Any standard reference on measure theory will provide a proof of the result you're asking about (say, Dudley's book).
-
No, this wasn't a homework problem. I've been wondering whether B(R^n) should be defined as the smallest sigma algebra containing all rectangles, or the smallest sigma algebra containing all products of Borel sets in R. I was trying to prove that these are equivalent, and getting worried that I couldn't and I was getting stuck trying to do step 1 as you suggested. Ok, just need to try harder then... – Cosmonut Sep 15 2010 at 11:15
When considering more general sigma algebras both definitions might not coincide - the reasonable definition is the first one (smallest sigma-algebra containing the open sets, so the usual definition of Borel sigma-algebra in a topological space), while the second definition would be a good way to define the product of sigma algebras. So in that case what you wanted is to show "the Borel sigma-algebra on the product space is equal to the product of the Borel sigma-algebras". – Julien Melleray Sep 15 2010 at 12:35
Ok, done. Proved Step 1, with the standard trick of: - Consider all sets of the form A1 x R^(n-1) which belong to Borel sets of R^n, where A1 is a set in R - Showed that was a sigma algebra - Since (a, b) x R^(n-1) is in Borel sets of R^n, A1 can any Borel set of R. Thanks for the help and the addendum. – Cosmonut Sep 16 2010 at 8:44
1
@Julien: As far as I can see, this is a different problem. In general, the $\sigma$-algebra generated by rectangles may be smaller than the $\sigma$-algebra generated by products of Borel sets (i.e., the product $\sigma$-algebra), since Borel (or open) sets in the original space are not necessarily generated by intervals (assuming the the space is linearly ordered so that it at least makes sense). On the other hand, the $\sigma$-algebra generated by open sets (i.e., the Borel algebra in the product) may be larger than the product $\sigma$-algebra, since ... – Emil Jeřábek Aug 17 2011 at 11:09
1
... in general an open set need not be a countable union of rectangles (or products of Borel sets, for that matter). So we are discussing three different $\sigma$-algebras. – Emil Jeřábek Aug 17 2011 at 11:12
show 1 more comment
http://mathoverflow.net/questions/103529/proof-of-a-combinatorial-identity-possibly-using-trigonometric-identities/103662

## Proof of a combinatorial identity (possibly using trigonometric identities)
For integers $n \geq k \geq 0$, can anyone provide a proof for the following identity?
$$\sum_{j=0}^k\left(\begin{array}{c}2n+1\\ 2j\end{array}\right)\left(\begin{array}{c}n-j\\ k-j\end{array}\right) = 2^{2k} \left(\begin{array}{c} n+k\\ 2k \end{array}\right)$$
I've verified this identity numerically for many values of $n$ and $k$, and suspect it to be true.
I found similar identities in http://www.math.wvu.edu/~gould/Vol.6.PDF, most notably:
$$\sum_{j=0}^k\left(\begin{array}{c}2n\\ 2j\end{array}\right)\left(\begin{array}{c}n-j\\ k-j\end{array}\right) = 2^{2k} \frac{n}{n+k}\left(\begin{array}{c} n+k\\ 2k \end{array}\right)$$
which is Eq. (3.20) in the above link, and
$$\sum_{j=0}^k\left(\begin{array}{c}2n+1\\ 2j+1\end{array}\right)\left(\begin{array}{c}n-j\\ k-j\end{array}\right) = 2^{2k} \frac{2n+1}{n-k}\left(\begin{array}{c} n+k\\ 2k+1 \end{array}\right)$$
which is Eq. (3.34) in the above link. The derivations of these two identities seem to rely on trigonometric identities, which I've been having trouble reconstructing.
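Since the identity was checked numerically anyway, here is a minimal spot-check using only the Python standard library (my own snippet, not from the question):

```python
from math import comb

def lhs(n, k):
    return sum(comb(2*n + 1, 2*j) * comb(n - j, k - j) for j in range(k + 1))

def rhs(n, k):
    return 4**k * comb(n + k, 2*k)

# Exhaustive check for small parameters.
assert all(lhs(n, k) == rhs(n, k) for n in range(12) for k in range(n + 1))
print("identity verified for all 0 <= k <= n < 12")
```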
-
4
math.upenn.edu/~wilf/AeqB.html – Steve Huntsman Jul 30 at 18:12
1
Maple knows this one: > sum(binomial(2*n+1,2*j)*binomial(n-j,k-j),j=0..k) assuming k::nonnegint, n::nonnegint,n>=k; $${\frac {{2}^{2\,k}\Gamma \left( 1+k+n \right) }{\Gamma \left( 2\,k+1 \right) \Gamma \left( n-k+1 \right) }}$$ – Robert Israel Jul 30 at 19:51
## 3 Answers
I'm surprised no one has posted a proof by double counting yet. First, I will rewrite the sum as $$F(n,k)=\sum_{j=0}^k\binom{2n+1}{2n-2j+1}\binom{n-j}{n-k}.$$ This counts the number of ways I can pick $2n-2j+1$ squares out of a $1\times (2n+1)$ grid, color them alternately black, white, black... etc. and then place a mark on $n-k$ white squares.
By deleting the first square as well as any immediate square following a marked white square we end up with a sequence of $n+k$ squares, $n-k$ of which are marked white squares and several others are colored.
Another way to count this is to choose the marked white squares first. This can be done in $\binom{n+k}{n-k}$ ways. And then specify the unmarked colored squares, this can be done in $2^{2k}$ ways. We get that $$F(n,k)=2^{2k}\binom{n+k}{n-k}$$ which is what we wanted.
Notice that I've left it as an exercise to show that deleting the squares is actually a bijection between the two sets, but this is quite easy to show. Indeed you only have to worry about adding back a square and deciding whether it should be colored or not. This can be done with a parity check on the number of colored boxes between consecutive marked boxes.
-
Let us compute the ordinary generating function of $k \mapsto c_{n+k,k}$, i.e. $S(X) = \sum_{k \geq 0 } c_{n+k,k} X^k$ (with notations as in Lierre's answer above) : $$S(X^2) = \sum_j \sum_k \binom{2n + 2k + 1}{2j} \binom{n+k-j}{k-j} X^{2k}$$ $$= \sum_j \sum_l \binom{2n + 2l + 2j + 1}{2j} \binom{n+l}{l} X^{2l+2j}$$ $$=\frac{1}{2} \sum_l \binom{n+l}{l} X^{2l} \sum_{\varepsilon = \pm 1} \sum_j \binom{2n + 2l + j + 1}{j} (\varepsilon X)^{j}$$ $$=\frac{1}{2} \sum_{\varepsilon = \pm 1} \sum_l \binom{n+l}{l} \frac{X^{2l}}{(1-\varepsilon X)^{2n+2l+2}}$$ $$=\frac{1}{2} \sum_{\varepsilon = \pm 1} \frac{1}{(1-\varepsilon X)^{2n+2}(1-\frac{X^2}{(1-\varepsilon X)^2})^{n+1}}$$ $$=\frac{1}{2} \sum_{\varepsilon = \pm 1} \frac{1}{(1- 2\varepsilon X)^{n+1}} =\sum_k \binom{n+2k}{2k} 2^{2k} X^{2k}$$ hence the result.
-
This is certainly not the proof you wanted, but it does work.
Let $a_{j,k,n}$ the term you want to sum, and let $b_{k,n}$ the right hand side. I denote by $S_x$ the shift operator w.r.t. the variable $x$. For example $S_k \cdot a_{j,k,n} = a_{j,k+1,n}$.
You can check that $$\left((-1+k-n)S_n+(1+k+n)\right)\cdot a_{j,k,n} = (S_j-1)\cdot \left(\frac{-j+2 j^2}{-3+2 j-2 n} a_{j,k,n}\right)$$ and $$\left((1 + 3 k + 2 k^2)S_k + (2 k + 2 k^2 - 2 n - 2 n^2)\right)\cdot a_{j,k,n} = (S_j-1)\cdot \left( \frac{\left(-j+2 j^2\right) (k-n)}{-1+j-k} a_{j,k,n}\right).$$
What is interesting in these identities is that the right-hand side is zero when you sum over $j$ from $0$ to $k$. Moreover, the operators on the left-hand side do not contain $j$, so they commute with the summation w.r.t. $j$.

In the end, you obtain that the sum $c_{k,n} = \sum_{j=0}^k a_{j,k,n}$ satisfies the recurrence relations $$\left((-1+k-n)S_n+(1+k+n)\right)\cdot c_{k,n} = 0$$ and $$\left((1 + 3 k + 2 k^2)S_k + (2 k + 2 k^2 - 2 n - 2 n^2)\right)\cdot c_{k,n} = 0.$$
These recurrence relations are also satisfied by $b_{k,n}$ ! With a careful checking of some initial conditions, this is enough to prove the equality.
I did not compute these so-called creative telescoping relations by hand; Mathematica did, with the package HolonomicFunctions.
`CreativeTelescoping[Binomial[2 n + 1, 2 j]*Binomial[n - j, k - j], S[j] - 1, {S[k], S[n]}]`
-
http://mathoverflow.net/questions/69815?sort=oldest

## How to motivate and interpret the geometric solutions of Hamilton-Jacobi equation?
Studying the Hamilton-Jacobi equation, I came across a generalization of the notion of its solutions, which is found already in the work of Sophus Lie.

By an H-J equation, I mean a first-order PDE $H\circ dS=0$ in an unknown scalar function $S$ defined on a smooth manifold $M$, where $H\in C^\infty (T^\ast M,\mathbb{R})$.
If $S$ is a solution then the image $\Lambda$ of its differential $dS$ is included in $H^{-1}(0)$ and has the following properties:
1. $\Lambda$ is a lagrangian submanifold of $(T^\ast M,d\theta_M)$,
2. $\Lambda$ is transversal to the fibers of $\tau_M^{\ast}:T^\ast M\to M$,
3. the restriction of $\tau_M^{\ast}$ to $\Lambda$ is injective.
Conversely, if a submanifold $\Lambda$ of $T^\ast M$, included in $H^{-1}(0)$, satisfies the properties 1, 2, and 3, then it is equal to the image of the differential of a solution, unique up to a constant.
But if a submanifold $\Lambda$ of $T^\ast M$, included in $H^{-1}(0)$, satisfies only the conditions 1 and 2, then, around each of its points, it is again equal to the image of the differential of a solution, but this can fail to hold globally.
The idea of Sophus Lie was to give up both conditions 2 and 3.
Adopting this point of view, we define a generalized (or geometric) solution of $H\circ dS=0$ to be any lagrangian submanifold $\Lambda$ of $(T^\ast M,d\theta_M)$ which is included in $H^{-1}(0)$.

I don't think that this generalization is made only for the sake of abstraction. In fact, considering generalized solutions, it is possible, arguing with techniques from symplectic geometry, to prove the local existence and uniqueness theorem for generalized and usual solutions at the same time.

But I am hoping to find more practical applications which illustrate the meaningfulness of geometric solutions. I would like to learn whether there is some physical or geometrical problem involving an H.-J. equation whose comprehension is significantly improved by the consideration of generalized solutions. So my question is:

What are the possible arguments and applications that motivate and help to interpret the notion of geometric solutions for a Hamilton-Jacobi equation?
-
@Mathphysicist: I have merged two of your tags in one, hoping to be more descriptive of the content. – Giuseppe Jul 9 2011 at 8:23
@Giuseppe: there's really no point for an geometric-theory-of-pdes tag. If you must, you should use the already existing geometric-analysis tag. – Willie Wong Jul 16 2011 at 2:04
## 3 Answers
A very interesting practical application is the problem of state estimation - for linear systems the answer is called the Kalman filter. Given a vector field $\dot{x} = a(x,v)$ and a measurement equation $y=c(x,w)$, compute the initial condition $x(t_0)$, the perturbation $v(t)$, and the measurement error $w(t)$ that minimize a cost function $J$. The cost is usually expressed as an integral over time of some function of $v$ and $w$.
Using Pontryagin's maximum principle or Bellman's dynamic programming, one arrives at a HJ equation which is used to find $v$. The additional step needed is to determine $x(t_0)$. It is a static minimization problem, which however needs to be repeated at each instant $t$ in the interval of interest. This is not a very practical answer. For linear systems with quadratic costs, the Kalman filter provides a recursive solution to the complete problem. In more general cases, the problem is much less studied either by engineers or by mathematicians. This is unlike the optimal control problem which has been studied extensively.
I think the geometry of the solutions is crucial. My understanding is that the filter equation is a particular symmetry of the Hamilton-Jacobi-Bellman partial differential equation - at least when everything is smooth. Meanwhile, the Hamiltonian vector field is a characteristic of the partial differential equation - also a particular symmetry, but not the one that gives a recursive solution to the estimation problem.
-
Dear Pait, I appreciate your thoughtful response very much. I have given a look at the corresponding sections of the book of Agrachev and Sachkov, but I have not found the Lagrangian submanifolds not transversal to the fibers considered as generalized solutions of the H.-J. equation for the optimal cost. Where could I look for such objects in the context of control theory? I would like to learn whether, by considering generalized solutions, it is possible to obtain more information than by using only usual solutions. Thank you. – Giuseppe Jul 9 2011 at 7:50
Would you mind helping me with (or pointing to) explanations for the terms "lagrangian submanifolds not transversal to the fibers" and "generalized solutions"? That would help me translate between the two sides of the literature, the mathematical and the engineering. Thanks! – Pait Jul 10 2011 at 21:38
I described this notion already in the text of my question. Please, I would like to know which points of my question are not clear enough, or are not written in proper English, so that I can correct them. Thank you in advance. – Giuseppe Jul 11 2011 at 13:37
Given the HJ eqn $H\circ dS=0$ in the unknown function $S$ on the smooth manifold $M$, where $H\in C^\infty(T^\ast M)$. A generalized solution is defined to be a submanifold of $T^\ast M$, the cotangent space of $M$, which is included in $H^{−1}(0)$ and lagrangian w.r.t. the canonical symplectic form $dθ_M$. Here $θ_M$ is the tautological, or Liouville, 1-form on $T^\ast M$. – Giuseppe Jul 11 2011 at 17:46
I think I wanted a reference, maybe to a book, with a more leisurely explanation. It's not that your text is in any way unclear, it's just that I have a different background and need to do my homework to learn the language better. – Pait Jul 13 2011 at 15:07
show 1 more comment
As you suspect these generalized solutions and their apparent singularities (=points of the Lagrangian submanifold where condition 2. fails) are unavoidable.
First observe that any Lagrangian submanifold contained in $H^{-1}(0)$ must be tangent to the Hamiltonian field $X_H$ (this is the method of characteristics). I assume here that $H^{-1}(0)$ is smooth and $2n-1$ dimensional. Now start with some non-characteristic classical initial data (= an $n-1$ dimensional submanifold in $H^{-1}(0)$ transversal to $\tau^*_M$ and transversal to $X_H$). If you let the initial datum flow with $X_H$ this will sweep out the unique solution in $T^*M$. For short times this Lagrangian manifold will be transversal, but at some point it can start to bend so that condition 2. fails. The projections to $M$ of points where transversality fails are called caustics in the literature.
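In canonical coordinates $(q,p)$ on $T^\ast M$ the flow used here is the standard characteristic (Hamiltonian) flow; written out for the reader (standard material, my addition): $$\dot q_i=\frac{\partial H}{\partial p_i},\qquad \dot p_i=-\frac{\partial H}{\partial q_i},$$ and the generalized solution is swept out by the characteristic curves emanating from the $(n-1)$-dimensional initial submanifold inside $H^{-1}(0)$.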
Here's the classical physics example which you'll find for example in Arnold's books (his PDE course but I think also in his mechanics book): in the particle picture, light particles all move along straight lines with the same speed $c$ in possibly different directions (but they don't interact). An initial datum would be given by a surface in the room and a direction field along this surface giving the initial direction of the light rays. Initially the light rays don't intersect, but after some time they might start to intersect. The solution $S(q)$ of HJ in this example describes the time after which the wave front arrives at a point $q$ in space. If light rays intersect, this function becomes multivalued.
By the way I'd be interested in the original source of Lie, could you add that to your question?
-
Dear Michael, I made that historical attribution not because I have read the original papers of Lie on transformation groups, written in the seventies of the nineteenth century, but because I learned it from Ch.5 §2.2 "The Geometry of Differential Equations" in "Geometry I, EMS 28" by Alekseevskij, Vinogradov, Lychagin. – Giuseppe Jul 9 2011 at 13:41
The famous KAM tori arose out of HJ considerations. They are Lagrangian tori. They were found by attempting to solve the HJ equation generally, and then finding one can only solve it when certain appropriately irrational frequency conditions hold. They occur in perturbations of integrable systems, or near `typical' linearly stable periodic orbits in a fixed Hamiltonian system. You can read about them in an Appendix to Arnol'd's Classical Mechanics, and also get some idea from Chris Golé's book 'Symplectic Twist Maps', or from Siegel and Moser's 'Stable and Random Motion'.
-
http://physics.stackexchange.com/questions/tagged/thermodynamics?sort=unanswered&pagesize=30

# Tagged Questions
Covers the study of (mostly homogeneous) macroscopic systems from a heat/energy/entropy point of view. Maybe combine with [tag:statistical-mechanics].
0answers
145 views
### How is the logarithmic correction to the entropy of a non extremal black hole derived?
I`ve just read, that for non extremal black holes, there exists a logarithmic (and other) correction(s) to the well known term proportional to the area of the horizon such that \$S = \frac{A}{4G} + K ...
0answers
70 views
### Mechanical Equivalent of Heat
Recently I have been looking up James Joule's experiment regarding the mechanical equivalent of heat. After viewing some drawings of the apparatus, I assumed that the lines holding the weights would ...
0answers
121 views
### Thermodynamic relations from Gibbs-Duhem
Given the Gibbs-Duhem relation $V dp = S dT - N d \mu$, I am having trouble deriving the following identity: $\ (\frac{\partial N}{\partial \mu})_{V,T} = N (\frac{\partial \rho}{\partial p})_T$ ...
0answers
210 views
### What's the efficency of a steam jet pump?
Jet pumps or venturi pumps are often stated as having a "terribly low" efficiency, steam jet pumps specifically are usually describes as "only justifiable when there's an abundant steam supply anyway" ...
0answers
775 views
### Inflating a balloon (expansion resistance)
I am doing a quick calculation on how to calculate the pressure needed to inflate a perfectly spherical balloon to a certain volume, however I have difficulties with the fact that the balloon (rubber) ...
0answers
26 views
### Lattice model completely constrained by boundary data
I am dealing with a lattice model that has the peculiar property that if I specify all the spins on the boundary, by local conservation laws, the whole lattice configuration (throughout the whole ...
0answers
39 views
### What is the physical meaning of fact, that Reissner-Nordstrom black hole is thermodynamically unstable?
It is known, that Reissner-Nordstrom black hole is thermodynamically unstable [1]. Does it mean, that there is no Reissner-Nordstrom black hole in physical world? Does it mean, that there may be ...
0answers
71 views
### Spontaneous conversion of heat into work at negative temperatures
Consider a heavy macroscopic object moving in a gas. Friction causes its kinetic energy to be converted into heat. Thermodynamically, there is (effectively) no entropy associated with the kinetic ...
0answers
85 views
### experiment proposal to validate microcausality
I've been wondering about microcausality for some time now (a recent question of mine regarding the topic) and i'm wondering if its possible to devise an experiment to detect potential violations I ...
0answers
266 views
### How do I determine the Internal Energy of ammonia at a pressure in temperature when the steam table doesn't say it
I just bought a steam table for my thermodynamics class since they don't allow use to use the one from the book for the tests. This one is different from the one in my textbook as it doesn't give the ...
0answers
73 views
### Finding two dimensional critical point
I'm reading an article about bi layered membranes which state that for the free energy function $f(\theta) = \theta \ln \theta + (1-\theta)\ln(1-\theta) + \chi \theta (1-\theta)$ Where $\phi_i$ is ...
0answers
451 views
### How do I derive the critical temperature for bose condensation in two dimensions?
In class we derived the 3D case, but there's a step I don't understand: N = g \cdot {V \over (2 \pi \hbar)^3} \cdot \int\limits_{0}^{\infty}{1 \over{e^{\left( E_p \over{K_B T}\right)}-1}} d^3 p = ...
0answers
19 views
### Can one get clear ice crystals from a dirty suspension?
Euteictic freeze crystallization is a method where an electrolytic solution is cooled and separated into a stream of (relativly) clean, pure ice and a salty brine. I know anectdotally of wine ...
0answers
30 views
### Will humid air mitigate airborn dust due to neutralization of static electricity?
I have came to understand that humid air will help prevent electrostatic forces that can propel dust and cause it to cling to surfaces. My first question: is this above statement true? If the answer ...
0answers
31 views
### In a non-degenerate plasma, why are e-e collision negligible compared to e-ion for thermal conduction?
I'm trying to make some order of magnitude estimates of heat transfer in stars - to better understand 1) why conduction is said to be negligible (for non-degenerate matter) and 2) when convection ...
0answers
85 views
### Dissipation when the temperature is not constant
Consider a process where some chemical species diffuses from one part of a system (which I'll call $A$) to another ($B$) at a rate $r$ $\text{mol}\cdot \mathrm s^{-1}$. If the system's temperature is ...
0answers
152 views
### 2nd law of thermodynamics: equivalency of statements
Clausius statement (according to Wikipedia): No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature. Clausius equality: ...
0answers
50 views
### Explain to me how this water cooler works?
Goodday all, I was recently reading up on a few projects that might be of interest to me when I found "CPU Bong water coolers", there isn't much online on them so I figure I would ask y'all. If you ...
0answers
48 views
### Calculate how hot PLA will become
I am trying to attach the shaft of a brass heating tip to a PLA component. My problem is that the tip will have to reach a temperature of about 200°C and the PLA can only handle a temperature of about ...
0answers
27 views
### Calculating the change in entropy in a melting process
I have a homework question that I'm completely stumped on and need help solving it. I have a $50\, \mathrm{g}$ ice cube at $-15\, \mathrm{C}$ that is in a container of $200\, \mathrm{g}$ of water at ...
0answers
42 views
### What defines the adiabatic flame temperature?
In a case, I have to solve, I need to describe the combustion of natural gas (Groningen natural gas to be more specific). However I am having some problems understanding the adiabatic flame ...
0answers
64 views
### what's the difference between linear n-atom molecule and nonlinear n-atom molecule?
I am reading a material about the degree of freedom for linear n-atom molecule and nonlinear n-atom molecule. Here is my analysis for a diatomic molecule, if there are two atoms, we have to use 3 ...
0answers
18 views
### Henry's Law in metals
For fixed temperature the concentration of a gas dissolved in a solution is directly proportional to the gas pressure $p$, i.e. $c_s (T,p) = f(T)p$. Since $H_2$ only dissolves in metals as single ...
0answers
31 views
### What temperatures can be reached in an air-to-air thermocompressor nozzle and why?
People are generally of the opinion that the boiler injector cannot be redesigned to run on air. In other words, an air-to-air thermocompressor that puts fresh air into a tank without a mechanical ...
0answers
30 views
### What is the meaning of $h_L - h_H$ for a heat engine?
My problem gives me a Carnot cycle heat engine with water as its working fluid, with $T_H$, $T_L$, and the fact that it starts from saturated liquid to saturated vapor in the heating process. I need ...
0answers
92 views
### Difficulties with understanding total entropy change and unavailabillty
Of course, I know the fact that the entropy of an isolated system never decreases. Neverthless what makes me confused about the entropy(or change of entropy) of an isolated system is the explanation ...
0answers
76 views
### Rotational Constant and Moment of Inertia of Fluorine gas
I have come across some homework question on thermodynamics which needs me to calculate $B$ of $F_2$ My attempt: $B= \frac{h}{8\pi^2cI}$ where $I=\mu r^2=\frac{m_1m_2}{m_1+m_2} r^2$ Atomic mass of ...
0answers
43 views
### How to solve state parameters using these givens for an ideal gas?
In a thermodynamic turbine using air as an ideal gas, given that you have a known inlet temperature value $T_i$, a known exit pressure value $P_e$, a known inlet and exit velocity $V_i$ and $V_e$, a ...
0answers
70 views
### How much money does an unused but plugged-in cellphone-charger waste in a year, if its not getting warm?
Is it right as xkcd states: You can use heat flow to come up with simple rule of thumb: If an unused charger isn’t warm to the touch, it’s using less than a penny of electricity a day. Or, more ...
0answers
59 views
### What's the underlying particle physics of endothermic reactions?
I don't just mean reactions that require heat to proceed, storing surplus energy in chemical bonds. I wonder about strongly endothermic reactions that suck heat out of environment. You take some ...
0answers
115 views
### An explanation for the Landauer's principle
Has anyone understood the Landauer's principle? What is the current status? In specific, is there a theoretical derivation of the Landauer's Principle?(not the heuristic one based on Salizard's ...
0answers
37 views
### Free Energy and quantum measurement
Free Energy must be expended to reset the state of an measurement apparatus. Is this statement valid in all situations? Is there a Definitive mathematical exposition?
0answers
96 views
### “This is not a perpetual motion machine, because reservoir temperatures are changing.” Is it a valid argument?
I've already faced this situation several times: given a statement (in area of thermodynamics) I used it to provide an example of some perpetual motion machine (of first or second kind). Therefore, I ...
0answers
49 views
### Temperature measurement, precision
I am having problems with one of the problems I have done (a) which is basically $\theta = 273.16K (19/15) = 346.00K$ For (b) I am given that the length is 15.00cm when the thermometer is incontact ...
0answers
35 views
### What is the fluid approximation for BCS action?
For a system with free fermion gas at finite density (i.e. free Dirac action), we have an ideal fluid description with rho, p, n. What is the fluid description for BCS action? Suppose we have a box ...
0answers
64 views
### Temperature of the CMB when the Earth formed and the faint young Sun paradox
The cosmic microwave background (CMB) has a modern temperature of about 2.7 K. At the time of the origin of the CMB, about 13.6 billion years ago, it had a temperature of about 3000 K. ...
0answers
154 views
### Centrifugal Compressor Flow Rate
For a centrifugal compressor, as found in most turbochargers on internal combustion engines, is there a noticeable change in flow rate versus a naturally aspirated flow rate? In other words, does the ...
0answers
166 views
### Is the $mL_c$ value for triangular and rectangular fins the same value?
I am looking at the solutions that my professor put up and I feel that he did something wrong. Here is the question and I will give my stab at the solution so you can see why I think that it is wrong. ...
0answers
237 views
### How equivalent are heat energy and work energy in connection with a spinning flywheel?
Let's say we have two identical spinning flywheels, that have arbitrary geometry, and are made of copper. Now we apply some heat energy at the center point of flywheel A, causing it to slow down a ...
0answers
59 views
### How many times brighter could the stars shine without raising the temperature of space?
If my understanding is correct, the temperature of space (as defined by the temperature that a black-body will reach) has been decreasing since the big bang. It has never increased. Additionally, ...
0answers
279 views
### Do the properties of two streams flowing into an engine add together when they mix?
Let's say I have two stream supplys that have separate mass flow rates, enthalpies, and velocities that flow into an engine. Upstream of the engine's inlet, the two streams mix to form one stream that ...
0answers
44 views
### Air pressure in balloon
I have to calculate the air pressure inside of an hot air balloon. After some searching I found out that I can use the ideal gas law: PV = nRT (from Wikipedia) So to get the pressure in the balloon I ...
0answers
20 views
### Total massflow through heat exchanger
I am working on a project and I stumbled on a problem. The project is to design a heat pump to replace the old system (actual problem, not some homework problem). There are 100 or so induction units ...
0answers
30 views
### How connected thermodynamical stability and dynamical stability for black holes?
Criteria for thermodynamical stability is the convex of entropy. But for black hole entropy is non-additive.
0answers
13 views
### What is the effect of an increase in pressure on latent heat of vaporization?
What is latent heat of vaporization ($L_v$) in the first place? Wikipedia seems to indicate that it is the energy used in overcoming intermolecular interactions, without taking into account at all any ...
0answers
19 views
### Heat transfer in fluid between two horizontal plates vs unconfined case
I often see the correlation for turbulent heat transfer between liquid cells published by Globe and Dropkin (1959). In the original paper the fluid was confined between two horizontal plates and ...
0answers
39 views
### Lambda transition data points of $\require{mhchem}\ce{^4He}$
I'm looking to get some data on the lambda transition of $\require{mhchem}\ce{^4He}$. I need the data points of the specific heat vs. temperature graph, if that makes sense.
0answers
39 views
### What does “condensation releases heat” mean?
In meteorology it is said that in an air parcel cooled below dew point, condensation "releases heat". What does "releases" mean? to where? If the heat results in a rise of temperature, what gets ...
0answers
26 views
### Heat of adsorption from fugacity data
I have a set of data relating the fugacity ($\approx$ pressure) to the loading for a given set of temperatures. There are three temperature sets each having five fugacity vs. loading points. The ...
0answers
26 views
### Increase in number of micro states explanation or restatement of second law?
Is the boltzmann's expression of entropy as log of micro states leading to the formulation that system is more likely to be in a macrostate with more no. Of micro states really is an explanation or ...
http://mathoverflow.net/questions/97769/approximation-theory-reference-for-a-bounded-polynomial-having-bounded-coefficien

## Approximation theory reference for a bounded polynomial having bounded coefficients
Let $P(x)$ be a real polynomial of degree at most $d$. Assume $|P(x)| \leq 1$ for $|x| \leq 1$. I would like a bound saying that each coefficient of $P(x)$ is at most $C^d$ in magnitude, for some absolute constant $C$.
This is surely a well-known, basic fact in approximation theory and I'm looking for a proper reference. I know one very recent paper which writes out a proof using the standard simple idea (Lagrange interpolation) -- Lemma 4.1 from a paper of Sherstov here:
Sherstov obtains $C = 4e$; I don't think either of us particularly cares about getting the sharpest constant.
In any case, Sherstov and I agree that this must have appeared somewhere long ago. Could anyone provide a reference? Thanks!
-
1
Just out of curiosity, what's a lower bound on the best constant $C$? E.g. it can't be smaller than $2$, because by the Chebyshev approximation theory, the leading coefficient of $P$ satisfies a sharp inequality $|a_d|\le 2^{d-1}\|P\|_{\infty,[-1,1]}$. – Pietro Majer May 24 at 9:22
That's a good question; I also do not know a lower bound higher than $2$. – Ryan O'Donnell May 24 at 15:35
## 3 Answers
Dear Ryan, I hope the following references will be useful for you:
V.A. Markov solved the problem you posed back in 1892; see pages 80-81 in
http://www.math.technion.ac.il/hat/fpapers/vmar.pdf
Compare also the book
I.P. Natanson: Constructive Function Theory, Vol. I. Uniform Approximation, F. Ungar Publishing, New York, 1964, page 56.
You will find more detailed information in the papers
H.-J. Rack: On V.A. Markov's and G. Szegö's inequality for the coefficients of polynomials in one and several variables, East Journal on Approximations 14 (2008), pages 319 - 352
H.-J. Rack: On the length and height of Chebyshev polynomials in one and two variables, East Journal on Approximations, 16 (2010), pages 35 - 91.
-
Thank you, the top of p. 81 is exactly what I'm looking for. And indeed it jibes with Fedja's response; it's not too hard to show that the bounds Markov gives are maximized when '2i' = 1 - 1/sqrt(2), leading to Fedja's bound. – Ryan O'Donnell Sep 20 at 16:03
This is an answer to Pietro rather than to Ryan. To find the sharp $C$ is easy. Note first that the maximal coefficient and the maximal value on the unit circle are pretty much the same things as far as the exponential rate of growth is concerned: the maximal coefficient is dominated by the maximum on the unit circle by Cauchy and $d+1$ times the maximal coefficient dominates the maximum on the circle by the triangle inequality. Now, the Chebyshev polynomial is defined by $$P(\frac{z+z^{-1}}2)=\frac{z^d+z^{-d}}2$$ Plugging in $z=i(\sqrt 2+1)$, we see that $|P(i)|\approx(\sqrt 2+1)^d$ (up to a constant factor), so $\sqrt 2+1$ is unbeatable. On the other hand, this value is easy to obtain. Take any polynomial $P$ that is bounded by $1$ on $[-1,1]$ and consider the analytic function $$F(z)=z^{-d}P(\frac{z+z^{-1}}2)$$ in the domain $|z|\ge 1$. It is bounded there, so by the maximum principle, it is bounded by its maximum on the unit circle, which is $1$. Thus, $|P(\frac{z+z^{-1}}2)|\le |z|^d$ for every $z$ outside the unit disk. The preimage of the unit circumference under the mapping $z\mapsto \frac{z+z^{-1}}2$ lies in the disk $|z|\le\sqrt 2+1$ (all points outside that disk satisfy $|\frac{z+z^{-1}}2|\ge \frac{|z|-|z|^{-1}}2>1$), so $|P(w)|\le(\sqrt 2+1)^d$ for $|w|=1$.
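A quick numerical sanity check of the rate $\sqrt 2+1$ (my own snippet; it assumes NumPy and looks at the power-basis coefficients of the Chebyshev polynomial $T_d$, the extremal polynomial in the argument above):

```python
import numpy as np

for d in (5, 10, 20, 40):
    cheb_coeffs = np.zeros(d + 1)
    cheb_coeffs[d] = 1.0                                           # T_d in the Chebyshev basis
    power_coeffs = np.polynomial.chebyshev.cheb2poly(cheb_coeffs)  # convert to powers of x
    largest = np.max(np.abs(power_coeffs))
    print(d, largest ** (1.0 / d))    # grows toward 1 + sqrt(2) as d increases
```

The printed $d$-th roots increase slowly toward $1+\sqrt2\approx 2.414$, consistent with the claim that this exponential rate is sharp.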
Returning to the (much more difficult) Ryan's question "Where is that all written?", I can more or less guarantee that Bernstein knew it well but the earlier history is lost in a dense fog and my eyesight is rather weak, so I prefer to leave it to someone else...
-
Thanks Fedja! I'm accepting your answer for now since I learned a lot from it. I'd still be happy if someone in the area could suggest a generic citation I could put in the paper I'm working on :) – Ryan O'Donnell Jun 2 at 17:46
I think that an interesting historical point to mention is that this problem was first posed and solved for quadratic polynomials by Mendeleev (a chemist). There is a nice little article about the problem, including its origins, in the American Mathematical Monthly:
• R.P. Boas, Extremal problems for polynomials, Amer. Math. Monthly 85 (1978), No. 6, 473--475.
You can also find Markov's theorem written up (including generalizations to polynomials of several variables) in the following textbooks:
• P.B. Borwein and T. Erdelyi, Polynomials and Polynomial Inequalities, GTM 161, Springer, 1995.
• P. Borwein, Computational Excursions in Analysis and Number Theory, CMS Books in Mathematics, Springer, 2007.
-
http://math.stackexchange.com/questions/250096/calculating-the-mean-of-a-discrete-r-v-in-a-question

# Calculating the mean of a discrete R.V in a question
I have the following HW question:
There are $N$ balls in a box, $m$ balls with an $S$ for success and $N-m$ balls with an $F$ for failure. Choose $n$ balls at random ($n\leq N$) and let $X$ = the number of successes.
a. Calculate the probability function $p(\cdot)=P(X=\cdot)$
b. Calculate $EX$
I have managed to solve the first part of the question: $$P(X=k)=\frac{\binom{m}{k}\cdot\binom{N-m}{n-k}}{\binom{N}{n}}$$
But I am having difficulty with the second part, I need to evaluate $$EX=\sum_{k=0}^{\min\{n,m\}}\frac{\binom{m}{k}\cdot\binom{N-m}{n-k}}{\binom{N}{n}}\cdot k$$
I have expanded the binomials by definition and I am stuck there.

How can I calculate $EX$? Any help is appreciated!
-
## 1 Answer
By symmetry, the probability of success on each trial is the same as on each other trial.
(Here's a point where people get confused: They say "Doesn't the probability of success on the second trial depend on whether there's a success on the first trial?". The answer is that the conditional probability of success on the second trial, given the outcome of the first trial, does depend on the outcome of the first trial. But marginal (or "unconditional") probability of success on the second trial is nonetheless the same as the probability of success on the first trial.)
Let $$X_i = \begin{cases} 1 & \text{if success occurs on the $i$th trial,} \\ 0 & \text{if not.} \end{cases}$$ Then $\mathbb E(X) = \mathbb E(X_1+\cdots+X_n) = \mathbb E(X_1)+\cdots+\mathbb E(X_n)$, and all the terms in that last sum are equal.
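Spelling out the final step (my addition, in the answer's notation): by the symmetry argument above, each $X_i$ is Bernoulli with $\mathbb P(X_i=1)=m/N$, so $$\mathbb E(X)=\sum_{i=1}^{n}\mathbb E(X_i)=n\cdot\frac{m}{N}.$$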
-
Thank you for the answer, I didn't even think to use linearity and I got really stuck working with the definition. – Belgi Dec 9 '12 at 23:25
http://physics.stackexchange.com/questions/12030/equation-of-motion-for-explicit-time-dependent-potential?answertab=oldest

# Equation of motion for explicit time dependent potential
What is the equation of motion for a single scalar field Lagrangian density in which the potential explicitly depends on time? For example: $$U(\phi,t)=\frac{1}{2}\phi^2 - \frac{1}{3} e^{t/T}\phi^3 + \frac{1}{8}\phi^4$$ where $T$ is a constant.
-
## 1 Answer
It's just a Klein-Gordon equation with a RHS. Explicitly, the E-L equation for a scalar field is $$\partial_{\mu} {\partial {\mathcal L} \over \partial \phi_{,\mu}} - {\partial {\mathcal L} \over \partial \phi} = 0$$ so for your potential we have $$\square \phi + \phi - e^{t/T} \phi^2 + {1 \over 2} \phi^3 = 0$$
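To spell out the middle step (my addition; it assumes the standard Lagrangian density $\mathcal L=\tfrac12\,\partial_\mu\phi\,\partial^\mu\phi-U(\phi,t)$, which the answer does not write down): $$\frac{\partial\mathcal L}{\partial\phi_{,\mu}}=\partial^{\mu}\phi\quad\Rightarrow\quad\partial_\mu\partial^\mu\phi=\square\phi,\qquad \frac{\partial U}{\partial\phi}=\phi-e^{t/T}\phi^{2}+\tfrac12\phi^{3},$$ so the Euler-Lagrange equation is $\square\phi+\partial U/\partial\phi=0$. The factor $e^{t/T}$ is never differentiated with respect to $t$ because the explicit time dependence sits only in $U$, which enters the equation of motion solely through $\partial U/\partial\phi$.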
-
Can you derive this equation from first principles via the variation of the action? My point is: why is the coefficient of the cubic $\phi$ term not differentiated with respect to time? – Rakesh Jul 9 '11 at 7:00
1
– Marek Jul 9 '11 at 17:31
@Rakesh: For a slightly more interesting model with explicit time dependence, consider moving the factor $e^{t/T}$ to instead multiplying the kinetic term... – Qmechanic♦ Jul 9 '11 at 19:13
http://mathhelpforum.com/advanced-algebra/129873-prove-lhs-rhs-using-only-properties-det.html

# Thread:
1. ## Prove that LHS=RHS using ONLY properties of Det.
Q) Without directly expanding the det., but using only the well-known properties, prove that:
Tried solving many times but couldn't find a proper method/approach. Keep getting stuck every time...
[Attached thumbnail: the determinant in question, namely the determinant of the matrix $A$ written out in the reply below, to be shown equal to $(ab+bc+ca)^3$.]
2. Let A denote the matrix $\begin{bmatrix}-bc&b^2+bc&c^2+bc\\ a^2+ac&-ac&c^2+ac\\ a^2+ab&b^2+ab&-ab\end{bmatrix}$. The eigenvalues of A are the roots of the equation $\begin{vmatrix}-bc-\lambda&b^2+bc&c^2+bc\\ a^2+ac&-ac-\lambda&c^2+ac\\ a^2+ab&b^2+ab&-ab-\lambda\end{vmatrix} = 0$. If we try putting $\lambda = -(ab+bc+ca)$, then that equation becomes $\begin{vmatrix}a(b+c)&b(b+c)&c(b+c)\\ a(a+c)&b(a+c)&c(a+c)\\ a(a+b)&b(a+b)&c(a+b)\end{vmatrix} = 0$. The vector (x,y,z) will be an eigenvector for $\lambda$ if $\begin{bmatrix}a(b+c)&b(b+c)&c(b+c)\\ a(a+c)&b(a+c)&c(a+c)\\ a(a+b)&b(a+b)&c(a+b)\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}$. But that system of equations reduces to $ax+by+cz=0$, which has two linearly independent solutions. Therefore $\lambda = -(ab+bc+ca)$ is indeed an eigenvalue of A, with multiplicity at least 2. But the sum of the three eigenvalues of A is equal to the trace of A, which is $-(ab+bc+ca)$. Therefore the third eigenvalue must be $ab+bc+ca$. Finally, the determinant of A is the product of its eigenvalues, namely $(ab+bc+ca)^3$.
I don't know if that counts as a proof "using ONLY properties of Det.", but at least it didn't involve expanding the determinant.
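A quick symbolic check of both the trace and the determinant used above (my own snippet, assuming SymPy is available):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
A = sp.Matrix([[-b*c,       b**2 + b*c, c**2 + b*c],
               [a**2 + a*c, -a*c,       c**2 + a*c],
               [a**2 + a*b, b**2 + a*b, -a*b      ]])

print(sp.simplify(A.det() - (a*b + b*c + c*a)**3))   # 0: det(A) = (ab+bc+ca)^3
print(sp.simplify(A.trace() + (a*b + b*c + c*a)))    # 0: trace(A) = -(ab+bc+ca)
```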
3. Thanks for the reply Opalg, much appreciated! Using eigenvalues will work too. The main thing was not to use expansion.

BTW, "properties of determinants" means manipulation of the determinant by basic operations (+, -, *, /), row/column operations, etc...
Yeah! got it, after a fresh start...
$$\text{L.H.S.}=\begin{vmatrix}-bc&b^2+bc&c^2+bc\\a^2+ac&-ac&c^2+ac\\a^2+ab&b^2+ab&-ab\end{vmatrix}$$
Multiply rows 1, 2, 3 by $a$, $b$, $c$ respectively (compensating with a factor $1/abc$):
$$=\frac{1}{abc}\begin{vmatrix}-abc&ab^2+abc&ac^2+abc\\a^2b+abc&-abc&bc^2+abc\\a^2c+abc&b^2c+abc&-abc\end{vmatrix}$$
Take the common factors $a$, $b$, $c$ out of columns 1, 2, 3:
$$=\frac{abc}{abc}\begin{vmatrix}-bc&ab+ac&ac+ab\\ab+bc&-ac&bc+ab\\ac+bc&bc+ac&-ab\end{vmatrix}$$
Replace row 1 by $R_1+R_2+R_3$; every entry of the new first row equals $ab+bc+ca$:
$$=\begin{vmatrix}ab+bc+ca&ab+bc+ca&ab+bc+ca\\ab+bc&-ac&bc+ab\\ac+bc&bc+ac&-ab\end{vmatrix}$$
Factor $ab+bc+ca$ out of row 1:
$$=(ab+bc+ca)\begin{vmatrix}1&1&1\\ab+bc&-ac&bc+ab\\ac+bc&bc+ac&-ab\end{vmatrix}$$
Apply $C_1\to C_1-C_3$ (note $ac+bc-(-ab)=ab+bc+ca$):
$$=(ab+bc+ca)\begin{vmatrix}0&1&1\\0&-ac&bc+ab\\ab+bc+ca&bc+ac&-ab\end{vmatrix}$$
Expand along the first column:
$$=(ab+bc+ca)^2\begin{vmatrix}1&1\\-ac&bc+ab\end{vmatrix}$$
Apply $C_2\to C_2-C_1$:
$$=(ab+bc+ca)^2\begin{vmatrix}1&0\\-ac&ab+bc+ca\end{vmatrix}$$
$$=(ab+bc+ca)^3=\text{R.H.S.}\quad\text{[Proved]}$$
4. Originally Posted by romit
Yeah! got it, after a fresh start...
Nice proof!