http://math.stackexchange.com/questions/18226/setting-up-an-integral-to-find-a-cones-surface-area/18245
# Setting Up an Integral to Find A Cone's Surface Area I tried proving the formula presented here by integrating the circumferences of cross-sections of a right circular cone: $$\int_{0}^{h}2\pi sdt, \qquad\qquad s = \frac{r}{h}t$$ so $$\int_{0}^{h}2\pi \frac{r}{h}tdt.$$ Integrating it got me $\pi h r$, which can't be right because $h$ isn't the slant height. So adding up the areas of differential-width circular strips doesn't add up to the lateral surface area of a cone? EDIT: I now realize that the integral works if I set the upper limit to the slant height - this works if I think of "unwrapping" the cone and forming a portion of a circle. The question still remains though: why won't the original integral work? Won't the value of the sum of the cylinders' areas reach the area of the cone as the number of partitions approaches infinity? - 1 The problem is that you need to scale your surface area element appropriately, see en.wikipedia.org/wiki/Surface_integral, en.wikipedia.org/wiki/Surface_of_revolution and also en.wikipedia.org/wiki/Pappus%27s_centroid_theorem I hope you're aware of the fact that you need absolutely no calculus here: just observe that the cone can be built out of a circular sector of the plane. – t.b. Jan 20 '11 at 3:07 You can't integrate circumferences to get a surface area for the same reason you can't integrate points to get a length. – Qiaochu Yuan Jan 20 '11 at 3:31 Yuan: But the circumferences are multiplied by $dt$ – G.P. Burdell Jan 20 '11 at 3:43 ## 5 Answers You seem to be ignoring the fact that s and r vary as the segment you consider varies. By using the same variable names it appears that you are confusing them to be constants... Anyway, for a derivation, look at the following figure: This is a cross-section of the cone. The area of the strip of width $\displaystyle dh$ that corresponds to $\displaystyle h$ (from the apex) is $\displaystyle 2\pi r \frac{dh}{\cos x}$ Now $\displaystyle r = h \tan x$ Thus $\displaystyle dA = 2 \pi h \frac{\tan x}{\cos x} dh$ Thus the total area $$= \int_{0}^{H} 2 \pi h \frac{\tan x}{\cos x} dh = \pi H^2 \frac{\tan x}{\cos x} = \pi (H \tan x) \left(\frac{H}{\cos x}\right) = \pi R S$$ Hope that helps. - See the discussion to a previous question here which might help - I appreciate the thought process and did something similar as did by Mr G.P Burdell. To add to the confusion (I apologise!) let me put my point forward as this: Consider a Cone with height H and radius R. Let dh and dr be the respective changes in H and R. Therefore $$Volume = \int_0^H \pi{r^2} dh$$ Now “r” is a function of “h” so $$h = H- \frac{H}{R}r$$ $$dh = - \frac{H}{R}dr$$ Substituting the same in the above equation we get the integral as $$\int_R^0 \pi{r^2} \frac{(-H)}{R}dr$$ $$Volume = \frac{\pi}{3}R^2H$$ If you do the same process for the surface area you will end up getting $$Surface Area = \pi RH$$ Which is not true. The Surface Area of a Cone is $$= \pi RS$$ where S is Slant Height of the Cone. If we try to argue on this explanation of Surface Area then our explanation for Volume is in contradiction. - Ok, so I've been thinking about this for a few days, and I asked the same question on physicsforums.com. And luckily, someone posted an answer that explained the problem. Here is the question that I posted: http://www.physicsforums.com/showthread.php?p=4031752. 
If you scroll all the way to the second-to-last post, you will see a person who says, "The problem I think is that you wouldn't get the surface area if you used cylinders whose sides aren't parallel to the sides of the shape. For example, consider trying to "square" the perimeter of a circle. For illustration: http://qntm.org/trollpi "The sides of the square are cut into many pieces, and then "steps" are created from it, but you can always combine the steps back into the side of the square. Consider the fourth picture in the link, and look at the upper half of the circle. The whole upper side of the square is there, it is just in pieces. So you can jag them all you want, the perimeter is the same and doesn't start approximating a circle. "In the cross section of the cone, considering it as a 2-d object, you can also try to calculate the perimeter of the slice. If you add up the sides of the rectangles, they will always add up to $2H$, no matter how small you make them; however, you can check using Pythagoras that it should be more, because it's twice (for the two sides of the slice) the square root of $H^2+R^2$." That answer made a lot of sense to me, and I hope that it makes sense to everyone else here :) - The diagram makes sense to me. The area around our slice clearly seems to me to be the circumference at that point multiplied by $dS$, not $dh$, as we might be tempted to do just because $dh$ is easily defined in terms of $r$. Handily, $dS$ is just a constant times $dr$: $dS = dr/\cos(x)$ - 1 Welcome to MSE! It helps readability to format questions/answers using MathJax (see FAQ). Regards – Amzoti Apr 30 at 17:53
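As an editorial aside, here is a small numerical sketch (my addition, not part of the thread) that makes the point concrete: it approximates the lateral surface of a cone by horizontal strips, once with vertical thickness $dt$ (the "cylinder" slices from the question) and once with slant thickness $ds$. The first sum converges to $\pi R H$, the second to $\pi R S$. The cone dimensions below are arbitrary example values.

```python
# Compare "cylinder" strips (thickness dt) with "frustum" strips (thickness ds)
# for a cone of radius R and height H. Illustrative check only.
import math

R, H = 3.0, 4.0                          # example cone; slant height S = 5
S = math.sqrt(R**2 + H**2)

for n in (10, 1000, 100000):
    dt = H / n
    r = lambda t: R * t / H              # radius of the cross-section at height t
    # circumference * vertical thickness dt  ->  tends to pi*R*H
    cylinder_sum = sum(2 * math.pi * r((k + 0.5) * dt) * dt for k in range(n))
    # circumference * slant thickness ds = dt*S/H  ->  tends to pi*R*S
    frustum_sum = sum(2 * math.pi * r((k + 0.5) * dt) * dt * S / H for k in range(n))
    print(n, round(cylinder_sum, 4), round(frustum_sum, 4))

print("pi*R*H =", round(math.pi * R * H, 4), "  pi*R*S =", round(math.pi * R * S, 4))
```

However fine the partition, the cylinder sums never pick up the slant factor $S/H$, which is exactly the scaling of the surface-area element mentioned in the first comment.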
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9299775958061218, "perplexity_flag": "head"}
http://mathoverflow.net/questions/79270/a-hypercube-related-graph
## A hypercube-related graph For integer $n\ge 3$, consider the graph on the set of all even vertices of the $n$-dimensional hypercube $\{0,1\}^n$ in which two vertices are adjacent whenever they differ in exactly two coordinates. This is an $(n(n-1)/2)$-regular graph on $2^{n-1}$ vertices. Is there any standard name / notation for this graph? Is there a way to construct it from some "basic" graphs using standard graph operations (like products of graphs)? Has anybody ever studied the isoperimetric problem for this graph? Thanks! - isoperimetry looks very similar to that of the usual hypercube graph, as the distance between two vertices is half of the distance between them in the hypercube – Fedor Petrov Oct 27 2011 at 15:52 @Fedor: The isoperimetric problem for a given graph is to determine, for every given $n$, the maximum possible number of edges of an induced subgraph of order $n$. I cannot see any immediate relation between the isoperimetric problem for the hypercube and the graph I am interested in. – Seva Oct 27 2011 at 17:14 Isn't that just the line graph of the ordinary hypercube graph? – Zsbán Ambrus Nov 1 2011 at 22:39 @Zsban: certainly not! To begin with, the line graph of the hypercube has order $n\cdot 2^{n-1}$ (the number of edges of the hypercube), while the graph in question has order $2^{n-1}$. – Seva Nov 3 2011 at 7:18 Seva: you're right, sorry. – Zsbán Ambrus Nov 4 2011 at 8:14 ## 2 Answers Conway & Sloane's "Sphere Packings, Lattices and Groups" references Coxeter's "Regular Polytopes" for the phrase "halfcube", but Coxeter only uses the notation $h\Pi_n$, saying $h$ can be taken to stand for half- or hemi-, for an arbitrary polytope $\Pi_n$ {$p, q, \ldots, w$} with even $p$ (in your case, {$4,3,3,\ldots, 3$}). This construction is section 8.6 in Coxeter. Since then, halfcube seems to have lost favour, and hemi-cube has become the name for a construction of quotienting out vertices, while the term demicube (or demihypercube if you want to be explicit about using hypercubes and not cubes) is reserved for the construction of deleting vertices of a hypercube. See Conway, Burgiel and Goodman-Strauss's "The Symmetries of Things." Chapter 26 covers this, where they call them hemicubes, and draw some lovely pictures. Specific dimensional cases have different names. Your $n=3$ case is the complete graph $K_4$. $n=4$ is the 16-cell, also called a hexadecachoron in older books, and happens to be a cross-polytope (this does not continue in higher dimensions). By $n=5$, the polytopes begin to take shape as their own specific family and no longer have multiple names. See http://en.wikipedia.org/wiki/Demihypercube, and various dimension-specific pages there. I do not know anything about the isoperimetric problem for these graphs, but there has likely been work done on the $n \leq 4$ cases, since those graphs also show up as other constructions. - 1 For $n=5$ it's the configuration of $16$ lines on a generic Del Pezzo surface of degree $4$ (complete intersection of two quadrics in 4-space): two lines meet iff the corresponding vertices are disjoint. [The induced graph on the 10-vertex co-neighborhood is Petersen.] This generalizes to higher odd $n$; apparently this was first shown in Miles Reid's thesis. – Noam D. Elkies Oct 28 2011 at 0:41
This graph is known as the half-cube. I don't know about the other question. -
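As a quick sanity check on the definition (an illustrative sketch I've added, not part of the thread), the graph can be built directly from the even-weight vertices of $\{0,1\}^n$, and the script below confirms it is $n(n-1)/2$-regular on $2^{n-1}$ vertices for small $n$.

```python
# Build the half-cube graph: vertices are even-weight 0/1 strings of length n,
# edges join strings differing in exactly two coordinates. Illustrative sketch.
from itertools import product

def halved_cube(n):
    verts = [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 0]
    edges = {v: [] for v in verts}
    for i, v in enumerate(verts):
        for w in verts[i + 1:]:
            if sum(a != b for a, b in zip(v, w)) == 2:
                edges[v].append(w)
                edges[w].append(v)
    return verts, edges

for n in range(3, 7):
    verts, edges = halved_cube(n)
    degrees = {len(nbrs) for nbrs in edges.values()}
    print(n, len(verts), degrees)   # expect 2**(n-1) vertices, all of degree n*(n-1)//2
```

For $n=3$ the output shows 4 vertices of degree 3, i.e. the complete graph $K_4$, matching the answer above.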
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388811588287354, "perplexity_flag": "middle"}
http://stats.stackexchange.com/tags/variance/info
Tag info About variance The expected squared deviation of a random variable from its mean; or, the average squared deviation of data about their mean. The variance of a random variable $X$ is the expected squared deviation from its mean: $$\mbox{Var}\left[X\right] = \mbox{E}\left[\left(X - \mbox{E}\left[X\right]\right)^2\right] = \mbox{E}\left[X^2\right] - \left(\mbox{E}\left[X\right]\right)^2.$$ As such, the variance captures the "spread" of a random variable around its expected value. The square root of the variance is the standard deviation. The variance of a dataset is the mean squared deviation from its mean, sometimes called a "population variance." The two kinds of variance are related. Variance in the first sense is a property of a random variable. One way to estimate that property from data (viewed as $n$ independent realizations of the variable) uses the population variance of the data. A related estimator, called the "sample variance," is equal to $n/(n-1)$ times the population variance. Not all random variables have finite variance. This occurs when $\mbox{E}\left[X^2\right]$ diverges. For example, the Cauchy distribution (Student's t distribution with 1 degree of freedom) does not have a finite variance.
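A short numerical illustration of the $n/(n-1)$ relationship (added here; it is not part of the original tag wiki). With NumPy, `ddof=0` gives the population variance of the data and `ddof=1` the sample variance; the distribution parameters below are arbitrary.

```python
# Population vs. sample variance of a dataset (illustrative example).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=10)   # 10 draws from N(2, 9)
n = len(x)

pop_var = x.var(ddof=0)    # mean squared deviation about the sample mean
samp_var = x.var(ddof=1)   # the usual unbiased estimator of Var[X]

print(pop_var, samp_var)
print(samp_var / pop_var, n / (n - 1))   # these two ratios agree exactly
```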
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9195050597190857, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/121079-without-choice-axiom.html
# Thread: 1. ## With or without Choice-axiom? Is it possible to make an injection X*X --> P(X) (*= cartesian product, P(X) is the powerset of X, that is, the set of all subsets of X) without the choice-axiom? I tried to construct such an injection but it seems somewhat hard. There's of course an injection X--> P(X) given by x -> {x} But X*X--> P(X) doesn't work with <x,y> -> {x,y} You need a way to distinguish between <x,y> and <y,x> somehow. 2. Originally Posted by Dinkydoe Is it possible to make an injection X $\times$ X --> P(X) (P(X) is the powerset of X, that is, the set of all subsets of X) without the choice-axiom? This may be of no use to you. But this is the usual theorem. Prove that $A\times B\subseteq \mathcal{P}\left(\mathcal{P}(A\cup B)\right)$. This follows from the definition of ordered pair: $(a,b) \equiv \left\{ {\{ a\} ,\{ a,b\} } \right\}$. That theorem gives you a natural injection: $\Phi:A\times B\mapsto \mathcal{P}\left(\mathcal{P}(A\cup B)\right)$. But as I said this does not answer the question as stated. However, it may give you some ideas. 3. Yes, I know this. This gives a natural injection: X*X--> P(P(X)) (As you said, by using Kuratowski's definition of an ordered pair: by sending <x,y> -> {{x},{x,y}}) I was hoping to go even one step further and construct an injection X*X -> P(X), which should be possible for any infinite set X. But I suspect it's impossible to construct such an injection without the Choice-axiom. Was just a curiosity. 4. Originally Posted by Dinkydoe But I suspect it's impossible to construct such an injection without the Choice-axiom. Was just a curiosity. I also suspect that is correct. By the way, using some basic code you can be clear in your posts. For example [tex]A \times B[/tex] gives the output $A \times B$. That means learning basic LaTeX. 5. I am actually familiar with LaTeX 6. Sometimes you can't make it: for example, the case where X has three elements. If X has four elements, then it is very easy since X*X and P(X) have the same cardinality; every injection satisfies our requirement. If X has more elements, |X|>4: since every set can be well ordered, w.l.o.g. suppose (X, <) is a well-ordered set. Define the mapping F:X*X to P(X) as follows: F(x,x)={x}; F(x,y)={x,y} if x,y are distinct and x<y; F(x,y)=the complement of {x,y} if x,y are distinct and x>y. 7. Since every set can be well ordered, w.l.o.g. suppose (X, <) is a well-ordered set. Well yes, but then you've actually assumed the choice-axiom. The choice axiom is equivalent to the well-ordering theorem of Zermelo. And the set X is not necessarily an ordered set! And I'd like to know whether we can construct such an injection without assuming the choice-axiom. And I suspect this is impossible, since it seems the only way to determine the difference between <x,y> and <y,x> is having some well-order on X 8. If |X| is less than or equal to 4, it is very easy. Otherwise, classify the elements into two groups: for the elements of the form (x,x) in X*X, associate with {x}. for the element pairs (x,y) and (y,x), where x,y are distinct, in X*X, associate with the pair $\{x,y\} \text{ and }\{x,y\}^c$, no matter the detail of how they are associated within the symmetric pair, since we are focused on "injection mapping". 9. Originally Posted by Shanks If |X| is less than or equal to 4, it is very easy. Otherwise, classify the elements into two groups: for the elements of the form (x,x) in X*X, associate with {x}.
for the element pairs (x,y) and (y,x), where x,y are distinct, in X*X, associate with the pair $\{x,y\} \text{ and }\{x,y\}^c$, no matter the detail of how they are associated within the symmetric pair, since we are focused on "injection mapping". That does not avoid the Axiom of Choice, because for each pair (x,y) and (y,x) you have to choose which of them will go to $\{x,y\}$ (with the other one going to $\{x,y\}^c$). 10. Originally Posted by Opalg That does not avoid the Axiom of Choice, because for each pair (x,y) and (y,x) you have to choose which of them will go to $\{x,y\}$ (with the other one going to $\{x,y\}^c$). I know what you mean. How about first proving the following proposition and then applying it to the problem: $\text{If }\{A_i:i\in I\}\text{ is a pairwise disjoint collection of sets, so is }\{B_i:i\in I\},$ $\text{and for each }i \in I,\text{ there is an injective mapping }f_i:A_i \to B_i,$ $\text{then there is an injective mapping }f\text{ from the union of }\{A_i:i\in I\}$ $\text{to the union of }\{B_i:i\in I\}.$ Still can't avoid the AoC. 11. If X is finite or countably infinite, then we can construct such an injective mapping by induction. Otherwise, we can't make it without using the Axiom of Choice. 12. Originally Posted by Shanks If X is finite or countably infinite, then we can construct such an injective mapping by induction. Otherwise, we can't make it without using the Axiom of Choice. What else do you think this discussion has been about?
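To make Shanks's construction tangible, here is a small sketch I've added (it is not from the thread): for a finite set, a fixed enumeration of X plays the role of the well-order, and the map F defined from it is checked to be injective. Nothing here avoids choice in general, since an arbitrary set comes with no such enumeration.

```python
# Shanks's map for a finite (hence trivially well-ordered) set X:
#   F(x, x) = {x}
#   F(x, y) = {x, y}        if x comes before y in the chosen order
#   F(x, y) = X \ {x, y}    if x comes after y
# Illustrative sketch only; as noted in the thread, for |X| <= 4 it can fail.
from itertools import product

X = ["a", "b", "c", "d", "e"]          # the list order serves as the well-order
pos = {x: i for i, x in enumerate(X)}

def F(x, y):
    if x == y:
        return frozenset({x})
    if pos[x] < pos[y]:
        return frozenset({x, y})
    return frozenset(X) - {x, y}

images = [F(x, y) for x, y in product(X, repeat=2)]
print(len(images), len(set(images)))   # equal counts => F is injective on X*X
```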
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9304382801055908, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/10736?sort=oldest
## Families of number fields of prime discriminant ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) When I am testing conjectures I have about number fields, I usually want to control the ramification, especially minimize to a single prime with tame ramification. Hence, I usually look for fields of prime discriminant (sometimes positive, sometimes negative). I get the feeling that I cannot be the only one who does this... And so, are there families of number fields of prime discriminant for each degree? Or at least degree 3 and 4? (They are the coolest. Except quadratics. Of course.) What about: given a prime - can I find a polynomial of degree d with the prime as its discriminant? - hum...but it seems we only have good formulas of discriminant for good (simple) field extensions. If we want to good (simple) discriminants, then the field extension itself would be quite complicated...no? – natura Jan 4 2010 at 21:59 I might be missing something, but doesn't Stickleberger's theorem imply that your final question is false for all primes of the form 4k+3? – Ben Linowitz Jan 4 2010 at 22:10 @basic: No. For instance $\mathbb{Q}(\zeta_p)/\mathbb{Q}$ (here $\zeta_p$ is a primitive $p$th root of unity) is a very simple extension for which a very precise formula for the discriminant is available. In particular, it is only ramified at $p$ and is tamely ramified there. It doesn't quite answer Dror's question, because there is at most one such extension of any given degree. – Pete L. Clark Jan 4 2010 at 22:11 @Ben Linowitz: sure, as written. Maybe he means $\pm$... – Pete L. Clark Jan 4 2010 at 22:11 1 @Pete, Yes that's what I mean. Good simple extensions like cyclotomic extensions have good formula of Discriminants, but if we want to have a discriminant as simple as a prime number, then the extension itself probably won't be very simple. – natura Jan 4 2010 at 22:22 ## 5 Answers Klueners Malle online might be just the thing you're looking for. Make your own lists! And here's some they made themselves, if you run out of ideas. - so e.g. reh.math.uni-duesseldorf.de/cgi-klueners/… gives you a list of number fields of degree 5 with prime power discriminant, and many of the later entries have prime discriminant. – Kevin Buzzard Jan 4 2010 at 22:26 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. John Jones' Tables http://hobbes.la.asu.edu/NFDB/ are my favorite. I think his data tables are the most complete online. Check for example in Klueners-Male tables for cubic fields of prime discriminant -3299, and you'll see that there are no results shown. However Jones' tables contains the 4 cubic fields with such discriminant. Now about your question on the primes, as Ben mentioned p must be 1 mod 4 so the question has some hope. Even for p that are "allow" to be discriminants there might not exist a field of fix degree d of discriminant p. For example, class field theory tells you that there is a cubic field of discriminant p if and only if the 3-Sylow part of $Cl(\mathbb{Q}(\sqrt{p}))$ is non-trivial. In fact you can even tell how many of them there are! As an example of this we can conclude that there are no cubic fields of discriminant p=-3, even though p is a fundamental discriminant. 
A similar analysis can be done for quartic fields, and I think for them the behavior is related to the 2-Sylow part of $Cl(\mathbb{Q}(\sqrt{p}))$. Also you might want to look at this paper of Jone's which I think is very close to your question http://hobbes.la.asu.edu/papers/OnePrimeJR.pdf - very interesting links! do you mind explaining this statement? thank you! "For example, class field theory tells you that there is a cubic field of discriminant p if and only if the 3-Sylow part of Cl(Q(\sqrt{p})) is non-trivial." – natura Jan 5 2010 at 4:25 2 @basic: I have written a paper that uses this, so instead of copy and paste, I better give you the link to it. math.wisc.edu/~mantilla/PagProb/pruebas.pdf What you are wondering can be found at the beginning of section 4. – Guillermo Mantilla Jan 5 2010 at 4:59 this sounds real cool, I didn't know of the relation to the 3 part... Btw, I mention in my question that positive and negative is fine, so negative 3 mod 4 is also fine. – Dror Speiser Jan 5 2010 at 5:45 yes -3 mod 4 is the same as 1 mod 4. Now, the impossibility I was referring to is p=2,3 mod 4. The point that Ben was trying to make in his comment is that the discriminant of a number field is always congruent to 0 or 1 mod 4(this is known as Stickleberger's theorem). In particular if a prime p is a discriminant, so the question has some hope to be answered, then p has to be 1 mod 4 (regardless if p is positive or negative). Hence, p = 2,3 mod 4 is never a possibility. – Guillermo Mantilla Jan 5 2010 at 17:19 Usually p=2,3 mod 4 refers to a positive prime, and it really does seem that that is what Ben meant, in which case, negative of a prime 3 mod 4 is fine. Hence my comment, as well as Pete's. – Dror Speiser Jan 5 2010 at 19:11 A recent paper of Bhargava and Ghate discusses the enumeration of quartic fields of prime discriminant (see section 7). - I was not familiar with Bhargava or his thesis work, but I must say, as I am reading his series of papers on composition laws, that it is awesome! So glad I posted this question... – Dror Speiser Jan 5 2010 at 20:04 As for your second question: if $f$ is an irreducible polynomial woth degree $n$ and prime discriminant (actually, squarefree discriminant is sufficient), then the roots of $f$ generate an extension with Galois group $S_n$ (thus these examples are not necessarily the best candidates for testing conjectures since you miss out on all the more interesting Galois groups). Since $S_n$ is not solvable for $n \ge 5$, class field theory is your friend only for $n = 2, 3, 4$. • For $n = 2$, the situation is trivial. • For $n = 3$, there is a number field with prime discriminant $p$ if and only if the quadratic field with discriminant $p$ has class number divisible by $3$. • For $n = 4$, there is an $S_4$-extension with prime discriminant if and only if there is a quadratic number field with discriminant $p$ and class number divisible by $3$ such that one of its unramified cubic extensions has class number divisible by $2$ (and thus necessarily by $4$). There's a very nice article by Shanks (A survey of quadratic, cubic and quartic algebraic number fields (from a computational point of view), Proc. 7th southeast. Conf. Comb., Graph Theory, Comput.; Baton Rouge 1976, 15-40 (1976)) where you will find more. 
Scholz, during the 1930s, showed how to construct (using class field theory, which means you will not get generators, just the existence) of Galois groups with small solvable groups; in his construction, the number of ramified primes can be controlled. - Then, who is my friend at $n>=5$ and $G=S_n$? Also, for 3 or 4, is there a fast computational method to find a class of order 3 in the quadratic field? Would be interesting only if it didn't involve computing the whole class group. – Dror Speiser Feb 16 2010 at 20:49 "if $f$ is an irreducible polynomial woth degree $n$ and prime discriminant (actually, squarefree discriminant is sufficient), then the roots of $f$ generate an extension with Galois group $S_n$" How does one prove this? – norondion Apr 14 2010 at 18:37 This goes back to Arnold Scholz (1934); a recent reference is Nakagawa, "On the Galois group of a number field with square free discriminant", Comm. Math. Univ. St. Pauli 37 (1988), 95-99 – Franz Lemmermeyer Apr 26 2010 at 12:58 @Franz: Just reading the review on Nakagawa's paper on Mathscinet, I see he did the case when the discriminant of the field is a prime. However it is not clear to me that when the polynomial has prime discriminant, its splitting field would have prime discriminant. (while I am waiting for my interlibrary requested article) Would you be a little more precise as did Nakagawa/Scholz do that? – Ying Zhang May 2 2010 at 18:01 I am going to annoy all the people who answered above, but I am pretty sure the answer to Dror's question is basically no. In particular, is it known that there are infinitely many cubic fields of prime discriminant? I have not heard of such a result -- if one is out there then I would be extremely grateful if someone would share the appropriate references with me. As is pointed out above, there is a classical correspondence between such fields and subgroups of $Cl(\mathbb{Q}(\sqrt{p}))$ of index 3. However, I'm not aware that this makes the question easier to answer. There is also the work of Bhargava and Ghate, Delone-Faddeev, Davenport-Heilbronn, etc. which says that cubic (and quartic and quintic) fields are parameterized by integral orbits on nice prehomogeneous vector spaces which meet certain local conditions. For example, in the cubic case, cubic rings are parameterized by integral binary cubic forms up to $GL_2(\mathbb{Z})$ equivalence, and maximal cubic orders are those cubic rings which meet a certain local condition at each prime. This allows you to prove formulas for the number of cubic fields with $Disc(K) < X$ with good error terms, and this works if you ask for the condition $d | Disc(K)$. This allows you to run a sieve. However sieves are notoriously bad at finding primes! The information above is also essentially available in the twin prime problem, but all we can prove there is that there are infinitely many primes $p$ so that $p + 2$ has at most two prime factors. You can use this argument to find cubic fields with three (I think) prime factors -- there is a paper of Belabas and Fouvry that does this. Maybe you could push their arguments a little bit better. But one cannot hope to find primes this way. Of course there are excellent computational results, and I don't want to take anything away from these. But I feel like the question is asking if there are infinite families, and I'm pretty sure this is widely expected but not at all known. 
- You can find infinitely many cubic fields of prime discriminant if you assume a conjecture of Hardy and Littlewood on primes in quadratic progressions -- just take the "simplest cubic fields" where the quadratic function giving the discriminant represents a prime value. – Cam McLeman Nov 17 2010 at 2:07 Simplest cubic fields have a square discriminant. – Dror Speiser Nov 17 2010 at 7:00 Oops. . – Cam McLeman Nov 17 2010 at 12:43 1 @Frank: I wonder how far Heath-Brown's results on primes represented by binary cubic forms can be extended to proving infinitely many primes represented by the discriminant form (of a general cubic). Sure - the degree goes up by one, but the number of variables goes up by two! – Dror Speiser Nov 17 2010 at 20:02 Dror - interesting suggestion! I'm not familiar enough with Heath-Brown's results to hazard a guess. – Frank Thorne Nov 18 2010 at 1:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9358218908309937, "perplexity_flag": "head"}
http://superweak.wordpress.com/2007/12/
# Charm &c. Or, fun from building 54 ## Information Entropy and Experiments ### December 24, 2007 There’s a new paper out (arXiv:0712.3572) which aims to provide a “figure of merit” for proposed experimental programs. It revolves around the concept of information entropy – an old concept from communication/information theory developed by Claude Shannon. The basics of entropy: a communicated “symbol” – a letter or a word of a text, for example – carries information content that increases as it becomes less likely (more surprising). Intuitively this makes sense: if you know exactly what you’re going to hear (say, an airline safety announcement), you tune out because there’s no information transfer, while you pay the most attention when you can’t anticipate what’s next. Mathematically, the information content of a received symbol $x$ with a probability $p(x)$ of occurring is $-\log p(x)$. Note that this sweeps the meaning of “information” into $p(x)$; a string of digits may seem completely random (and thus each one has information $-\log_{10} 0.1 = 1$), but if you know it happens to be $\pi$ starting from the 170th decimal place, suddenly you can predict all the digits and the information content is essentially zero. What we would like to get is an expectation value (average) of the transmitted information: you’d like to transmit the maximum content per symbol. The expectation value – entropy – is $H = \sum_x -p(x) \log p(x).$ The logarithm factor means that transmitting an occasional highly unlikely symbol is less useful than symbols which appear at roughly equal rates – for two symbols, you get more entropy out of both appearing with a 50% probability than one at 99% and the other at 1%. How does this relate to physics experiments? The author suggests that the proper figure of merit for an experiment (or analysis) is the expected information gain from it – or, perhaps, the information per dollar. The symbols are replaced by outcomes, like “observation/nonobservation of the Standard Model Higgs boson.” The $p(x)$ function is obtained from our a priori theoretical biases, so for example “confirmation of Standard Model” or “discovery of low-scale supersymmetry” carry relatively high probabilities. This leads to results he considers at odds with conventional wisdom – for example, the search for single top production, a well-predicted Standard Model process that everyone expects to be there, has low entropy (since there’s one large and one small probability), while a low-energy muon decay experiment which has good sensitivity to supersymmetry has high entropy (people think SUSY has a reasonable chance of being realized). There’s an additional wrinkle that in general you get more entropy by having more symbols/results (in this case the log factor helps you); so the more possible outcomes an experiment has, the more information content you expect. In particular this means that global analyses of the author’s VISTA/SLEUTH type, where you try to test as many channels as possible for departures from the Standard Model, get a boost over dedicated searches for one particular channel. It’s an interesting and thought-provoking paper, although I have a few concerns. The main one is that the probabilities $p(x)$ are shockingly Bayesian: they are entirely driven by current prejudice (unlike the usual case in communication theory, where things are frequentist). Recall that there’s not much entropy in experiments which have one dominantly probable outcome.
On the other hand, should an extremely unlikely outcome be found, the information content of that result is large. (The author determines the most significant experimental discoveries in particle physics since the start of the 70s to be those of the τ and J/ψ. I think this implies that Mark I was the most important experiment of the last four decades.) We are thus in the paradoxical situation that the experiments that produced the most scientific content, by this criterion, are also the ones with the least a priori entropy. The J/ψ was discovered at experiments that weren’t designed specifically to search for it! How does one compare merit between experiments? We hope the LHC can provide more than a binary yes/no on supersymmetry, for example; if it exists, we would try to measure various parameters, and this would be much more powerful than rare decay experiments that would essentially have access to one or two branching fractions. The partitioning of the space of experimental outcomes has to be correctly chosen for the entropy to be computed, and the spaces for two different experiments may be totally incommensurable. (It’s a bit simpler if you look at everything through “beyond the Standard Model” goggles; with those on, your experiment either finds new physics, or it doesn’t.) My last major complaint is that the (practical) scientific merit of certain results may be misstated by this procedure (though this is a gut feeling). The proposed metric may not really account for how an experiment’s results fit into the larger picture. Certain unlikely results – the discovery of light Higgs bosons in Υ decays, electroweak-scale quantum gravity, or something similar – would radically change our theoretical biases, and hence our expectations for other experiments. This is a version of the $\pi$ digit problem above; external information can alter your $p$ function in unanticipated ways. It’s unclear to me whether this can be handled in a practical manner, though I can’t claim to be an expert in this statistical realm. In short: interesting idea, but I would be wary of suggesting that funding agencies use it quite yet. Posted in Particle Physics, Physics, Statistics | 8 Comments »
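For concreteness, a small script (my addition, not from the post) computing the Shannon entropy $H = -\sum_x p(x)\log_2 p(x)$ for a few outcome distributions; it reproduces the 50/50 vs. 99/1 comparison above and shows how adding more possible outcomes raises the attainable entropy.

```python
# Shannon entropy of a discrete outcome distribution, in bits.
import math

def entropy(probs):
    assert abs(sum(probs) - 1.0) < 1e-9
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))                 # 1.0 bit: two equally likely outcomes
print(entropy([0.99, 0.01]))               # ~0.08 bits: one nearly certain outcome
print(entropy([0.25] * 4))                 # 2.0 bits: more outcomes, more entropy
print(entropy([0.97, 0.01, 0.01, 0.01]))   # several unlikely outcomes still add a little
```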
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 11, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9269020557403564, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2008/08/07/the-complex-numbers/?like=1&source=post_flair&_wpnonce=ae4d6bb289
# The Unapologetic Mathematician ## The Complex Numbers Yesterday we defined a field to be algebraically closed if any polynomial over the field always has exactly as many roots (counting multiplicities) as we expect from its degree. But we don’t know a single example of an algebraically complete field. Today we’ll (partially) remedy that problem. First, remember the example we used for a polynomial with no roots over the real numbers. That is $p=X^2+1$. The problem is that we have no field element whose square is $-1$. So let’s just postulate one! This, of course, has the same advantages as those of theft over honest toil. We write our new element as $i$ for “imaginary”, and throw it in with the rest of the real numbers $\mathbb{R}$. Okay, just like when we threw in $X$ as a new element, we can build up sums and products involving real numbers and this new element $i$. But there’s one big difference here: we have a relation that $i$ must satisfy. When we use the evaluation map we must find $\mathrm{ev}(X^2+1,i)=0$. And, of course, any polynomial which includes $X^2+1$ as a factor must evaluate to ${0}$ as well. But this is telling us that the kernel of the evaluation homomorphism for $i$ contains the principal ideal $(X^2+1)$. Can it contain anything else? If $q\in\mathbb{R}[X]$ is a polynomial in the kernel, but $q$ is not divisible by $X^2+1$, then Euclid’s algorithm gives us a greatest common divisor of $q$ and $X^2+1$, which is a linear combination of these two, and must have degree either ${0}$ or ${1}$. In the former case, we would find that the evaluation map would have to send everything — even the constant polynomials — to zero. In the latter case, we’d have a linear factor of $X^2+1$, which would be a root. Clearly neither of these situations can occur, so the kernel of the evaluation homomorphism at $i$ is exactly the principal ideal $(X^2+1)$. Now the first isomorphism theorem for rings tells us that we can impose our relation by taking the quotient ring $\mathbb{R}[X]/(X^2+1)$. But what we just discussed above further goes to show that $(X^2+1)$ is a maximal ideal, and the quotient of a ring by a maximal ideal is a field! Thus when we take the real numbers and adjoin a square root of $-1$ to get a ring we might call $\mathbb{R}[i]$, the result is a field. This is the field of “complex numbers”, which is more usually written $\mathbb{C}$. Now we’ve gone through a lot of work to just add one little extra element to our field, but it turns out this is all we need. Luckily enough, the complex numbers are already algebraically complete! This is very much not the case if we were to try to algebraically complete other fields (like the rational numbers). Unfortunately, the proof really is essentially analytic. It seems to be a completely algebraic statement, but remember all the messy analysis and topology that went into defining the real numbers. Don’t worry, though. We’ll come back and prove this fact once we’ve got a bit more analysis under our belts. We’ll also talk a lot more about how to think about complex numbers. But for now all we need to know is that they’re the “algebraic closure” of the real numbers, we get them by adding a square root of $-1$ that we call $i$, and we can use them as an example of an algebraically closed field. One thing we can point out now, though, is the inherent duality of our situation. You see, we didn’t just add one square root of $-1$. 
Indeed, once we have complex numbers to work with we can factor $X^2+1$ as $(X-i)(X+i)$ (test this by multiplying it out and imposing the relation). Then we have another root: $-i$. This is just as much a square root of $-1$ as $i$ was, and anything we can do with $i$ we can do with $-i$. That is, there’s a symmetry in play that exchanges $i$ and $-i$. We can pick one and work with it, but we must keep in mind that whenever we do we’re making a non-canonical choice. ## 10 Comments » 1. Of course, the extent to which singling out $i$ is non-canonical in $\mathbb{C}$, constructed this way, is no more and no less than the extent to which singling out $X$ is non-canonical in $\mathbb{R}[X]$. Comment by Sridhar Ramesh | August 8, 2008 | Reply 2. This is true: there are some great examples of situations where it’s useful to view certain algebras of polynomials (subalgebras of the full polynomial algebra) are actually isomorphic to full polynomial algebras in other collections of variables. Comment by | August 8, 2008 | Reply 3. [...] Properties of Complex Numbers Today I’ll collect a few basic properties of complex numbers. [...] Pingback by | August 8, 2008 | Reply 4. In your first sentence, “Yesterday we defined a field to be algebraically closed if it always has exactly as many roots (counting multiplicities) as we expect from its degree,” the first use of the word “it” should perhaps be replaced by something like “any polynomial over it”. Comment by John Palmieri | August 8, 2008 | Reply 5. Sorry, you’re right. distractions abound. Comment by | August 8, 2008 | Reply 6. [...] Okay, we know that we can factor any complex number into linear factors because the complex numbers are algebraically closed. But we also know that real polynomials can have too few roots. Now, there are a lot of fields out [...] Pingback by | August 14, 2008 | Reply 7. [...] of last time, we can write this as and find two solutions — and — by taking the two complex square roots of . But the equation doesn’t use any complex numbers. Surely we can find real-valued [...] Pingback by | October 13, 2008 | Reply 8. [...] Until further notice, I’ll be assuming that the base field is algebraically closed, like the complex numbers [...] Pingback by | February 2, 2009 | Reply 9. [...] of finite dimension over an algebraically closed field . If you want to be specific, use the complex numbers [...] Pingback by | March 4, 2009 | Reply 10. [...] Numbers and the Unit Circle When I first talked about complex numbers there was one perspective I put off, and now need to come back to. It makes deep use of [...] Pingback by | May 26, 2009 | Reply « Previous | Next » ## About this weblog This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”). I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
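Returning to the construction described in the post, here is a short SymPy sketch (my own illustration, not the author's): every polynomial is reduced modulo $X^2+1$, the class of $X$ squares to $-1$, and $X^2+1$ factors as $(X-i)(X+i)$ once $i$ is available.

```python
# Arithmetic in R[X]/(X^2 + 1): reduce every polynomial mod X^2 + 1.
# Illustrative sketch of the quotient-ring construction described above.
from sympy import symbols, rem, expand, I, factor

X = symbols("X")
modulus = X**2 + 1

def reduce_mod(p):
    """Representative of p in R[X]/(X^2 + 1): remainder on division by X^2 + 1."""
    return rem(expand(p), modulus, X)

print(reduce_mod(X * X))                      # -1: the class of X is a square root of -1
print(reduce_mod((3 + 2*X) * (1 - 4*X)))      # 11 - 10*X, i.e. (3+2i)(1-4i) = 11 - 10i
print(expand((3 + 2*I) * (1 - 4*I)))          # SymPy's built-in complex arithmetic agrees
print(factor(modulus, extension=I))           # (X - I)*(X + I)
```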
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 43, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9544108510017395, "perplexity_flag": "head"}
http://www.msri.org/seminars/20058
# Mathematical Sciences Research Institute # Seminar: Commutative Algebra and Algebraic Geometry (Eisenbud Seminar) April 30, 2013 (03:45pm PDT - 06:00pm PDT) Location: UC Berkeley Commutative Algebra and Algebraic Geometry Seminar Tuesdays, 3:45-6:00, 939 Evans Hall Organized by David Eisenbud More information at http://hosted.msri.org/alg/ 3:45 PM Quantitative Cocycles Speaker: Persi Diaconis Abstract: Choosing sections for natural maps is a basic mathematical activity. In joint work with Soundararajan and Shao we study the problem of choosing "nice" sections. Here is a typical theorem: Let $G$ be a finite group, $H$ a normal subgroup. Let $X$ be coset representatives for $H$ in $G$. Suppose the proportion of $x,y$ in $X$ with $xy \in X$ is more than $1-1/60$. Then the extension splits. There are many variations, many open problems and applications to basic arithmetic and computer science. 5:00 PM Clifford Algebras and the Ranks of Modules in Free Resolutions Speaker: David Eisenbud Abstract: A conjecture of Horrocks, and independently of Buchsbaum and myself, asserts that the sum of the ranks of the modules in the free resolution of a module annihilated by a regular sequence of length $c$ is at least $2^{c}$. I'll describe a related new conjecture by Irena Peeva and myself and show how we proved a special case, in our work on complete intersections, using results on Clifford algebras and enveloping algebras.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.843366265296936, "perplexity_flag": "middle"}
http://electronics.stackexchange.com/questions/29535/energy-in-capacitors
# energy in capacitors The energy stored in a capacitor is $$U= \dfrac{1}{2} CV^2$$ So when I have a 1F supercap charged to 1V the energy is 0.5 J. When I connect a second supercap, also 1F in parallel the charge will distribute and the voltage will halve. Then $$U = \dfrac{1}{2} 2F (0.5V)^2 = 0.25 J$$ What happened to the other 0.25 J? - @W5VO: How's that? I don't see anything about losses in the equations. – Federico Russo Apr 9 '12 at 15:00 W5Vo: You are forgetting that charge must also be conserved. – Olin Lathrop Apr 9 '12 at 15:43 @OlinLathrop Yes, you're right. – W5VO♦ Apr 9 '12 at 15:59 Federico, given a spherical cow on a frictionless surface :) (which is what your math is doing), why do you assume the voltage will end up at 1/2V? If the charge is constant, I'd imagine both caps would settle in at something more like 0.71V ... preserving the stored energy. – insta Jul 11 '12 at 18:47 @insta: just try it. You'll see that it's V/2. – Federico Russo Jul 12 '12 at 5:22 ## 3 Answers You moved energy from one place to another and you can't do that unpunished. If you connected the two capacitors via a resistor the 0.25J went as heat in the resistor. If you just shorted the caps together much of the energy will have radiated in the spark, the rest again is lost as heat in the internal resistances of the capacitors. further reading Energy loss in charging a capacitor - I'll add that since the equalizing process is spontaneous, it must happen at the expense of energy. As in the water analogy, if you split the water between two containers placed at the same height, the average height of it will be lower, which means less potential energy (mgh). – clabacchio♦ Apr 9 '12 at 11:56 @clabacchio - your "less potential energy" doesn't show the energy loss, just like the energy loss isn't obvious from the lower voltage without the formula. – stevenvh Apr 9 '12 at 12:31 I know, it wasn't meant to be a rigorous demonstration, just to show that the less energy is justified by the fact that "entropy", or disorder, is increased and that decreases the energy. – clabacchio♦ Apr 9 '12 at 13:07 "you can't do that unpunished". Why not? Laws of thermodynamics? – Federico Russo Apr 9 '12 at 13:51 @Federico - Yes, the first. You have to perform work (energy) to move energy in or out a closed system (the capacitor). – stevenvh Apr 9 '12 at 14:00 show 3 more comments Suppose we had two nice and perfect 1 F capacitor. These have no internal resistance, no leakage, etc. If one cap is charged to 1 V and the other at 0 V, then it's hard to see what really happens if they were connected because the current would go infinite. Instead, let's connect them with a inductor. Let this be another ideal perfect part with no resistance. Now everything behaves nicely and can be calculated. Initially, the 1 V difference starts current flowing in the inductor. This current will increase until the two caps reach the same voltage, which is 1/2 V. Now you've got 1/8 J in one cap and 1/8 J in the other cap for a total of 1/4 J as you said. However, now we can see where the extra energy went. The inductor current is maximum at this point, and the remaining 1/4 J is stored in the inductor. If we kept everything connected, energy would slosh back and forth between the two caps and the inductor forever. The inductor acts like a flywheel for current. When the caps reach equal voltage, the inductor current is at its maximum. The inductor current will continue, but now will decrease due to reverse voltage accross it. 
The current will continue until the first cap is at 0 V and the second at 1 V. At that point, all the energy has been transferred to the second cap and none is in the first cap or the inductor. Now we are at the same point we started at except that the caps are reversed. Hopefully you can see that the 1/2 J of energy will continue to slosh back and forth forever with the cap voltages and the inductor current being sine waves. At any one point, the energies of the two caps and the inductor add to the 1/2 J we started with. Energy is not lost, just constantly moved around. ## Added: This is to more directly answer your original question. Suppose you connected the two caps with a resistor in between. The voltage on both caps will be a exponential decay towards the 1/2 V steady state as before. However, there was current thru the resistor which heated it. Obviously you can't use some of the original energy to heat the resistor and end up with the same amount. To explain this in terms of Russell's water tank analogy, instead of opening a valve between the two tanks you could put a small turbine in line. You can extract energy from that turbine as it is driven by the water flowing between the two tanks. Obviously that means the end state of the two tanks can't contain as much energy as the initial state since some was extracted as work via the turbine. - 1 And, considering that any closed loop is in fact an inductor, this even happens when you directly connect two idealized capacitors. – leftaroundabout Apr 9 '12 at 15:00 Another thing to note is that while one can't directly calculate the power loss in the case of zero resistance and zero inductance, one may observe that for any non-zero amount of resistance, the amount of energy lost will asymptotically approach half of the original amount. When the inductance is zero, the time required to lose any particular fraction of that energy will be inversely proportional to the resistance. Thus, an infinitesimal resistance will dissipate half the energy in the cap in an infinitesimal amount of time. – supercat Apr 9 '12 at 20:42 The transfer is lossy - whether by I^2R drop in the connecting circuit or electromagnetic energy radiation or spark or other coupling. That this is so is shown a priori by the fact that you know what the end result must be (V/2 each) and that this must result in an energy decrease using any "normal" connecting method. If you use near perfect wire you get near infinite currents. Every time you have the wire resistance you get double the current and losses increase linearly with decreasing resistance (decrease with R, increase with I^2). You can get a different result using an "abnormal" method. If you use an ideal buck converter it will take Vin x Iin at the input and convert it to the "correct" Vout x Iout at the output to allow no resistive or other losses. The result is easily determined but non intuitive. Making the buck converter non-ideal can give you a result in the 95% - 99% of theoretical range. As we have 0.5 Joule in a 2 Farad capacitor at the end of the process we know that U = 0.5 x C x V^2 so 0.5 = 0.5 x 2 x V^2 So V = sqrt(0.5) - 0.7071 Volts. We can try that again using just one of the capacitors. As we have 0.5 J initially we get 0.25 J in one cap at the end. 0.25 = 0.5 x 1 x V^2 V = sqrt(0.5) = 0.7071 V (again) as expected. At first glance I thought the water tank analogy was wrong in this case, but it also works quite well for part of the problem. 
The difference is that, while we can model the lossy case well enough, the loss free case does not make sense physically. ie A 10,000 litre tank 4 metres tall has energy of 0.5mgh. h is average height = 2 metres. Lets's have g=10 (MASCON nearby :-) ). 1 litre weighs 1 kg. E = 0.5mgh = 0.5 x 10000 x 10 x 2 = 100 kJ Now siphon half the water into a second identical tank. New depth = 2m. New average depth = 1 m. New content = 5000 litre Per tank energy = 0.5mgh = 0.5 x 5000 x 10 x 1 = 25,000 Joule Energy in 2 tanks = 2 x 25 000 J = 50 kJ. Half of our energy has gone missing. With a "water buck converter" each tank would be 70.71% full and we'd have made more water. On this aspect the model fails. Unfortunately :-). -
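To back up the answers above numerically, here is a small simulation I've added (not part of the thread): two ideal 1 F capacitors, initially at 1 V and 0 V, connected through a resistor $R$. Whatever value $R$ takes, the integrated $I^2R$ loss comes out to 0.25 J, i.e. exactly the "missing" half of the initial 0.5 J.

```python
# Two 1 F capacitors (1 V and 0 V) connected through a resistor R.
# Forward-Euler integration of the equalization; I^2*R loss is ~0.25 J for any R.
C1 = C2 = 1.0                        # farads
for R in (0.01, 1.0, 100.0):         # ohms
    v1, v2 = 1.0, 0.0
    dissipated = 0.0
    tau = R * (C1 * C2) / (C1 + C2)  # time constant of the equalization
    dt = tau / 1e4
    for _ in range(200_000):         # run for about 20 time constants
        i = (v1 - v2) / R
        dissipated += i * i * R * dt
        v1 -= i * dt / C1
        v2 += i * dt / C2
    stored = 0.5 * C1 * v1**2 + 0.5 * C2 * v2**2
    print(f"R={R:7.2f}  V={v1:.3f}/{v2:.3f}  stored={stored:.4f} J  lost={dissipated:.4f} J")
```

Only the time scale of the transfer depends on $R$; the fraction of energy lost does not, which is the point made in all three answers.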
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9466915726661682, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/26271-three-trigo-problems.html
# Thread: 1. ## three trigo problems three trigo problems Attached Thumbnails 2. Hello, afeasfaerw232312331 22) Prove that in $\Delta ABC$: (a) $\cot\frac{A}{2} \:=\:\tan\left(\frac{B}{2}+\frac{C}{2}\right)\text{, by using the identity: }\cot x \:=\:\tan(90^o-x)$ (b) Hence show that: . $\cot\frac{A}{2} + \cot\frac{B}{2} + \cot\frac{C}{2} \;=\;\cot\frac{A}{2}\cdot\cot\frac{B}{2}\cdot\cot\frac{C}{2}$ You solved part (a) . . . Nice work! (b) We have: . $\cot\frac{A}{2} \;\;=\;\;\tan\left(\frac{B}{2} \:+ \:\frac{C}{2}\right) \quad\Rightarrow\quad\frac{1}{\tan\frac{A}{2}} \;\;=\;\;\frac{\tan\frac{B}{2} \:+ \:\tan\frac{C}{2}}{1-\tan\frac{B}{2}\cdot\tan\frac{C}{2}}$ . . . . $1 \:- \:\tan\frac{B}{2}\cdot\tan\frac{C}{2} \;\;=\;\;\tan\frac{A}{2}\cdot\tan\frac{B}{2} \:+ \:\tan\frac{A}{2}\cdot\tan\frac{C}{2}$ . . . . $\tan\frac{B}{2}\cdot\tan\frac{C}{2} \:+ \:\tan\frac{A}{2}\cdot\tan\frac{C}{2} \:+ \:\tan\frac{A}{2}\cdot\tan\frac{B}{2} \;\;=\;\;1$ . . . . $\frac{1}{\cot\frac{B}{2}\cdot\cot\frac{C}{2}} \:+ \:\frac{1}{\cot\frac{A}{2}\cdot\cot\frac{C}{2}} \:+ \:\frac{1}{\cot\frac{A}{2}\cdot\cot\frac{B}{2}} \;\;=\;\;1$ Multiply through by: . $\cot\frac{A}{2}\cdot\cot\frac{B}{2}\cdot\cot\frac{C}{2}$ . . . . $\cot\frac{A}{2} \:+ \:\cot\frac{B}{2} \:+ \:\cot\frac{C}{2} \;\;=\;\;\cot\frac{A}{2}\cdot\cot\frac{B}{2}\cdot\cot\frac{C}{2}$ 3. Hello again, afeasfaerw23231233! 26. $\Delta ABC$ is equilateral with $AB = a.$ $M$ is a point outside the triangle such that: . $\angle AMB = 20^o,\;\angle AMC = 30^o,\;\angle BAM = \alpha.$ (a)(i) By considering $\Delta ABM$, express $AM$ in terms of $a\text{ and }\alpha.$ Your answer is correct, but I would write it this way: . $AM \:=\:\frac{a\cdot\sin[180^o -(20^o+\alpha)]}{\sin20^o} \quad\Rightarrow\quad AM\;=\; \frac{a\cdot\sin(20^o+\alpha)}{\sin20^o}$ .[1] (a)(ii) By considering $\Delta ACM$, express $AM$ in terms of $a\text{ and }\alpha.$ Since $\angle BAC = 60^o,\;\angle MAC \:=\:60^o-\alpha$ . Then: . $\angle MCA \:=\:180^o - 30^o - (60^o-\alpha) \;=\;90^o + \alpha$ We have: . $\frac{AM}{\sin(90^o+\alpha)} \:=\:\frac{a}{\sin30^o} \quad\Rightarrow\quad AM \;=\;\frac{a\cdot\sin(90^o+\alpha)}{\frac{1}{2}}$ . . $AM \;=\;2a\cdot\sin(90^o+\alpha)\quad\Rightarrow\quad AM \;=\;2a\cdot\cos\alpha$ .[2] (b) Hence, find $\alpha.$ Equate [1] and [2]: . $\frac{a\cdot\sin(20^o + \alpha)}{\sin20^o} \;=\;2a\cdot\cos\alpha$ . . . . . . $\sin(20^o+\alpha) \;=\;2\cdot\sin20^o\cos\alpha$ . . $\overbrace{\sin20^o\cos\alpha + \sin\alpha\cos20^o} \;=\;2\cdot\sin20^o\cos\alpha$ . . $\underbrace{\sin\alpha\cos20^o - \sin20^o\cos\alpha} \;=\;0$ . . . . . . $\sin(\alpha-20^o) \;=\;0$ . . . . . . . $\alpha - 20^o \;=\;0^o$ . . . . . . . . $\alpha \:=\:20^o$ 4. I thought that the diagram was a 3D pyramid-like solid! So I spent a great deal of time and could not do (a)(ii). Now I know it is a 2D diagram. Thanks
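A quick numerical spot-check of the two results above (my addition, assuming nothing beyond the identities as stated): the cotangent identity is tested on a random triangle, and $\alpha = 20^o$ is confirmed to equate the two expressions for $AM$.

```python
# Numerical spot-checks of the two results worked out above.
import math, random

# (1) cot(A/2) + cot(B/2) + cot(C/2) = cot(A/2)*cot(B/2)*cot(C/2) in any triangle.
random.seed(1)
A = random.uniform(10, 150)
B = random.uniform(5, 170 - A)
C = 180 - A - B
cot = lambda deg: 1 / math.tan(math.radians(deg))
lhs = cot(A / 2) + cot(B / 2) + cot(C / 2)
rhs = cot(A / 2) * cot(B / 2) * cot(C / 2)
print(abs(lhs - rhs) < 1e-9)

# (2) alpha = 20 degrees makes a*sin(20+alpha)/sin(20) equal 2a*cos(alpha) (take a = 1).
alpha = 20.0
lhs = math.sin(math.radians(20 + alpha)) / math.sin(math.radians(20))
rhs = 2 * math.cos(math.radians(alpha))
print(abs(lhs - rhs) < 1e-9)
```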
http://mathhelpforum.com/differential-geometry/205619-radius-convergence-complex-power-series.html
# Thread: Radius of Convergence for Complex Power Series

1. ## Radius of Convergence for Complex Power Series

I'm trying to find the radius of convergence of a complex power series, and I can see that I can use the ratio test (i.e. the limit of a_{n}/a_{n+1} exists). However, the first coefficient is a_{0} = 0 (but a_{n} is nonzero for all other n).

Is it true that the radius of convergence of a power series summed from 0 to infinity is the same as the radius of convergence of the same power series summed from m to infinity (for some arbitrary natural number m)? That is, is the radius of convergence independent of where you choose to start your summation from? This seems logical to me but I'm not 100% certain. Thanks for any help!

3. ## Re: Radius of Convergence for Complex Power Series

Originally Posted by Ant
Is it true that the radius of convergence of a power series summed from 0 to infinity is the same as the radius of convergence of the same power series summed from m to infinity (for some arbitrary natural number m)? That is, is the radius of convergence independent of where you choose to start your summation from? This seems logical to me but I'm not 100% certain.

Be certain. It is true. It's completely analogous to the fact that, for any positive integer m, the first m terms of a series of real numbers have no bearing on whether or not the infinite series of those real numbers converges (because they don't affect the convergence/non-convergence of the limit of the partial sums). There are two ways to see this, one of which you stated:

1) The ratio test will be $\lim_{n \to \infty} \left\lvert \frac{a_{n+1}}{a_n} \right\rvert \lvert z - z_0 \rvert,$ so the first m terms of $\{a_n(z-z_0)^n \}_{n \in \mathbb{N}}$ are irrelevant. Also, remember that the ratio test isn't the definition of convergence (hence nor of the radius of convergence). It's just a test, and so when it doesn't apply, you'll need some other argument to determine the radius of convergence. Would you dispute that $\cos(z) = \sum_{n = 0}^{\infty} \frac{(-1)^n}{(2n)!}z^{2n}$ converges everywhere, simply because every other term there is 0, and so a naively direct application of the ratio test can't apply?

2) Consider it as $f(z) = p_m(z-z_0) + \sum_{n = m+1}^{\infty} a_n(z-z_0)^n$, where $p_m$ is a polynomial of degree m. Clearly the radius of convergence of $f$ should be the same as the radius of convergence of $f - p_m$, since $p_m$ has nothing to do with the convergence of the power series. Yet an additional perspective is that, since $p_m$ is entire, and since the radius of convergence is the minimal distance from $z_0$ to a singularity of $f$, that must be the same distance as it is for $f - p_m$.
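A small numeric sketch of point 2) (mine, not from the thread): dropping the first $m$ terms changes every partial sum by the same fixed polynomial value $p_m(z)$, so it cannot affect where the series converges. The coefficients $a_n = 1/2^n$ with $a_0$ set to $0$ are an arbitrary example whose radius of convergence is 2.

```python
# Partial sums of a power series with a_0 = 0 (centered at z_0 = 0): dropping
# the first m terms only subtracts the fixed polynomial p_m(z).
def partial_sum(coeffs, z, N, start=0):
    return sum(coeffs[n] * z**n for n in range(start, N))

coeffs = [0.0] + [1.0 / 2**n for n in range(1, 100)]   # a_0 = 0, radius of convergence 2
z = 1.5 + 0.3j                                         # a test point inside the radius
m = 5

full = partial_sum(coeffs, z, 100)
tail = partial_sum(coeffs, z, 100, start=m)
poly = partial_sum(coeffs, z, m)                       # p_m(z), a fixed polynomial value

print(abs(full - (tail + poly)))                       # ~0: the series differ only by p_m
```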
http://physics.stackexchange.com/questions/27386/is-the-distinction-between-the-poincare-group-and-other-internal-symmetry-groups
# Is the distinction between the Poincaré group and other internal symmetry groups artificial?

For instance, given a theory and a formulation thereof in terms of a principal bundle with a Lie group $G$ as its fiber and spacetime as its base manifold, would a principal bundle with the Poincaré group as its fiber and $\mathcal{M}$ as its base manifold, where $\mathcal{M}$ is a manifold the group of whose isometries is $G$, lead to an equivalent formulation? Why? Why not? On a related note, can any Lie group be realized as the group of isometries of some manifold? -

5 On the related note, the answer is trivially true and not very interesting. Any finite-dimensional Lie group is a group of isometries of any left-invariant metric on its underlying manifold. – José Figueroa-O'Farrill Nov 8 '11 at 12:02

1 Thank you. Yes, that is trivial indeed. Sorry, didn't see that. – Arpan Saha Nov 8 '11 at 14:32

I don't understand why you expect this sort of duality to exist. Normally, switching spacetime and the target space in a non-linear sigma model leads to something completely different. Mappings from M to N are one thing, mappings from N to M completely another. – Squark Nov 12 '11 at 12:13

I am just wondering - isn't trying to write a theory with the Poincaré group as the fibre on the space-time the "same" as doing Einstein's gravity? Of course the latter part of the question doesn't make sense to me - I mean in the usual theory of connections on some G-bundle (i.e. Yang-Mills theory!) I don't see how the gauge group is acting as an isometry on the space-time!? That doesn't look right at all. – user6818 Jan 22 '12 at 22:00

I would think that the vielbein language of gravity is sort of the correct formulation in which these two pictures are manifest, in the sense that $G \times \text{Poincaré group}$ is the local gauge group of a Yang-Mills theory with the gauge group $G$ on a space-time. I am not sure. I would love to be corrected! – user6818 Jan 22 '12 at 22:00

## 2 Answers

I think no. In a local trivialization of your G-bundle $\mathcal{M}\times G$, we have a right action $g(x,h)=(x,gh)$ - in other words, $G$ does not act on $\mathcal{M}$ in that sense. So moving to the second case must change the base space to some manifold $\mathcal{M}$ whose Poincaré group is isomorphic to $G$ (see what Squark said). They have equivalent fibers but inequivalent base spaces. -

I know only very basic things about fiber bundles, but I think that this "swapping" you propose fails even in the simplest cases. I mean, let's take $\mathbb{R}$ as a base space and $S^1$ as a fiber space -- we will get a cylinder, right? Now let's exchange them: $S^1$ is a base space and $\mathbb{R}$ is a fiber space -- that way we can get either a cylinder or a Möbius strip. The two things seem to be inequivalent. -
http://www.math.uah.edu/stat/games/Poker.html
$$\newcommand{\P}{\mathbb{P}}$$ $$\newcommand{\E}{\mathbb{E}}$$ $$\newcommand{\R}{\mathbb{R}}$$ $$\newcommand{\N}{\mathbb{N}}$$ $$\newcommand{\bs}{\boldsymbol}$$ $$\newcommand{\var}{\text{var}}$$ $$\newcommand{\jack}{\text{j}}$$ $$\newcommand{\queen}{\text{q}}$$ $$\newcommand{\king}{\text{k}}$$

## 2. Poker

### The Poker Hand

A deck of cards naturally has the structure of a product set and thus can be modeled mathematically by $D = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, \jack, \queen, \king\} \times \{\clubsuit, \diamondsuit, \heartsuit, \spadesuit\}$ where the first coordinate represents the denomination or kind (ace, two through 10, jack, queen, king) and where the second coordinate represents the suit (clubs, diamonds, hearts, spades). Sometimes we represent a card as a string rather than an ordered pair (for example $$\queen \, \heartsuit$$).

There are many different poker games, but we will be interested in standard draw poker, which consists of dealing 5 cards at random from the deck $$D$$. The order of the cards does not matter in draw poker, so we will record the outcome of our random experiment as the random set (hand) $$\bs{X} = \{X_1, X_2, X_3, X_4, X_5\}$$ where $$X_i = (Y_i, Z_i) \in D$$ for each $$i$$ and $$X_i \ne X_j$$ for $$i \ne j$$. Thus, the sample space consists of all possible poker hands: $S = \left\{\{x_1, x_2, x_3, x_4, x_5\}: x_i \in D \text{ for each } i \text{ and } x_i \ne x_j \text{ for all } i \ne j \right\}$

Our basic modeling assumption (and the meaning of the term at random) is that all poker hands are equally likely. Thus, the random variable $$\bs{X}$$ is uniformly distributed over the set of possible poker hands $$S$$. $\P(\bs{X} \in A) = \frac{\#(A)}{\#(S)}$ In statistical terms, a poker hand is a random sample of size 5 drawn without replacement and without regard to order from the population $$D$$. For more on this topic, see the chapter on Finite Sampling Models.

### The Value of the Hand

There are nine different types of poker hands in terms of value. We will use the numbers 0 to 8 to denote the value of the hand, where 0 is the type of least value (actually no value) and 8 the type of most value. Thus, the hand value $$V$$ is a random variable taking values 0 through 8, and is defined as follows:

0. No Value. The hand is of none of the other types.
1. One Pair. The hand has 2 cards of one kind, and one card each of three other kinds.
2. Two Pair. The hand has 2 cards of one kind, 2 cards of another kind, and one card of a third kind.
3. Three of a Kind. The hand has 3 cards of one kind and one card of each of two other kinds.
4. Straight. The kinds of cards in the hand form a consecutive sequence but the cards are not all in the same suit. An ace can be considered the smallest denomination or the largest denomination.
5. Flush. The cards are all in the same suit, but the kinds of the cards do not form a consecutive sequence.
6. Full House. The hand has 3 cards of one kind and 2 cards of another kind.
7. Four of a Kind. The hand has 4 cards of one kind, and 1 card of another kind.
8. Straight Flush. The cards are all in the same suit and the kinds form a consecutive sequence.

Run the poker experiment 10 times in single-step mode. For each outcome, note that the value of the random variable corresponds to the type of hand, as given above.

### The Probability Density Function

Computing the probability density function of $$V$$ is a good exercise in combinatorial probability.
In the following exercises, we need the two fundamental rules of combinatorics to count the number of poker hands of a given type: the multiplication rule and the addition rule. We also need some basic combinatorial structures, particularly combinations. The number of different poker hands is $\#(S) = \binom{52}{5} = 2\,598\,960$

$$\P(V = 1) = 1\,098\,240 / 2\,598\,960 \approx 0.422569$$. Proof: The following steps form an algorithm for generating poker hands with one pair. The number of ways of performing each step is also given.
(a) Select a kind of card: $$13$$
(b) Select 2 cards of the kind in part (a): $$\binom{4}{2}$$
(c) Select 3 kinds of cards, different than the kind in (a): $$\binom{12}{3}$$
(d) Select a card of each of the kinds in part (c): $$4^3$$

$$\P(V = 2) = 123\,552 / 2\,598\,960 \approx 0.047539$$. Proof: The following steps form an algorithm for generating poker hands with two pair. The number of ways of performing each step is also given.
(a) Select two kinds of cards: $$\binom{13}{2}$$
(b) Select two cards of each of the kinds in (a): $$\binom{4}{2} \binom{4}{2}$$
(c) Select a kind of card different from the kinds in (a): $$11$$
(d) Select a card of the kind in (c): $$4$$

$$\P(V = 3) = 54\,912 / 2\,598\,960 \approx 0.021129$$. Proof: The following steps form an algorithm for generating poker hands with three of a kind. The number of ways of performing each step is also given.
(a) Select a kind of card: $$13$$
(b) Select 3 cards of the kind in (a): $$\binom{4}{3}$$
(c) Select 2 kinds of cards, different than the kind in (a): $$\binom{12}{2}$$
(d) Select one card of each of the kinds in (c): $$4^2$$

$$\P(V = 8) = 40 / 2\,598\,960 \approx 0.000015$$. Proof: The following steps form an algorithm for generating poker hands with a straight flush. The number of ways of performing each step is also given.
(a) Select the kind of the lowest card in the sequence: $$10$$
(b) Select a suit: $$4$$

$$\P(V = 4) = 10\,200 / 2\,598\,960 \approx 0.003925$$. Proof: The following steps form an algorithm for generating poker hands with a straight or a straight flush. The number of ways of performing each step is also given.
(a) Select the kind of the lowest card in the sequence: $$10$$
(b) Select a card of each kind in the sequence: $$4^5$$
Finally, we subtract the number of straight flushes, counted above, to get the number of hands with a straight.

$$\P(V = 5) = 5108 / 2\,598\,960 \approx 0.001965$$. Proof: The following steps form an algorithm for generating poker hands with a flush or a straight flush. The number of ways of performing each step is also given.
(a) Select a suit: $$4$$
(b) Select 5 cards of the suit in (a): $$\binom{13}{5}$$
Finally, we subtract the number of straight flushes, counted above, to get the number of hands with a flush.

$$\P(V = 6) = 3744 / 2\,598\,960 \approx 0.001441$$. Proof: The following steps form an algorithm for generating poker hands with a full house. The number of ways of performing each step is also given.
(a) Select a kind of card: $$13$$
(b) Select 3 cards of the kind in (a): $$\binom{4}{3}$$
(c) Select another kind of card: $$12$$
(d) Select 2 cards of the kind in (c): $$\binom{4}{2}$$

$$\P(V = 7) = 624 / 2\,598\,960 \approx 0.000240$$. Proof: The following steps form an algorithm for generating poker hands with four of a kind. The number of ways of performing each step is also given.
(a) Select a kind of card: $$13$$
(b) Select 4 cards of the kind in (a): $$1$$
(c) Select another kind of card: $$12$$
(d) Select a card of the kind in (c): $$4$$
$$\P(V = 0) = 1\,302\,540 / 2\,598\,960 \approx 0.501177$$. Proof: By the complement rule, $$\P(V = 0) = 1 - \sum_{k=1}^8 \P(V = k)$$

Note that the probability density function of $$V$$ is decreasing; the more valuable the type of hand, the less likely the type of hand is to occur. Note also that no value and one pair account for more than 92% of all poker hands.

In the poker experiment, note the shape of the density graph. Note that some of the probabilities are so small that they are essentially invisible in the graph. Now run the poker experiment 1000 times and note the apparent convergence of the relative frequency function to the density function.

In the poker experiment, set the stop criterion to the value of $$V$$ given below. Note the number of poker hands required.
1. $$V = 3$$
2. $$V = 4$$
3. $$V = 5$$
4. $$V = 6$$
5. $$V = 7$$
6. $$V = 8$$

Find the probability of getting a hand that is three of a kind or better. Answer: 0.0287

In the movie The Parent Trap (1998), both twins get straight flushes on the same poker deal. Find the probability of this event. Answer: $$3.913 \times 10^{-10}$$

Classify $$V$$ in terms of level of measurement: nominal, ordinal, interval, or ratio. Is the expected value of $$V$$ meaningful? Answer: Ordinal. No.

A hand with a pair of aces and a pair of eights (and a fifth card of a different type) is called a dead man's hand. The name is in honor of Wild Bill Hickok, who held such a hand at the time of his murder in 1876. Find the probability of getting a dead man's hand. Answer: $$1584 / 2\,598\,960$$

#### Drawing Cards

In draw poker, each player is dealt a poker hand and there is an initial round of betting. Typically, each player then gets to discard up to 3 cards and is dealt that number of cards from the remaining deck. This leads to myriad problems in conditional probability, as partial information becomes available. A complete analysis is far beyond the scope of this section, but we will consider a couple of simple examples.

Suppose that Fred's hand is $$\{4\,\heartsuit, 5\,\heartsuit, 7\,\spadesuit, \queen\,\clubsuit, 1\,\diamondsuit\}$$. Fred discards the $$\queen\,\clubsuit$$ and $$1\,\diamondsuit$$ and draws two new cards, hoping to complete the straight. Note that Fred must get a 6 and either a 3 or an 8. Since he is missing a middle denomination (6), Fred is drawing to an inside straight. Find the probability that Fred is successful. Answer: $$32 / 1081$$

Suppose that Wilma's hand is $$\{4\,\heartsuit, 5\,\heartsuit, 6\,\spadesuit, \queen\,\clubsuit, 1\,\diamondsuit\}$$. Wilma discards $$\queen\,\clubsuit$$ and $$1\,\diamondsuit$$ and draws two new cards, hoping to complete the straight. Note that Wilma must get a 2 and a 3, or a 7 and an 8, or a 3 and a 7. Find the probability that Wilma is successful. Clearly, Wilma has a better chance than Fred. Answer: $$48 / 1081$$
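The hand counts derived in this section are easy to double-check mechanically. The following Python snippet (not part of the original text) simply re-implements each counting argument with binomial coefficients and recovers the probabilities via the complement rule; running it reproduces the numerators used above, e.g. 1,098,240 hands with one pair and 1,302,540 hands with no value.

```python
# Re-derive the poker-hand counts from the counting arguments above.
from math import comb

total = comb(52, 5)                                   # 2,598,960 hands
counts = {
    "one pair":        13 * comb(4, 2) * comb(12, 3) * 4**3,
    "two pair":        comb(13, 2) * comb(4, 2)**2 * 11 * 4,
    "three of a kind": 13 * comb(4, 3) * comb(12, 2) * 4**2,
    "straight":        10 * 4**5 - 40,                # straights minus straight flushes
    "flush":           4 * comb(13, 5) - 40,          # flushes minus straight flushes
    "full house":      13 * comb(4, 3) * 12 * comb(4, 2),
    "four of a kind":  13 * 1 * 12 * 4,
    "straight flush":  10 * 4,
}
counts["no value"] = total - sum(counts.values())     # complement rule

for name, c in counts.items():
    print(f"{name:16s} {c:9d}  {c / total:.6f}")
```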
http://mathhelpforum.com/advanced-statistics/165753-conditional-expectation-help.html
# Thread: 1. ## Conditional Expectation Help Suppose you flip one fair coin and roll one fair six-sided die. Let X be the number showing on the die, and let Y = 1 if the coin is heads with Y = 0 if the coin is tails. Let Z = XY. Compute E(Z): Am I correct when I do E(Z) = E(X)E(Y) = (3.5)(1/2) = 7/4 Also, how do i do this? Compute E(Z|X = 4) Thanks 2. Originally Posted by theprestige Suppose you flip one fair coin and roll one fair six-sided die. Let X be the number showing on the die, and let Y = 1 if the coin is heads with Y = 0 if the coin is tails. Let Z = XY. Compute E(Z): Am I correct when I do E(Z) = E(X)E(Y) = (3.5)(1/2) = 7/4 I am guessing X and Y are independent, in which case you are correct. Also, how do i do this? Compute E(Z|X = 4) Thanks Given that you rolled a 4, what is the expectation of Z? There are only 2 values Y could take with equal probability, so look at what happens at each. You could look at this as $\mathbb{E}[4Y]$.
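Not part of the thread, but both expectations can be checked by brute force over the $6 \times 2 = 12$ equally likely outcomes (assuming, as in the reply, that the die and coin are independent); a minimal Python sketch:

```python
from fractions import Fraction

# Brute-force check over the 12 equally likely (die, coin) outcomes.
outcomes = [(x, y) for x in range(1, 7) for y in (0, 1)]   # die value, coin indicator

E_Z = Fraction(sum(x * y for x, y in outcomes), len(outcomes))
cond = [x * y for x, y in outcomes if x == 4]              # condition on X = 4
E_Z_given_X4 = Fraction(sum(cond), len(cond))

print(E_Z)            # 7/4
print(E_Z_given_X4)   # 2, i.e. E[4Y] = 4 * (1/2)
```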
http://stats.stackexchange.com/questions/30059/basic-easy-rules-for-statistics/30062
# Basic easy rules for statistics

In a binomial experiment, if we observe $x=0$ positive individuals among $n$ individuals, then the proportion of positive individuals is significantly lower than $3/n$ with a type 1 error less than, and very close to, $5\%$. This fact, sometimes called the "rule of three", is a consequence of the inequalities $$\exp\left(-\frac{np}{1-p}\right) \leq \Pr(X=0) \leq \exp(-np).$$ Do you know other such basic easy rules for statistics? I find them very interesting and useful. This principle is not really a "rule of thumb" because it has a reliable theoretical foundation, but I don't see another tag for this question (I hope it is not off-topic) -

"Normally, more than two-thirds of the people are average" (meaning within one standard deviation of the mean)? – Dilip Sarwate Jun 8 '12 at 11:42

One very simple one that comes to mind is how the variance of a sample proportion of successes out of $n$ Bernoulli trials is no more than $1/(4n)$ (which is achieved when the success probability is $1/2$) – Macro Jun 8 '12 at 12:11

Many "rules of thumb" are based on theoretically rigorous analyses or approximations. – whuber♦ Jun 8 '12 at 13:02

## 1 Answer

Check out Gerald van Belle's book "Statistical Rules of Thumb", a very nice little paperback text loaded with examples of rules of thumb and explanations, including the "Rule of three" that you mention above. -

(+1) That seems like a nice book. Not only does he give rules of thumb, but he also motivates them and discusses their validity. – MånsT Jun 8 '12 at 12:13

Nice! I will ask my boss to buy it :) – Stéphane Laurent Jun 8 '12 at 12:13

Ahah! I've just sent an email to my boss and actually we already have this book :) – Stéphane Laurent Jun 8 '12 at 12:20

Yes, it's a great book. It's fun just to read it. – jbowman Jun 8 '12 at 13:28
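As a footnote to the question's rule of three (this check is mine, not from the thread): with $x=0$ successes in $n$ trials, the exact one-sided 95% upper confidence bound for $p$ solves $(1-p)^n = 0.05$, i.e. $p = 1 - 0.05^{1/n}$, and $3/n$ approximates it because $-\ln 0.05 \approx 3$. A few lines of Python make the comparison concrete:

```python
# Rule of three vs. the exact 95% upper bound for p when x = 0 out of n trials.
for n in (10, 30, 100, 1000):
    exact = 1 - 0.05 ** (1 / n)                  # solves (1 - p)^n = 0.05
    print(n, round(exact, 5), round(3 / n, 5))   # the two columns agree better as n grows
```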
http://math.stackexchange.com/questions/217412/how-to-compute-the-number-of-pairwise-non-isomorphic-7-regular-graphs-on-10-vert?answertab=votes
# how to Compute the number of pairwise non-isomorphic 7-regular graphs on 10 vertices? Compute the number of pairwise non-isomorphic 7-regular graphs on 10 vertices? - 4 What have you tried? Also, it is usually best not to phrase your question in the form of a command. The question may be to compute something, but you're basically telling us to do it for you. At least say, "I'm stuck on this, can any one please help?" or something like that. – Graphth Oct 20 '12 at 14:14 ## 2 Answers You do that by looking at the complement, the graph of missing edges. - If you're interested in this question from a computational mathematics point of view (and this is a specific instance of a broader problem), then this can be computed using Brendan McKay's `geng` (part of the `gtools` package, which can be downloaded with nauty). Specifically: ````geng -n 10 -d7 -D7 ```` computes the number of non-isomorphic graphs with $10$ vertices, minimum degree $7$ and maximum degree $7$. If you want to understand why the number is what it is, Hendrik Jan's hint is excellent. -
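Not in the original answers, but following the complement hint: the complement of a 7-regular graph on 10 vertices is 2-regular, i.e. a disjoint union of cycles each of length at least 3, and two such graphs are isomorphic exactly when their multisets of cycle lengths agree. So the count equals the number of partitions of 10 into parts of size at least 3, which a few lines of Python can enumerate:

```python
# Count 2-regular graphs on 10 vertices by their cycle-length multisets:
# partitions of 10 into parts >= 3 (this enumeration is mine, not from the answers).
def partitions(n, smallest=3):
    """Yield partitions of n into parts >= smallest, in non-decreasing order."""
    if n == 0:
        yield []
        return
    for part in range(smallest, n + 1):
        for rest in partitions(n - part, part):
            yield [part] + rest

cycle_structures = list(partitions(10))
print(cycle_structures)        # [[3, 3, 4], [3, 7], [4, 6], [5, 5], [10]]
print(len(cycle_structures))   # 5, which should match the count reported by geng above
```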
http://physics.stackexchange.com/questions/53116/what-grounds-the-difference-between-space-and-time
# What grounds the difference between space and time? We experience space and time very differently. From the point of view of physics, what fundamentally grounds this difference? Dimensionality (the fact that there are three spatial dimensions but only one temporal) surely cannot be sufficient, as there are tentative proposals among string theories for models with multiple spatial dimensions, and two time dimensions. One of the most lauded answers in the philosophy of spacetime has to do with the fact that our laws predict temporally, rather than spatially. That is to say, if we are given enough information about the state of the world at one moment, we can predict (quantum considerations aside) the future state of the world. However if we reverse the roles of time and space here, and instead give information about a single point of space for all of time, it seems we cannot predict spatially. Are there equations in physics that can be considered to predict across space (for a given time)? - I think the main point IS dimensionality. Let's assume time is still 1D. Then if we have information about the entire extent of space at one point, we can only travel in one direction, so we need no more information. If we have 1 point in (let's say 3D) space, there's an infinite number of directions to consider. We can shrink this down to three independent directions, so I would think we should be perfectly capable of making predictions if we had information about the entire extent of time in 3 non-collinear spatial points. But this is by no means a rigorous argument, could be wrong. – Wouter Feb 5 at 12:54 2 Regarding just your last sentence: You might be interested to read up on IVPs (and the associated notion of hyperbolic equations) vs. BVPs (and elliptic equations). – Chris White Feb 6 at 0:40 One important difference is that the time coordinate has order (causality), which is absent in the spatial coordinates. – chaohuang Feb 6 at 0:44 Can you give specific references to the assertion of the "most lauded answers in philosophy", I am not sure I follow the philosophical assertion, nor agree with the way its phrased. – Hal Swyers Feb 9 at 13:19 – Kathryn Boast Mar 3 at 15:09 ## 7 Answers @Kathryn Boast I assume you are looking for an answer that is based on the available experimental evidence we have about nature, not wild new speculations that are not firmly established and supported by experiment. It is very interesting to see how a question as simple as this, has many of us going into a spin… There are several answers one can give, depending on whether your frame of reference is Quantum Mechanics, Special Relativity, General Relativity, Super String Theories, Thermodynamics, Philosophy etc. There is however, one aspect of space and time that is the umbilical cord under all these views. This is the fact that space and time are linked together by the speed of light, c, and form the single geometrical structure we call space-time. If it was not for the invariance and constancy of the speed of light, this space-time structure would be impossible. Our ignorance about this fascinating role played by light in nature, before Einstein taught us, had lead to the notion that space and time are totally different from each other. One can say that the only difference between these two is their functionality. Space offers the ‘room’ for matter to exist and move in, and time offers the facility of keeping track of what matter is doing and in which order. 
This view has been expressed in various forms by some of the other respondents. As for the last part of the question, yes, there are many equations in physics that ‘predict’ what is happening all over space, globally, at a particular instant in time. Take for example the electromagnetic scalar potential V(x, y, z, t), chose a particular value for time and you get what is happening to V all over the space (x, y, z), by solving the Poisson differential equation. Another famous example is the Time Dependent Schrodinger Equation. Fixing the time to a particular instant, the solution of the equation will give you what is happening elsewhere in space, in a probabilistic sense of course. - Before reading my answer, please keep in mind that I am just an undergraduate student, so it might not be entirely correct when it comes to such deep questions. Time and Space are fundamentally different. In case of quantum mechanics, position is an observable, but time is a parameter. And in field theory even space becomes a parameter along with time. Though lorentz transformation connects spatial dimensions and temporal dimensions, still there is a difference. In a lorentz transformation, no transformation from $t \rightarrow -t$, but rotation can take $x \rightarrow -x$. Physically it means, time is unidirectional, that is in all physical processes one can't go from $t_1 \rightarrow t_2$ such that $t_1 > t_2$, but a particle in space can go both in positive and negative direction. - Philosophical discussion of this can be started from Zeno's Paradoxes and will go for thousand of pages, but physically speaking we have less choices: In physics, time goes in one direction, and the most fundamental reason for that is: cause and effect (other reasons can be constructed, but this one is the most intuitive and fundamental), this makes time immediately not totally equivalent to spatial dimensions, and cause and effect forces as to "number" events, such that first event, then second event which is the effect of the first...etc, this numbering creates the "illusion" of the possibility of time measurement, and I say illusion because in this context it seems that this numbering is absolute, but it is not because relativity tells us that information exchange speed is limited, thus this "numbering" is relative. This possibility of measurement tells us that time is very similar to spatial dimensions, even so cause and effect should not be violated, for that Minkowski space is actually pseudo-Euclidean, and not Euclidean, this gives time it's deserved position: it can be measured as spatial dimensions, but it still not same as spatial dimensions, because it is responsible of "parametrizing" cause and effect. Regarding That is to say, if we are given enough information about the state of the world at one moment, we can predict (quantum considerations aside) the future state of the world. This one can be given a very deep explanation in the framework of Bundle foliation that use in differential geometry, if you familiar with this topic, I can explain it for you. - My guess would be that time is special for us by definition (assuming the universe is causally connected). I wouldn't speculate on what would be if there were 3 time dimensions and 1 spatial, let's imagine instead that we live in 1+1 dimensions. 
Then all the observers/particles/people/cats/forest animals/etc would be divided into three groups: 1) Those, which move with the speed of light with respect to anybody from other groups, 2a) Those which, move slower than the speed of light with respect to each other, 2b) Another group, members of which also move slower than the speed of light with respect to each other, but for whom the members of 2a) move faster than the speed of light. However, 2a and 2b cannot interact, because it would violate causality for each of them. To make it more illustrative, imagine we belong to a group 2a. Nothing can get accelerated to the speed faster than light - alright (except for light itself, which belongs to group 1). If one imagines something faster than light - it is called tachyons, hence they would belong to group 2b. However, tachyons would violate causality, if they could interact with us (there would be systems of reference where effect precedes the cause). Hence, we do not see tachyons and we have our own time coordinate defined this way. However, I could have written the same paragraph, replacing 2a with 2b and vice versa. Had we been tachyons, we could also say that we evolve in time, and the laws of physics would be the same for us. Hence the answer to your first questions: We belong to one or another group, and this makes one coordinate special with respect to the other one, and this is why it is called time. One might argue (see other answers), that the above reasoning contradicts classical quantum mechanics. However, it doesn't, if one considers relativistic quantum mechanics istead of non-relativistic one, unless one considers quantum measurements. Another argument, perhaps, would be, that our consiousness does not need to be time-oriented. Or, in other words, one might ask, why does it take time to think. The answer for this I don't know. Finally, one might refer to irreversible processes, such as enthropy increase in the systems acquiring equilibrium, as another case, where time plays a special role. This all perhaps has something to do with the fact that our consciousness is aligned with time, hence we tend to write physical laws in time-dependent form and specify initial conditions at a constant time. For reversible processes the laws can be also written in a form, where spatial coordinates are an argument. However, typically, we don't have boundary conditions to provide in order to solve the equations. So, because of consciousness such an approach is not practical. It may be practical, though, for the cases, where we do not have to specify boundary conditions, for example for relativistic wave equations. Hence, the answer to your second question is yes, there are many such equations (such as wave equation, for example), which allow predictions, considering x as a time coordinate. However, some do not seem to allow such predictions (such as equations dealing with irreversible processes) and for some it is not possible to write practically useful boundary (initial) conditions. - Are there equations in physics that can be considered to predict across space (for a given time)? If you take time out of the equation, then you take "change" out of the equation. (And without change, things remain the same "all the time".) If you take all time out of the equation except for one specific moment, then you take causality out of the equation. Things that happen at the same time in different places, cannot have a causal relationship at that time. 
A "prediction across space" for one specific point in time would appear random. Therefore even though I am not a physics student, it would appear to me that logically there can be no such equations, by definition. We experience space and time very differently. From the point of view of physics, what fundamentally grounds this difference? From the view of physics, I don't think that there is an accepted answer. I have seen explanations based on "causality" as well as "entropy" as well as "no difference, it's spacetime". This may be because physics does not necessarily seem to match or explain what "we experience". Many physical laws are reversible in theory, yet we never observe them in reverse. Other laws are merely statistical (like entropy), but we never see nature "roll two sixes" (or when was the last time you saw milk and coffee unmix itself?) One of the more interesting points about "time" I ran into is the fact that without time there would be no space. If there was no time, but space, then you could go anywhere in no time. This is equivalent to saying that you can be everywhere at the same time, and this is the same as saying that there is no space because there is no difference between different points in time. - Thanks for this. Do you have any references / links etc. with more detail on the idea in your final paragraph that without time there would be no space? – Kathryn Boast Feb 6 at 9:33 I first ran across the idea in one of the popular science books on time. Unfortunately, I have many and am not sure anymore where I read it first. I think it is “Once before time : a whole story of the universe / by Martin Bojowald - eISBN: 978-0-307-59425-9” but I am not sure. Other good books on time: From Eternity to Here, Sean Carroll, and About Time, Paul Davies. – user1459524 Feb 6 at 13:39 We experience space and time very differently. From the point of view of physics, what fundamentally grounds this difference? Kathryn, in one sense your question answers itself. It may not be the most satisfying sense, but it's an important one. The distinction is the one Einstein first made when he proposed special relativity, before his former professor Minkowski re-framed Einstein's theory in more graphical terms: Time is the ticking of a clock, that is, the quantification of measurably cyclic processes. Einstein at first did not attempt to frame this idea into highly graphical terms, but of course did so later after some initial mild grumblings about how Minkowski had made his own theory incomprehensible to him. It is of course trivially easy to take the concept of regular clock ticks and make that into a concept of distance, but it is not a trivial mapping from the perspective of what must be available to make the mapping. It just seems that way because as organisms capable of existing and surviving in our particular universe, we come pre-equipped with the necessary hardware, and with a situation that makes the idea meaningful. Does all that sound too complicated for something as simple as counting clock ticks and then representing them along a length-like line? Not really. You can't access the past as you can a distance, so where does the knowledge of the past reside? In something called a "memory" or a "storage device," which must be independent of the part of the clock that does the ticking. So, you have memory. 
You cannot interpret memory without some set of operations that recognize and can act on such a representation of past cycles, treating them within some kind of very different construction as if they existed again. Such operations constitute form of intelligence, including a rather remarkable ability to "simulate" or reproduce the physics of a past event, despite having nothing left of that event except a very wispy pattern of information (itself a very strange concept) about key features of that past event. This collective ability to simulate and operate on wispy, ephemeral, extraordinarily incomplete images of the past, yet somehow manage to achieve a meaningful reproduction of their consequences, we call "intelligence." But how is that even possible? It seems rather absurd that such slender representations can in fact make meaningful predictions of a much denser and tremendously complex collection of matter an energy. There the universe itself helps us out both by being based not on total chaos, but on slender, simple, and uniform rules of operation. In the case of time, the universe assists us tremendously by being rich in something called cycles, or almost exact repetitions of patterns. Light is cycle. Electrons going around atoms are cycles. Orbits of planets and bodies are cycles. Vibrations of matter, including the gentle swaying of a pendulum in a grandfather clock, are cycles. Are cycles simple, then? No, emphatically not! Cycles are walks on the edge of a razor, with chaos on one side and locked-down, frozen simplicity on the other. Planets have cyclic orbits, but add too many bodies or too much time and their simple cycles self-destruct into some form of chaos. But if you go the other way and lock cycles into such extreme sameness that there is no measurable change of any kind from one tick to the next, you achieve not a clock, but perfect oscillator that has no more sense of time than does the world of chaos. It is only that careful balance of recognizable but slightly different cycles -- that is, of repeated patterns that an intelligence can look at and say "that is still the same light, or that is still the same pendulum, despite the slight changes in position or energy or momentum" -- that makes measurable time possible, and through that allows intelligence -- memory plus meaningful, simulation-like operations that somehow mimic and predict the external world -- to perceive "time." In special relativity this cyclic concept of time is typically represented by the concept of "proper time" $\tau$, which is time as measured by an actual clock. To achieve length-like time only requires one more comparatively simple step, but that last step also runs the greatest risk of deceiving us. We take our model and out counts of almost-identical cycles, and say "this is like a line, this is like a length. I will represent the progression of cycles as a length, using this distance X I have borrowed from the world I can perceive right now. I will call this axis "time" or $t$, and I will postulate that it exists in addition to the axes of length that I can perceive directly. It's a very good postulate, and special relativity in particular immediately provides us with some non-trivial substantiation of it by showing us experiments where the easiest way to model the results of velocities near the speed of light as cases where the "time" axis $t$ of the speeding object has been bent and rotated into one of the observer's XYZ axes. But even there, beware! 
The actual events that get measured are again in terms of cycles -- the cycles or $\tau$ time of the observed object appears (to the observer) to be slowed down. That slowing down can again be mapped into a length-like concept of time, but the mapping still exists. Even there, time as a length-like measurement is indirect in a fashion that should be recognized as part of the process, if you want a more complete picture. The bottom line of all of this is that if you want to think clearly about complex or advanced concepts of time, don't forget poor old cyclic-only, grandfatherly $\tau$ time as the starting point for all time concepts. It is a deceptively complicated concept, one that says for example that classical physics is just as much "observer dependent" as quantum physics. Why? Because every time you use $t$ in an equation of classical physics, you have implied cycles, and wispy, ephemeral memories of cycles, and remarkable sentient operations that use those misty memories to understand and predict what will happen next, then reason on them. When you say $t$, you imply $\tau$, and when you imply $\tau$... you imply us. You have asked more than one question. In your last paragraph, I believe your main question is this: ... if we reverse the roles of time and space here, and instead give information about a single point of space for all of time, it seems we cannot predict spatially. Are there equations in physics that can be considered to predict across space (for a given time) No. A space-like slice (subspace) of spacetime stores information, whereas a time-like slice (a worldline) does not. More to the point, the very way a "particle" is defined is an attempt to trim away as much variable information as possible, so that the continuity of conserved quantities such as mass, charge, and spin is emphasized. In classical physics this focus on reductionism across the time axis leaves you with not much more than the history of how the particle was "bumped," billiard-ball style, as it moves across its path across time. Those deflections provide a small amount of information about the universe as a whole, but the total information encoded is quite trivial compared to that contained in any space-like slice, and is certainly never enough to reconstruct the universe as a whole. The amount of information contained in a single particle path is also highly variable. It approaches zero in the case of a particle that simply sits in a very dark corner of intergalactic space and never interacts with much of anything, so in that case you are left in the dark pretty much about everything else going on in space. Incidentally, you may wonder why in defining time slices I have focused on the worldlines of individual particles, instead say of a single fixed point in space from the start to the end of the universe. You could use the latter approach, and it gives the same result, since for example a single dark spot in deep intergalactic space has even less information about the rest of the universe than a very bored particle sitting there. However, I don't use that definition because "where" that specific point in space really is quickly becomes entangled with the question of "where it it relative to some set of particle worldlines." Since that is the only way to make such an assertion meaningful, it's easier and more honest simply starting with the particle worldlines. You also noted in your second paragraph: ... 
Dimensionality (the fact that there are three spatial dimensions but only one temporal) surely cannot be sufficient ... Yes. In fact, if you look at what I just wrote, it applies just as well to a 1-to-1 ratio of space axes to time axes as it does to a 3-to-1 ratio. Time is simply the axis of quantity (e.g. mass) conservation, while space is the axis on which the relationships (information) that capture the variable relative configurations of those conserved quantities is expressed. So what does any of this have to do with my earlier answer about time operating first as cyclic rather than length-like? Quite a bit, actually. The cycles are just repeating patterns of relationships between the conserved, particle-like conserved quantities. So, the conserved-over-time mass of a planet orbits the similarly conserved mass of the sun, and from that detectably similar pattern we define as the year. It's not that you cannot have change without cycles. It's just that without the concept of some patterns "repeating," you cannot create a truly metric concept of time that like space includes definite lengths and distances. That makes the time version of "distance" rather odd, and a lot more complicated than the space-like version. - 1 Even beautiful and brilliant discussions typically posess clear underlying ideas beneath. I see two in your answer: 1) cyclic proper time and 2) memory (and correct me if I'm wrong). As to 1) there is nothing special about cycles. I use digital watch, some other person could just have a continuous variable in her pocket. Concerning proper time and memory - what about them after all? Why do we actually store memory temporally and experience $\tau$ as we live rather than some $x$? I don't find a clear answer to this in your discussion. – Alexey Bobrick Feb 5 at 21:37 Alexey, oops, I noticed your question but then forgot about it. Hmm. I... don't know an easy way to answer it. My day job requires examining "obvious" assumptions about how intelligence works in a very skeptical fashion, because computers are exceedingly unforgiving in ways that human intelligences are not. I think the issue is this: The consistent, recognizable persistence of anything is one of those deep and distressingly anthropic mysteries of our universe, because many other options (e.g. a hot ball of gas that never condenses) don't even support "objects," let along "cycles" in them. – Terry Bollinger Feb 12 at 3:57 You asked: "Why do we actually store memory temporally and experience $\tau$ as we live rather than some $x$?" Length-like $t$ time is an exceedingly stringy and fibrous thing, an axis whose very direction is defined by those persistences (local conservation laws) I just mentioned. Persistences are what form the fibers and worldlines of length-like $t$, and through those its overall direction. But precisely because such $t$ fibers define "sameness," it is only their braiding and mutual relations in the much richer and infinitely complex axes of space that one can define memories and meanings. – Terry Bollinger Feb 12 at 4:07 Thank you! I see a very good point here. If you let me rephrase it, it would go as: Conserning conciousness and mind, it is dimensionality, that matters. 3 dimensions provide much broader range for possible structures, than 1 dimension. Hence, space is more suitable for defining a state, than time. – Alexey Bobrick Feb 14 at 21:24 You make a good point your self, but alas, I can't claim that one. 
3D is indeed close to optimal for connectivity without degenerating into isolationism, but my point above was that particles in spacetime are represented as world lines, with the line part following the "stringy" time axis. So, just as a thread is boring if you only look along its length and ask "did anything change?", a particle in spacetime is boring along $t$ for much the same reason. But if you have additional axes where the particle can be at just one location, you can e.g. tie knots that can become very complex. – Terry Bollinger Feb 16 at 20:05

I think time and space are substantially equivalent but differ by the scale of their cycle. When you move in space, you can turn around and go back to the start, or go on to infinity and you will end up at the starting point; this is, at least for me, the meaning of the deterministic view. With time it's the same, but causality will prevent you from just turning around and going back in time: the starting point has changed in too many aspects (caused by the other actors) and you won't recognize it, though this is true as well if you move around in space.

Another point is that I think nobody here took into account the axiom of choice. This means the future will never be determined by the present, but the past is unchangeable and fixed. For example, take Conway's game of life: personally I would say the game has to be played backwards, because in the game each position has one and only one future follower, but infinitely many past precursors. In reality it seems to me exactly the opposite. But now here is my point: if we go beyond the event horizon and allow ourselves to go as long as we need to, we can find a configuration from which, at some point in the future, the game will lead back to our starting point. It is clear that this configuration must be possible, and if it is possible, it means that it has to exist.

Still another point: our memory can store past events in mind, retrieve some simulation, and remind us of the past. There must be some material way to save or enclose past events in our mind, even if it was just one photon we received through our eyes. Now, in the moment of remembering, we have to build up some kind of connection to this past, and what is that other than travelling in time? Here it was maybe just one electron, but what was possible for this one electron must have a basic truth in this world.

In the end, the axiom of choice can't be ignored and has to be taken into account by any mathematically serious attempt to define this world as we observe it. Nobody can just turn around and go back to the start, neither in space nor in time; it's just not how I perceive this world. And don't forget, mathematics is made by human minds and reflects only a part of the world. -
http://mathhelpforum.com/trigonometry/85642-trig-function-word-problem-2-a.html
Thread: Trig function word problem #2

1. Trig function word problem #2

A boat tied up at a dock bobs up and down with passing waves. The vertical distance between its high and low point is 1.8 m and the cycle is repeated every 4 s.
a) Determine a sinusoidal equation to model the vertical position in metres of the boat versus the time in seconds.
b) Use your model to determine when, during each cycle, the boat is 0.5 m above its mean position. Round your answers to the nearest hundredth of a second.

Ok, for part a) my equation is: $V(t)=0.9\sin\left(\frac{\pi}{2}t\right)$

For part b) I subbed in 0.5 for V(t) and my answer was 0.37 sec, which is correct. However, there are two answers in the back of the book: 0.37 sec and 1.63 sec. I am not sure how to get the second answer, 1.63 sec. I can't figure out how to get this second time, because sin is positive in the 1st and 2nd quadrants. I got the first quadrant, but if you take the 2nd quadrant, pi - 0.55555 gives you the angle, which is too big. Sine cannot be over 1, and in this case it is 2.58. How would I figure out the second time? Thank you

2. Originally Posted by skeske1234
A boat tied up at a dock bobs up and down with passing waves. [...] How would I figure out the second time? Thank you

$0.5=0.9 \sin \left(\frac{\pi t}{2}\right)$

$\sin \left(\frac{\pi t}{2}\right)=\frac{5}{9}=0.55556$

Since sin is +ve, it will have two answers, in the first quadrant and in the second quadrant.

$\frac{\pi t}{2}=0.589 \;\text{ and }\;\pi-0.589$

$\frac{\pi t}{2}=0.589 \;\text{ and }\;2.552$

t = 0.37, 1.63

Also, if you see the graph (attached as a thumbnail), the line at y = 0.5 cuts the positive cycle at two points, first at 0.37 sec and second at 2 - 0.37 = 1.63 sec.
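Not part of the thread, but the two times can be checked numerically in a couple of lines of Python:

```python
import math

# Numeric check of the two times from part b) (this script is mine, not from the thread).
t1 = (2 / math.pi) * math.asin(0.5 / 0.9)                 # first-quadrant solution
t2 = (2 / math.pi) * (math.pi - math.asin(0.5 / 0.9))     # second-quadrant solution

print(round(t1, 2), round(t2, 2))                         # 0.37 1.63
print(0.9 * math.sin(math.pi * t1 / 2), 0.9 * math.sin(math.pi * t2 / 2))  # both ~0.5
```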
http://mathoverflow.net/revisions/110214/list
It's not clear whether you are working in the setting of Lie theory, or abstract group theory, or something else. This answer addresses the Lie theory aspect of the matter.

Let's focus on Lie groups whose Lie algebra is semisimple (as solvable radicals mess up things in too many ways, as usual). By serious theorems, the functor $\mathbf{G} \rightsquigarrow \mathbf{G}(\mathbf{C})$ from connected semisimple $\mathbf{C}$-groups to connected complex Lie groups with semisimple Lie algebra is an equivalence of categories. So all connected complex Lie groups with semisimple Lie algebra admit a unique and functorial "linear algebraic" structure. Let's say that a connected Lie group $G$ with semisimple Lie algebra is linear if $G = \mathbf{G}(\mathbf{R})^0$ for a connected semisimple $\mathbf{R}$-group $\mathbf{G}$. From the viewpoint of "semisimple" Lie theory, the failure of this condition is a bit tricky to think about because non-isomorphic connected semisimple $\mathbf{R}$-groups can yield isomorphic connected Lie groups of $\mathbf{R}$-points, the most famous being the degree-$n$ isogeny ${\rm{SL}}_n \rightarrow {\rm{PGL}}_n$ over $\mathbf{R}$ with an odd $n > 1$ (this becomes an isomorphism on $\mathbf{R}$-points). Nonetheless, we can characterize it in terms of the complex-analytic theory as follows.

Consider a connected Lie group $G$ over $\mathbf{R}$ whose Lie algebra $\mathfrak{g}$ is semisimple. Dropping any semisimplicity hypotheses on Lie algebras for a moment, there is a general notion of complexification of $G$, namely a homomorphism $r_G:G \rightarrow G_{\mathbf{C}}$ to a complex Lie group $G_{\mathbf{C}}$ that is initial among all homomorphisms $\rho:G \rightarrow H$ to a complex Lie group (i.e., there is a unique holomorphic homomorphism $f:G_{\mathbf{C}} \rightarrow H$ such that $f \circ r_G = \rho$). This is constructed in complete generality in Bourbaki LIE Chapter III, for example. In general ${\rm{Lie}}(G_{\mathbf{C}})$ is a quotient of `$\mathfrak{g}_{\mathbf{C}}$`, so when $\mathfrak{g}$ is semisimple this quotient is semisimple and hence $G_{\mathbf{C}}$ is (canonically) linear over $\mathbf{C}$.

Obviously if $G = \mathbf{G}(\mathbf{R})^0$ for a connected semisimple $\mathbf{R}$-group $\mathbf{G}$ then the resulting closed embedding $G \rightarrow \mathbf{G}(\mathbf{C})$ factors uniquely through a holomorphic map $G_{\mathbf{C}} \rightarrow \mathbf{G}(\mathbf{C})$ via composition with $r_G$, so $\ker r_G = 1$. Remarkably, the converse holds: if $\ker r_G = 1$ then $G$ is the identity component of the group $\mathbf{G}(\mathbf{R})$ of $\mathbf{R}$-points of a connected semisimple $\mathbf{R}$-group $\mathbf{G}$ (and $r_G$ is actually a closed embedding). Indeed, the canonical "algebraization" of $G_{\mathbf{C}}$ has Weil restriction $G'$ over $\mathbf{R}$ that is a connected semisimple $\mathbf{R}$-group such that $r_G$ is identified with an injective map $G \rightarrow G'(\mathbf{R})$ between connected Lie groups. In particular, $\mathfrak{g}$ is identified with a semisimple Lie subalgebra of ${\rm{Lie}}(G')$, so by the algebraic theory over $\mathbf{R}$ (as over any field of characteristic 0) it has the form ${\rm{Lie}}(\mathbf{G})$ for a unique connected semisimple closed $\mathbf{R}$-subgroup $\mathbf{G} \subset G'$. Thus, $r_G$ factors through $\mathbf{G}(\mathbf{R})^0$.
The resulting injective map $G \rightarrow \mathbf{G}(\mathbf{R})^0$ between connected Lie groups is an isomorphism on Lie algebras and thus is surjective, so it is an isomorphism of Lie groups. The upshot is that a connected Lie group $G$ with semisimple Lie algebra is linear if and only if $\ker r_G = 1$, in which case $r_G$ is a closed embedding. You may therefore think of the non-triviality of $\ker r_G$ (i.e., the absence of "enough" homomorphisms to complex Lie groups to separate points) as the exact obstruction to $G$ being linear in the sense defined above.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271690845489502, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/178627-3rd-order-recurrence-relation.html
# Thread: 3rd Order Recurrence Relation

1. Hi, I have a 3rd order homogeneous recurrence relation with a characteristic polynomial of $x^3 - r_1x^2 - r_2x - r_3$. This polynomial has three roots, $\lambda_1$, $\lambda_2$, $\lambda_3$, which are all the same. I know that for a 2nd order recurrence relation with equal roots, the general solution of the relation is $C_1\lambda^n + C_2 n\lambda^n$, for some constants $C_1$, $C_2$, but what is the general solution for a 3rd order relation with equal roots? Thank you.

2. See Linear homogeneous recurrence relations with constant coefficients in Wikipedia and the main theorem on the same page.

3. Hello, jsbach!

Quoting the question: a 3rd order homogeneous recurrence relation has characteristic polynomial $x^3 - r_1x^2 - r_2x - r_3$, whose three roots $\lambda_1$, $\lambda_2$, $\lambda_3$ are equal; for a 2nd order recurrence with equal roots the general solution is $C_1\lambda^n + C_2 n\lambda^n$. What is the general solution for a 3rd order recurrence with equal roots?

It is: $C_1\lambda^n + C_2 n\lambda^n + C_3 n^2\lambda^n$
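A quick numerical sanity check of that closed form (my own sketch, not part of the thread), using the triple root $\lambda = 2$, i.e. characteristic polynomial $(x-2)^3 = x^3 - 6x^2 + 12x - 8$:

```python
# The recurrence for (x - 2)^3 is a(n) = 6a(n-1) - 12a(n-2) + 8a(n-3).
# Check that a(n) = (C1 + C2*n + C3*n^2) * 2^n satisfies it for arbitrary constants.
C1, C2, C3 = 3.0, -1.0, 0.5   # arbitrary constants

def closed_form(n):
    return (C1 + C2 * n + C3 * n ** 2) * 2 ** n

for n in range(3, 20):
    lhs = closed_form(n)
    rhs = 6 * closed_form(n - 1) - 12 * closed_form(n - 2) + 8 * closed_form(n - 3)
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs))
print("general solution satisfies the recurrence")
```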
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9083511233329773, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/78129/a-brouwer-fixed-point-theorem-on-finite-sets/78137
## A Brouwer fixed point theorem on finite sets

I have casually almost (i.e. up to details that should work) proved the following discrete version of Brouwer's fixed point theorem. I should have obtained this result as a corollary of quite complicated things and I do not understand if the result is trivial and can be easily proved directly or it deserves to be stressed. I would like to hear your opinion about that.

Let $n\geq1$ be a fixed integer and denote by $X=[-n,n]^2\subseteq\mathbb Z^2$. Given $(x,y)\in X$ I denote by $A(x,y)$ the set formed by the following at most five points: $(x-1,y),(x,y),(x+1,y),(x,y-1),(x,y+1)$. At most means that if one of those points does not belong to $X$, I will not consider it. The result would be: let $f:X\rightarrow X$ such that for all $(x,y)\in X$ one has $f(A(x,y))\subseteq A(f(x,y))$. Then $f$ has a fixed point. Is that trivial? Thank you in advance, Valerio

- 1 You probably mean to universally quantify that condition: $\forall (x, y) \in X^2 (f(A(x, y)) \subseteq A(f(x, y)))$. – Todd Trimble Oct 14 2011 at 13:20 1 When writing $[−n,n]$, do you mean $\{-n,-n+1,\dots,n\}$ or the real interval? – Emil Jeřábek Oct 14 2011 at 13:23 1 Yes, thanks. I've just corrected it. – Valerio Capraro Oct 14 2011 at 13:24 Did you check $n=1$? – Andreas Thom Oct 14 2011 at 14:20 1 Actually, I am quite convinced that I have misunderstood my own property! The right formulation of my application is a little different. If you are interested, I have opened another topic: mathoverflow.net/questions/78147/… – Valerio Capraro Oct 14 2011 at 16:42

## 2 Answers

I believe the following is a counter-example: $f: \lbrace -1,0,1\rbrace^2 \to \lbrace -1,0,1\rbrace^2$ $\forall x:$ $f(-1,x) = (1,x)$ $f(0,x)=(1,x)$ $f(1,x)=(0,x)$

- Well, I agree that this is a counter-example and I have also understood where my mistake was (the devil is in the details!). I am trying to add a simple condition in order to make the details work. – Valerio Capraro Oct 14 2011 at 14:16 1 I am wondering ... Maybe it is still possible to get a version of Brouwer's fixed point theorem. The condition $f(A(x,y))\subseteq A(f(x,y))$ looks a lot like continuity. If you equip $X$ with the Euclidean metric, it just says that whenever $d(x,y) \leq 1$ it follows that $d(f(x),f(y)) \leq 1$. My intuition suggests that it should still be possible to prove that there is an almost-fixed point $x_0$ in the sense that $d(f(x_0),x_0) \leq 1$. This certainly works for the one-dimensional analog of the problem ($X = \lbrace 1,2, ..., n \rbrace$). – Dejan Govc Oct 15 2011 at 16:50 This is a nice idea! Indeed I have thought about that condition as a discrete version of continuity. I will think about that. – Valerio Capraro Oct 15 2011 at 20:57 I have thought about it some more and $d(f(x_0),x_0)\leq 1$ won't always work, but I am quite sure you can prove that there always exists a point $x_0$ such that $d(f(x_0),x_0)\leq C$ where $C$ is a constant depending only on the dimension $d$. This is because you can extend your function by piecewise linear interpolation to a continuous function on $[-n,n]^d \subseteq \mathbb{R}^d$ and then use the original Brouwer fixed point theorem obtaining a fixed point. Now choose the closest lattice point. This point has the property we seek (this follows from your property and piecewise linearity). – Dejan Govc Oct 16 2011 at 1:58 It is a beautiful problem, I must say.
– Dejan Govc Oct 16 2011 at 1:59

Wouldn't the Tietze extension theorem (with range $X$) show that any permutation of $A$ extends to a function $f$?

- I don't understand: in that case the extension has a fixed point but it might not belong to $X$. Indeed it is not true that any permutation of $X$ has a fixed point. Am I missing something? – Valerio Capraro Oct 14 2011 at 13:23 Yes, that was my point. As originally stated, I don't believe what you are hoping for is true. But I see that you have edited your question in the meantime, and I'm not sure what you are asking any more. – Carl Offner Oct 14 2011 at 13:25
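As a quick aside (my own addition, not from the thread), the counter-example above is easy to verify by brute force:

```python
# Verify the counter-example on X = {-1,0,1}^2: f(-1,y)=(1,y), f(0,y)=(1,y), f(1,y)=(0,y)
# satisfies f(A(p)) ⊆ A(f(p)) for every p in X, yet has no fixed point.
from itertools import product

X = set(product([-1, 0, 1], repeat=2))

def A(p):
    x, y = p
    candidates = [(x, y), (x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return {q for q in candidates if q in X}

def f(p):
    x, y = p
    return ({-1: 1, 0: 1, 1: 0}[x], y)

assert all({f(q) for q in A(p)} <= A(f(p)) for p in X)  # the hypothesis of the question
assert all(f(p) != p for p in X)                        # ...but no fixed point
print("counter-example verified")
```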
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432316422462463, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/49986/what-are-the-consequences-of-relativistic-angular-velocities/51846
# What are the consequences of relativistic angular velocities?

Suppose I take a rod of some radius $r$ and length $L$ and spin this rod with angular velocity $\omega$. How would the geometry of the rod appear to an observer as its speed converges to $c$? What are the consequences of this for, say, electrons in a solenoid?

- – user1664196 Jan 12 at 3:17 @user1664196 Yeah, I'd be happy with whatever the limiting speed is for angular velocity (there must be one, right?). – Kevin Jan 12 at 3:24 @Raindrop Right, I noticed the paper you referenced, but it doesn't help with the specific questions I have: How would the geometry of the rod appear to an observer as its speed converges to c? What are the consequences of this for, say, electrons in a solenoid? – Kevin Jan 12 at 3:50 Check this research out: Relativistic Hall Effect (prl.aps.org/pdf/PRL/v108/i12/e120403): "We examine manifestations of the relativistic Hall effect in quantum vortices and mechanical flywheels and also discuss various fundamental aspects of this phenomenon," quoted from the abstract. – Raindrop Jan 12 at 6:54 Relativistic contraction and related effects in noninertial frames xxx.lanl.gov/abs/gr-qc/0307011 en.wikipedia.org/wiki/Ehrenfest_paradox – Raindrop Jan 12 at 7:20

## 1 Answer

Based on "Relativistic description of a rotating disk", O. Gron, Am. J. Phys. 43, 869 (1975), DOI:10.1119/1.9969 [I got the link from the Wikipedia references on the Ehrenfest article], I think that for a rod instead of a disk: an observer S ("momentarily at rest relative to the disk") "measures an elliptical shape for the" path of the tip of the rod, "and finds that each point of it describes a cycloid-like path, while its center moves along a straight line with constant velocity." S' ("an accelerated observer ... rotating with the" rod) observes a rod at rest, while the surroundings are rotating. He measures a circular shape for the path of the tip of the rod. I have no idea how electrons would behave in a solenoid coiled around a rod spun up towards the limit $\omega r \rightarrow c$. -
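To get a feel for the numbers involved (my own aside, not from the answer), the tip of the rod moves at $v = \omega r$, so the kinematic limit is $\omega r \to c$ and relativistic effects are governed by the Lorentz factor $\gamma = 1/\sqrt{1 - v^2/c^2}$:

```python
# Tip speed and Lorentz factor for a rod of radius r spun at angular velocity omega.
# The radius value is assumed purely for illustration.
import math

c = 299_792_458.0   # speed of light, m/s
r = 1.0             # rod radius in metres (assumed)

for fraction in (0.1, 0.5, 0.9, 0.99):
    v = fraction * c                              # tip speed
    omega = v / r                                 # angular velocity, rad/s
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(f"omega = {omega:.3e} rad/s  v/c = {fraction:.2f}  gamma = {gamma:.2f}")
```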
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.869303286075592, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/47519/when-you-apply-the-spin-operator-what-exactly-is-does-it-tell-you
# When you apply the spin operator, what exactly does it tell you?

The example I'm trying to understand is: $\hat{S}_{x} \begin{pmatrix} \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} \end{pmatrix} = 1/2 \begin{pmatrix} \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} \end{pmatrix}$ My interpretation of this is that the vector shows you the probabilities of a particle being spin up or spin down if you square them. And I've been told that $\hat{S}_{x}$ gives you the spin as an eigenvalue, but how? Since it's 50:50 of getting -1/2 and 1/2, $\hat{S}_{x}$ has only given you one of them. Is it that $\hat{S}_{x}$ only measures the magnitude of spin in the x direction?

- You operated on an eigenvector of the spin operator. Generally, wave functions will be linear combinations of these eigenvectors, and their coefficients represent how much of each eigenvector is in the total state. – Alec S Dec 24 '12 at 17:32

## 2 Answers

Your equation says that your "vector" is an eigenvector of your operator, i.e., that the x-projection of the spin is certain and equal to 1/2. It also says that the probabilities to find the two z-projections are each equal to 1/2. This "vector" is not an eigenvector $\begin{pmatrix} 0\\ 1 \end{pmatrix}$ or $\begin{pmatrix} 1\\ 0 \end{pmatrix}$ of the spin z-projection $\hat{S}_z=\frac{\hbar}{2}\sigma_z$, but is a superposition of them; that is why it is an eigenvector of $\hat{S}_x=\frac{\hbar}{2}\sigma_x$, an operator that does not commute with $\hat S_z$.

- Thanks, it took me a while to realize everything is based on the z direction. – 9k9 Dec 26 '12 at 19:37

$$\frac{1}{\sqrt{2}} \begin{pmatrix} 1\\ 1 \end{pmatrix}$$ is the eigenvector of $\hat{S}_x=\frac{\hbar}{2}\sigma_x$ with eigenvalue $+\frac{\hbar}{2}$. $$\frac{1}{\sqrt{2}} \begin{pmatrix} 1\\ -1 \end{pmatrix}$$ is the eigenvector of $\hat{S}_x=\frac{\hbar}{2}\sigma_x$ with eigenvalue $-\frac{\hbar}{2}$. So for your vector it isn't a 50/50 probability of getting + or -, it's 100% probability of getting +.

- But I thought the way in which you arrive at the "vector" is by computing the probabilities for each spin state? – 9k9 Dec 26 '12 at 13:19
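A small numerical illustration of both answers (my own addition, in units where $\hbar = 1$):

```python
# The state (1,1)/sqrt(2) is an eigenvector of S_x = sigma_x/2 with eigenvalue +1/2
# (a definite S_x value), while its components in the S_z basis still give 50/50
# probabilities for S_z = +1/2 and S_z = -1/2.
import numpy as np

Sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([1.0, 1.0]) / np.sqrt(2)

print(Sx @ psi)           # equals 0.5 * psi  ->  eigenvalue +1/2
print(np.abs(psi) ** 2)   # [0.5 0.5]         ->  50/50 outcomes for an S_z measurement
```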
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9548482894897461, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/222690/integral-solution-for-x-y-z-10
# Integral solution for $|x | + | y | + | z | = 10$

How can I find the number of integral solutions to the equation $|x | + | y | + | z | = 10$? I am using the formula: the number of integral solutions for $|x| +|y| +|z| = p$ is $4p^2 +2$, so the answer is 402. But I want to know how we can find it without using the formula. Any suggestions?

- 1 Or rather, how do we derive the formula? – user43081 Oct 28 '12 at 12:30 Yes, sir... any help will be appreciated... – ram Oct 28 '12 at 12:36

## 6 Answers

The number $n_1(k)$ of solutions of $$|x|=k$$ is quite obviously given by $$n_1(k)=\begin{cases}2&\text{if }k>0\\1&\text{if }k=0\\0&\text{if }k<0.\end{cases}$$ Then the number of solutions $n_2(k)$ of $$|x|+|y|=k$$ can be obtained as $$n_2(k)=\sum_{i\in\mathbb Z}n_1(i)\cdot n_1(k-i)=\begin{cases}2+(k-1)\cdot 4+2=4k& \text{if }k>0\\1&\text{if }k=0\\0&\text{if }k<0.\end{cases}$$ Finally, the number $n_3(k)$ of solutions of $$|x|+|y|+|z|=k$$ is (using $\sum_{i=1}^n i=\frac{n(n+1)}2$) $$n_3(k)=\sum_{i\in\mathbb Z}n_2(i)\cdot n_1(k-i)\\=\begin{cases}2+2\sum_{i=1}^{k-1}4i + 4k=2+4k(k-1)+4k=4k^2+2& \text{if }k>0\\1&\text{if }k=0\\0&\text{if }k<0.\end{cases}$$ In general, the number of solutions of $$|x_1|+\cdots +|x_m|=k$$ is given by $$n_m(k)=\begin{cases}P_m(k)&\text{if }k>0\\1&\text{if }k=0\\0&\text{if }k<0,\end{cases}$$ where $P_m$ is some polynomial of degree $m-1$. The $P_m$ can be obtained recursively, e.g. via the relations $$P_m(X+1)-P_m(X)= P_{m-1}(X)+P_{m-1}(X+1)$$ $$P_m(1)=2m$$

- Nice solution. It is easy to think that there is a sort of convolution here, but to write it down in a systematic way is really good. – Seyhmus Güngören Oct 28 '12 at 13:04 Very nicely done indeed! – LieX Oct 28 '12 at 13:09 1 @SeyhmusGüngören What does convolution mean in this context? (I'm new to this concept.) – bodacydo Oct 30 '12 at 17:57 – Seyhmus Güngören Oct 30 '12 at 20:46

I give a geometric derivation: we want to count all integral points lying on the surface $|x|+|y|+|z|=P$. In 3D space, in every octant, this surface has a triangular shape. For example, if $P=4$, then the shape in the first octant would be $$\begin{array}{ccccccccc} & & & &Q(0,0,4)& & & &\\ & & &D(0,1,3)& &D(1,0,3)& & &\\ & &D(0,2,2)& &S(1,1,2)& &D(2,0,2)& &\\ &D(0,3,1)& &S(1,2,1)& &S(2,1,1)& &D(3,0,1)&\\ Q(0,4,0)& &D(1,3,0)& &D(2,2,0)& &D(3,1,0)& &Q(4,0,0)\\ \end{array}$$ where $Q$ denotes the points that are shared by 4 octants, $D$ denotes the points that are shared by 2 octants and $S$ denotes the points that belong only to this octant. So over all octants, the number of $S$ points is $$n_S=8\cdot\frac{(P-1)(P-2)}{2}=4P^2-12P+8$$ the number of $D$ points is $$n_D=8\cdot\frac{3(P-1)}{2}=12P-12$$ and the number of $Q$ points is $$n_Q=8\cdot\frac{3}{4}=6$$ So the total number would be $$n=n_S+n_D+n_Q=4P^2+2$$

- I'll give an argument for computing the number of integer solutions of $\sum_{i=1}^n|x_i|=k$ for any $n,k\in\Bbb N$, then specialize it for $n=3$. First, the number of solutions of $\sum_{i=1}^nx_i=k$ with all $x_i\geq0$ is $\binom{k+n-1}{n-1}$: first draw $k+n-1$ vertical strokes, then for $n-1$ of them cross them with a horizontal stroke, turning them into $+$ signs, to obtain a decomposition $x_1+x_2+\cdots+x_n$ of $k$ with each of the $x_i$ in unary notation (possibly with no strokes at all, representing $0$). This counts the purely non-negative solutions to the original problem.
To include solutions where $x_i<0$ for some $i$, we record the subset of the positions $i$ where this happens, and then replace each such negative $x_i$ by $-1-x_i$, making it non-negative. The result is a solution to the non-negative problem above, but with $k$ diminished by the number of originally negative $x_i$ (because of the term $-1$ used for each one). Thus we get the number $$\sum_{j=0}^n\binom nj\binom{k+n-1-j}{n-1}$$ of solutions to $\sum_{i=1}^n|x_i|=k$, where the binomial coefficient on the left counts the subsets of negative entries, and the one on the right counts the number of solutions of the corresponding non-negative problem. This summation does not appear to be of the type that can be reduced to a single binomial coefficient (as it would if the sum were alternating). For $n=3$ one gets $\sum_{j=0}^3\binom3j\binom{k+2-j}2$, which gives concretely $$\frac{(k+2)(k+1)+3(k+1)k+3k(k-1)+(k-1)(k-2)}2 =4k^2+2.$$

- This is very similar to Marc's answer, but since I was deriving it when he posted, I decided to keep going and post it anyway. First count the number of solutions to $$x+y+z=k, \quad x,y,z\geq 1.$$ By the usual bars and stars argument, this is ${k-1\choose 2}$. Now by symmetry, each of these solutions in fact gives $2^3$ solutions, allowing a $\pm$ sign on each variable. Now count the number of solutions to $$x+y+z=k,$$ where exactly one of the variables is $0$. Using similar techniques and symmetry arguments, one gets $${3\choose 1}{k-1\choose 1}2^2.$$ Finally the number of solutions to $$x+y+z=k,$$ where exactly $2$ are $0$ is given by $6$, for you have $3$ choices for the non-zero variable and $2$ choices for its sign. This gives $$\sum_{i=0}^{2}{3\choose i}{k-1\choose 2-i}2^{3-i}=4k^2+2$$ This generalizes to $$\sum_{i=0}^{n-1}{n\choose i}{k-1\choose n-1-i}2^{n-i};$$

- A sketch of a derivation goes like this (I did it on paper but perhaps missed some pluses or minuses; also, this approach is not a generic one). Given the problem, we know the bound on each variable is $p$. Now if $x = p$ then $y$ and $z$ can only be $0$. If $x$ is $p-1$, then $y$ and $z$ take values in $\{0,1\}$ up to sign, which makes 4 cases (not 8, since $+0$ is the same as $-0$). Similarly for $x=p-2$ we have $y$ and $z$ in $\{0,1,2\}$ up to sign, making 8 cases. You sum this all up until you reach $x=1$, then double it to account for negative values of $x$, and finally add the $x = 0$ case; you would get the formula. :) Not an elegant way at all, though there is a clear trend in these additions, but it would work.

- I think it must be 8 times the number of integral solutions to the equation $$x+y+z=10$$ So the result is: $$8C_{12}^2=528$$ -
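For readers who want to confirm the count, here is a quick brute-force check (my own addition, not from any of the answers) of the formula $4p^2+2$, and in particular of the value 402 for $p=10$:

```python
# Count integer triples with |x| + |y| + |z| = p by brute force and compare with 4p^2 + 2.
def count_solutions(p):
    return sum(1
               for x in range(-p, p + 1)
               for y in range(-p, p + 1)
               for z in range(-p, p + 1)
               if abs(x) + abs(y) + abs(z) == p)

for p in range(1, 11):
    assert count_solutions(p) == 4 * p * p + 2
print(count_solutions(10))  # 402
```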
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 25, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9378214478492737, "perplexity_flag": "head"}
http://mathoverflow.net/questions/69914?sort=votes
## Program transformation as alternative for Hoare logic or temporal logic

When trying to prove something about a program, the known techniques are Hoare logic and temporal logics. An alternative is to transform a program into a mathematical (logical) expression. So, rather than using mathematics to prove some properties of the program, the program itself is a piece of mathematics. Loops become transitive reflexive closures.

For example, suppose one has a program that calculates a Fibonacci number. If the program keeps the last two numbers of the Fibonacci sequence in variables, then this can be converted by taking the transitive reflexive closure of the relation $P$ that is true (and only true) for the following situation: $$P((x,\space y, \space z), \space (x+1, \space z, \space y+z))$$ In the original program, the right value is chosen within the loop. In the transitive reflexive closure, the right value must be selected outside the closure (loop). The transformed program is more like a non-deterministic program.

The transformation of a program into a logical expression can be done automatically. Although this is not rocket science, I cannot find any reference for this approach. I am busy writing an article of which this is a part (it is not the main subject), but I want to refer to the right articles and see if there is interesting material. Does someone have interesting references? Many thanks, Lucas

Edit: Given the comment of Andreas, some clarification. The goal is to make formal reasoning about the program possible. So, transforming the program into a declarative language is insufficient, because the declarative language may not have means to draw conclusions about a program, although the language itself might be precisely defined. I was thinking of transforming the program into a FOL + PA expression. After such a transformation, formal (that is why I tagged with lo.logic) reasoning can be done about the program. To my knowledge, I haven't seen this approach (the methods are always more in the direction of Hoare and temporal logics), although it is not very complicated. In my question I didn't want to restrict to FOL + PA.

- 1 What you describe, expressing programs in a logical formalism, is extremely broad and the associated literature is immense. You're more likely to get relevant answers if you give a narrower description of what you're interested in. For example, how does it differ from just "write programs in declarative languages"? How does it differ from denotational semantics of programming languages? For that matter, how does it differ from descriptive complexity theory? Each of these is a huge area, and you surely don't want references for all of them (and more). – Andreas Blass Jul 10 2011 at 1:32 1 As Andreas points out, the known techniques are hardly exhausted by Hoare and temporal logic! However, given the thrust of your question, I would suggest looking at "refinement calculus" (see the book of Back and von Wright: amazon.com/dp/0387984178). The idea in it is to view both programs and specifications as predicate transformers, with programs being a realizable subclass of the predicate transformers. Back also maintains a bibliography at users.abo.fi/backrj/… – Neel Krishnaswami Jul 10 2011 at 8:02 @Andreas, I edited the question. If you wish, you can fully narrow it down to transforming a program into a FOL + PA expression.
Using Hoare logic or transforming it in a FOL + PA expression, is quite a different approach (although at the end it might not be that different). The first approach you find in the standard books and in the Wikipedia, the second not. – Lucas K. Jul 10 2011 at 11:19 ## 2 Answers It appears to me that the gist of your suggestion is to translate a program into a relation and reason about the transitive closure of that relation. It is orthogonal that this relation is definable in first order arithmetic. The idea of translating a program into a relation is rather old and I doubt there is a unique reference for it. I will share what I know, but in each case, there are surely older papers. The first paper below suggests modelling programs as transition systems and reasoning about them. Plotkin's paper provides a way to inductively derive a transition system from program text (though the idea is much older, I'm sure). 1. Robert Keller, 1976, Formal verification of parallel programs. 2. Gordon Plotkin, 1981. Structural Operational Semantics. The transition system is essentially the relation you describe. The transitive closure is a fixed point over this relation. It is one of several objects that can be defined by fixed points. Reasoning about properties of programs using fixed points is very old too. 1. David M. R. Park, 1969, Fixpoint induction and proofs of program properties. 2. Lawrence Flon and Norihisa Suzuki, 1975, Consistent and Complete Proof Rules for the Total Correctness of Parallel Programs. 3. Patrick Cousot and Radhia Cousot, 1977, Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. Finally, there is a precise mathematical sense in which the fixed point (or transitive closure) approach is not significantly different from Floyd/Hoare logic. 1. Edmund M. Clarke Jr., 1977, Program Invariants as Fixed Points. To quote from the abstract of the paper: We argue that soundness and relative completeness theorems for Floyd-Hoare Axiom Systems ([3], [5], [18]) are really fixedpoint theorems. We give a characterization of program invariants as fixedpoints of functionals which may be obtained in a natural manner from the text of a program. We show that within the framework of this fixedpoint theory, soundness and relative completeness results have a particularly simple interpretation. Completeness of a Floyd-Hoare Axiom System is equivalent to the existence of a fixedpoint for an appropriate functional, and soundness follows from the maximality of this fixedpoint. Reasoning about programs by computing fixed points is extremely standard in practice. Rather than relations, we tend to deal with a transformer defined by a relation, such as a predicate or state transformer. If you are genuinely committed to reasoning over relations in a logic, you will require transitive closure logics because properties like graph reachability are not first order definable. I can point to this recent paper, but you will have to dig around for older ones. 1. Neil Immerman, Alexander Rabinovich, Thomas W. Reps, Mooly Sagiv, and Greta Yorsh, 2004, The Boundary Between Decidability and Undecidability for Transitive Closure Logics. Edit: Adding a link. You might want to try the following verifiers that use a combination of automated reasoning and fixed point techniques. Though they may fail on harder examples, they can still discover useful invariants and errors. - Thanks for the answer. 
The idea that I have is to make a FOL extended with transitive closure, with some axioms for it. The advantage is that it stays rather simple. You don't need an induction axiom anymore, because that can be derived from the axioms of the closure (similar to the Axiom of Infinity). By the way, if you represent graphs as (Godel) numbers, then you can define reachability in FOL + PA. Of course, the graphs must be finite in such a case. But you are right that you can't define reachability if the graph is represented as a predicate. – Lucas K. Jul 29 2011 at 21:33 I'm glad if you found it useful. One should distinguish between a reasoning technique that is sufficient in purely logical terms, one that humans can use, and one that machines can use. I personally think using Goedel numbers is sufficient only in purely logical terms. Various techniques work for humans and machines. Fixed points in particular fit well to automated techniques. The axiom of closure is a special instance of fixed point induction and in that form is widely used in practice. – Vijay D Aug 1 2011 at 0:07 Vijay, thanks for your comments. But I don't agree with your second sentence. In my opinion, modern logic should be automatically verified. Otherwise, I call it just mathematics (nothing wrong with that). There are still plenty of possibilities to make automatically verifiable logic better suited for humans. Goedel numbers are just the bits in the computer. You don't need to confront humans with them. – Lucas K. Aug 1 2011 at 22:21 Hello Lucas. Just to clarify, I'm talking about machine-generated proofs of program correctness, as opposed to machine-checked proofs. I agree that humans need not confront the gory details of a machine-generated proof. Nonetheless, we don't usually want to know only that a program is correct; we often want some information about why, such as invariants. Also, computational complexity and ease of implementation are important concerns in developing a program verifier. I do not know about the ease of implementing a Goedel-numbering-based verifier. Fixed-point and graph-based methods work well for simple cases. – Vijay D Aug 1 2011 at 23:04 Ah! I see now the difference. Thanks, I learned something again. The disadvantage of rewriting it as a fixed point is that you need to use higher order logic. The functor is a second order logical element, while the method I intended in my question stays within first order logic. The loops can be converted to FOL + PA, while you can't do that with fixed point induction. – Lucas K. Aug 1 2011 at 23:04

Hoare logic and temporal logic might be "the only known techniques for proving programs correct" to you, but there are certainly others! For example, and this list is not exhaustive:

• equational reasoning about fixpoints; this works in languages like Haskell

• properties of programs can be proved via denotational semantics, which in itself is a vast area including domain theory and game semantics, to name just two.
• for certain kinds of programs, for example for parametrically polymorphic ones, there are techniques that go under the name "relational parametricity"

• you can use various logical interpretations to get correctness of programs:

• a program extracted as a realizer via the realizability interpretation of logic automatically satisfies a certain specification

• with tools such as Coq you can use type theory to write programs as proofs, or construct programs and prove them correct all at once

• there are other ways of extracting programs from logical statements, one family of which are variants of Gödel's Dialectica interpretation that extract programs from classical logic.

Now, regarding your specific question. I think you should look at realizability, type theory, and extraction of programs from proofs. All of these are "logical" methods for developing correct programs, or proving them correct. Some randomly chosen starting points:

• start with something fun and surprising, perhaps Paulo Oliva's tutorial on Programs from classical proofs via Gödel's dialectica interpretation

• an accessible paper on the realizability interpretation which uses logical methods in computable analysis might be Ulrich Berger's Realisability for Induction and Coinduction with Applications to Constructive Analysis

• if you want to use computers to actually show correctness of programs, you could learn Coq and then proceed to Ynot (Hoare logic on steroids) or go straight to Adam Chlipala's Certified Programming with Dependent Types.

• cool people use Agda instead of Coq.

• if you are a first-order logic sort of person, you might find Minlog more palatable than Coq and Agda, as it does not throw type theory in your face.

See you in two years.

- Thanks for the answer. But the other answer was more the answer I was looking for. Basically, I am looking for methods that are simpler and more accessible for people who are not experts in logic. For those people I think it is interesting to understand the relation between loops and closures: where the loop is more about programming, the closure is more mathematical. So, this is the bridge between the two. Coq is too complex for non-logicians. I will take a look at Minlog. – Lucas K. Jul 29 2011 at 21:27 Andrej, I am reading the tutorial of Minlog. On page 12, they do add-global-assumption. These kinds of things you want to avoid, because this way you can introduce inconsistencies in your system. The way I define the predicate in my question is 'safe', because it is a single definition. Taking the transitive reflexive closure is also 'safe'. So, I define the Fibonacci numbers in a safe way, while the tutorial in Minlog doesn't. – Lucas K. Jul 29 2011 at 21:47 I was just commenting on your sweeping statement that "the known techniques are Hoare logic and temporal logics". There are many others. If you are going to educate people about loops, you should show them fixed points and explain how loops are a form of fixed point equations. That in my opinion is much more illuminating, and it is just standard math (order theory). – Andrej Bauer Jul 30 2011 at 10:26 Sort of like this: cstheory.stackexchange.com/questions/7029/… – Andrej Bauer Jul 30 2011 at 10:27 Andreas, I am not familiar with Coq, but I know a little ZFC and HOL Light. As far as I know, if one has to transform a computer program with a loop into a logical expression in one of those systems, then you end up making a closure operator somewhere.
So, when explaining how programs and systems like ZFC and HOL Light are related, I think you have to start there. From there, one can go to fixed point theorems. I am an outsider, because I do not work at a university. But graduates who start to work in my company are not capable of defining a mathematical problem in a formal system. – Lucas K. Jul 31 2011 at 20:10
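To make the loop-to-closure idea from the original question concrete, here is a minimal sketch of my own (not from the thread): the Fibonacci loop body becomes the step relation $P((x, y, z), (x+1, z, y+z))$, and the reflexive transitive closure of $P$, restricted to the initial state $(0, 0, 1)$, contains exactly the states $(n, \mathrm{fib}(n), \mathrm{fib}(n+1))$.

```python
# The loop body as a step relation, and a finite fragment of its reflexive
# transitive closure starting from (0, 0, 1).
def step(state):
    x, y, z = state
    return (x + 1, z, y + z)

def closure_from(state, max_steps):
    reachable = {state}              # reflexive part
    for _ in range(max_steps):       # transitive part, up to a finite bound
        state = step(state)
        reachable.add(state)
    return reachable

states = closure_from((0, 0, 1), 10)
fib = {x: y for x, y, _ in states}   # read fib(n) off the reachable states
print([fib[n] for n in range(11)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```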
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354668855667114, "perplexity_flag": "middle"}
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevLett.102.031801
Synopsis: Deciphering a bump in the spectrum

Search for High-Mass $e^+e^-$ Resonances in $p\bar{p}$ Collisions at $\sqrt{s}=1.96$ TeV
T. Aaltonen et al. (CDF Collaboration)
Published January 23, 2009

An important method for searching for new particles is to look for resonances in the energy spectrum of pairs of charged leptons and their antiparticles emitted from high-energy collisions. Such a resonance can be an indication that a particle has decayed into a lepton-antilepton pair. Lepton resonances generally have a cleaner experimental signature and lower background than similar resonances for hadronic final states. This technique has provided evidence of such particles as the $J/\psi$, the $\Upsilon$, and the $Z$ boson. In a paper appearing in Physical Review Letters, the CDF (Collider Detector at Fermilab) Collaboration reports searches for resonances in electron-positron pairs created by high-energy proton-antiproton collisions. Their results place significant limits on particles hypothesized in several popular theories that go beyond the standard model, including a prediction for the graviton and various types of $Z'$ bosons (new gauge bosons that correspond to additional symmetries). They do find a tantalizing indication of a resonance at around $240~\mathrm{GeV}/c^2$: a “bump” or excess of events in that part of the spectrum. The statistical significance is somewhat weak and not strong enough to establish the existence of a particle, but it is strong enough to motivate further searches in this region to confirm or refute the existence of an as yet unknown particle. Although this experiment sees no evidence of the hypothetical particles it searched for, if a new particle is eventually confirmed, theorists will no doubt be scrambling to explain it. – Stanley Brown
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9144198894500732, "perplexity_flag": "middle"}
http://roy-t.nl/index.php/tag/math/
## Thesis: upper sets in partially ordered sets

Today I finally finished my thesis. Its topic was to count the number of upper sets in partially ordered sets, which is quite a hard problem since it is in the complexity class #P-Complete (that's the class of counting the solutions of NP-complete decision problems). All in all I'm quite pleased with the result. Although the upper bound is still $O(2^{n})$ (can't quite get under that without solving P=NP and winning a million dollars), I've managed to find a solution that has a best case of $O(n)$ in both time and memory complexity. With a particularly large data set the brute-force algorithm took over 2 hours to complete while my algorithm took 0.025 seconds. Now that's what I'd call a speed gain (and yes, it was a real life data set, no tricks here). You can see this for yourself in the graph at the bottom of this post: the 'naïeve algoritme' (naive algorithm) is the brute force approach, the 'Familiealgoritme zonder uptrie' (family algorithm without uptrie) is the first version of my algorithm, and the 'Familiealgoritme met uptrie' (family algorithm with uptrie) is the final version of my algorithm. It uses a trie-like data structure to speed up searching and uses a lot less memory. Note that the graph has a logarithmic scale.

Unfortunately for most readers my thesis is in Dutch, but I've translated the abstract to English:

Counting the number of upper sets in partially ordered sets gives us a unique number that can be used to compare sets. This number is like the fingerprint of a set. Until now there isn't, to my knowledge, an efficient algorithm to calculate this number. This meant that the number had to be calculated either by hand or by using a brute force approach. Using a brute force approach quickly leads to problems, even for trivially small data sets, since it means that you have to generate $2^n$ subsets and check each of these subsets for upward closure. When calculating by hand you can use symmetry, but this menial process can take a lot of time and is error prone. In this thesis I present an algorithm that can calculate exactly, and usually quickly, the number of upper sets in a partially ordered set.

You can download my thesis here: Upper sets in partially ordered sets (Bsc thesis Roy Triesscheijn). As I've said before, the text is in Dutch, but the proofs and attached code should be readable enough. If you've got any questions feel free to ask below!

## Getting the Left, Forward and Back vectors from a View Matrix directly

I was wondering why I had to calculate the forward and left vectors for my arcball camera manually and why these results differed from ViewMatrix.Left and the like. So I asked at the XNA forums and almost immediately Jeremy Walsh pointed me in the right direction. He pointed out to me that view matrices are transposed from a normal matrix (meaning that the rows and columns are switched). To get the right vectors from the view matrix, we have to transpose it again to get the original matrix; however, this generates a lot of garbage, so he told me it's better to construct the vectors from the matrix cells themselves. And so I did, and I've packaged them into my neat PositionalMath class (which I might release some day). Here are the methods to get all the information you want from those view matrices, without having to calculate the forward vector (lookat – position) and crossing that.
```
// Because a ViewMatrix is an inverse transposed matrix, viewMatrix.Left is not the real left.
// These methods return the real .Left, .Right, .Up, .Down, .Forward, .Backward.
// The camera's basis vectors sit in the columns of the view matrix:
// column 1 = Right, column 2 = Up, column 3 = Backward.
// See: http://forums.xna.com/forums/t/48799.aspx
public static Vector3 ViewMatrixLeft(Matrix viewMatrix)
{
    return -ViewMatrixRight(viewMatrix);
}

public static Vector3 ViewMatrixRight(Matrix viewMatrix)
{
    // Right is the first column: (M11, M21, M31).
    return new Vector3(viewMatrix.M11, viewMatrix.M21, viewMatrix.M31);
}

public static Vector3 ViewMatrixUp(Matrix viewMatrix)
{
    // Up is the second column: (M12, M22, M32).
    return new Vector3(viewMatrix.M12, viewMatrix.M22, viewMatrix.M32);
}

public static Vector3 ViewMatrixDown(Matrix viewMatrix)
{
    return -ViewMatrixUp(viewMatrix);
}

public static Vector3 ViewMatrixForward(Matrix viewMatrix)
{
    return -ViewMatrixBackward(viewMatrix);
}

public static Vector3 ViewMatrixBackward(Matrix viewMatrix)
{
    // Backward is the third column: (M13, M23, M33).
    return new Vector3(viewMatrix.M13, viewMatrix.M23, viewMatrix.M33);
}
```
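As a quick cross-check of the "columns, not rows" point (my own sketch, not part of the post, using numpy and the row-vector convention XNA uses): build a camera world matrix whose first three rows are right/up/backward, invert it, and the same vectors reappear as the columns of the view matrix.

```python
import numpy as np

right    = np.array([1.0, 0.0, 0.0])
up       = np.array([0.0, 0.8, 0.6])
backward = np.cross(right, up)        # completes an orthonormal basis
position = np.array([3.0, 4.0, 5.0])

world = np.eye(4)
world[0, :3], world[1, :3], world[2, :3], world[3, :3] = right, up, backward, position
view = np.linalg.inv(world)           # the view matrix is the inverse of the camera's world matrix

print(np.allclose(view[:3, 0], right))     # True: right    = (M11, M21, M31)
print(np.allclose(view[:3, 1], up))        # True: up       = (M12, M22, M32)
print(np.allclose(view[:3, 2], backward))  # True: backward = (M13, M23, M33)
```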
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364096522331238, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/283222/sketching-complex-numbers-in-coordinate-system
# Sketching complex numbers in the coordinate system

Hi guys, I want to sketch these sets of complex numbers in the coordinate system; I hope you can help me.

$a.\{z\in \mathbb{C}||z-1|+|z+1|<4\}$ $b.\{z\in \mathbb{C}| \mathrm{Im}((1-i)z)=0\}$ $c.\{z\in \mathbb{C}|1<|z+3i|<2\}$

Thanks in advance :)

- Note that $|z-a|$ is the distance of $z$ from $a$. Use this to translate a and c into geometry questions. – Maesumi Jan 21 at 3:21

## 1 Answer

Hint: For (b), $$\text{Im}((1-i)(x+iy))=\text{Im}((x+y)+i(y-x))=(y-x)$$ so what is the set of points in $\mathbb R^2$ for which $$y-x=0$$ For (c): If you set $x+iy=z$ then $|z+3i|=2\equiv|x+i(y+3)|=2~\equiv~\sqrt{x^2+(y+3)^2}=2$ shows a circle in $\mathbb R^2$ centered at $(0,-3)$ with radius $2$. The same is true for $|z+3i|=1$. Now what is $1<|z+3i|<2$? Isn't it the region between the two circles, not containing their boundaries?

- 1 Nice graphic, again! + 1! – amWhy Feb 11 at 2:05
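If you want to sketch these sets with a computer rather than by hand, here is a rough numerical approach of my own (assuming numpy and matplotlib are available): evaluate each condition on a grid and shade the points that satisfy it.

```python
import matplotlib.pyplot as plt
import numpy as np

x, y = np.meshgrid(np.linspace(-6, 6, 600), np.linspace(-6, 6, 600))
z = x + 1j * y

sets = {
    "a: |z-1|+|z+1| < 4 (open ellipse)": np.abs(z - 1) + np.abs(z + 1) < 4,
    "b: Im((1-i)z) = 0 (line y = x)": np.abs(((1 - 1j) * z).imag) < 0.05,
    "c: 1 < |z+3i| < 2 (open annulus)": (np.abs(z + 3j) > 1) & (np.abs(z + 3j) < 2),
}

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (title, mask) in zip(axes, sets.items()):
    ax.imshow(mask, extent=[-6, 6, -6, 6], origin="lower", cmap="Blues")
    ax.set_title(title, fontsize=9)
    ax.set_aspect("equal")
plt.show()
```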
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.893657922744751, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Euclidean_vector
# Euclidean vector In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric[1] or spatial vector,[2] or—as here—simply a vector) is a geometric object that has magnitude (or length) and direction and can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B,[3] and denoted by $\overrightarrow{AB}.$ Vectors play an important role in physics: velocity and acceleration of a moving object and forces acting on it are all described by vectors. Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances (except, for example, position or displacement), their magnitude and direction can be still represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. It is important to distinguish Euclidean vectors from the more general concept in linear algebra of vectors as elements of a vector space. General vectors in this sense are fixed-size, ordered collections of items as in the case of Euclidean vectors, but the individual items may not be real numbers, and the normal Euclidean concepts of length, distance and angle may not be applicable. (A vector space with a definition of these concepts is called an inner product space.) In turn, both of these definitions of vector should be distinguished from the statistical concept of a random vector. The individual items in a random vector are individual real-valued random variables, and are often manipulated using the same sort of mathematical vector and matrix operations that apply to the other types of vectors, but otherwise usually behave more like collections of individual values. Concepts of length, distance and angle do not normally apply to these vectors, either; rather, what links the values together is the potential correlations among them. The word "vector" originates from the Latin vehere meaning "to carry". It was first used by 18th century astronomers investigating planet rotation around the Sun. [4] ## Overview In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction. It is formally defined as a directed line segment, or arrow, in a Euclidean space.[5] In pure mathematics, a vector is defined more generally as any element of a vector space. In this context, vectors are abstract entities which may or may not be characterized by a magnitude and a direction. This generalized definition implies that the above mentioned geometric entities are a special kind of vectors, as they are elements of a special kind of vector space called Euclidean space. This article is about vectors strictly defined as arrows in Euclidean space. When it becomes necessary to distinguish these special vectors from vectors as defined in pure mathematics, they are sometimes referred to as geometric, spatial, or Euclidean vectors. Being an arrow, a Euclidean vector possesses a definite initial point and terminal point. A vector with fixed initial and terminal point is called a bound vector. 
When only the magnitude and direction of the vector matter, then the particular initial point is of no importance, and the vector is called a free vector. Thus two arrows $\overrightarrow{AB}$ and $\overrightarrow{A'B'}$ in space represent the same free vector if they have the same magnitude and direction: that is, they are equivalent if the quadrilateral ABB′A′ is a parallelogram. If the Euclidean space is equipped with a choice of origin, then a free vector is equivalent to the bound vector of the same magnitude and direction whose initial point is the origin. The term vector also has generalizations to higher dimensions and to more formal approaches with much wider applications. ### Examples in one dimension Since the physicist's concept of force has a direction and a magnitude, it may be seen as a vector. As an example, consider a rightward force F of 15 newtons. If the positive axis is also directed rightward, then F is represented by the vector 15 N, and if positive points leftward, then the vector for F is −15 N. In either case, the magnitude of the vector is 15 N. Likewise, the vector representation of a displacement Δs of 4 meters to the right would be 4 m or −4 m, and its magnitude would be 4 m regardless. ### In physics and engineering Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has magnitude, has direction, and which adheres to the rules of vector addition. An example is velocity, the magnitude of which is speed. For example, the velocity 5 meters per second upward could be represented by the vector (0,5) (in 2 dimensions with the positive y axis as 'up'). Another quantity represented by a vector is force, since it has a magnitude and direction and follows the rules of vector addition. Vectors also describe many other physical quantities, such as linear displacement, displacement, linear acceleration, angular acceleration, linear momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field. Examples of quantities that have magnitude and direction but fail to follow the rules of vector addition: Angular displacement and electric current. Consequently, these are not vectors. ### In Cartesian space In the Cartesian coordinate system, a vector can be represented by identifying the coordinates of its initial and terminal point. For instance, the points A = (1,0,0) and B = (0,1,0) in space determine the free vector $\overrightarrow{AB}$ pointing from the point x=1 on the x-axis to the point y=1 on the y-axis. Typically in Cartesian coordinates, one considers primarily bound vectors. A bound vector is determined by the coordinates of the terminal point, its initial point always having the coordinates of the origin O = (0,0,0). Thus the bound vector represented by (1,0,0) is a vector of unit length pointing from the origin along the positive x-axis. The coordinate representation of vectors allows the algebraic features of vectors to be expressed in a convenient numerical fashion. For example, the sum of the vectors (1,2,3) and (−2,0,4) is the vector (1, 2, 3) + (−2, 0, 4) = (1 − 2, 2 + 0, 3 + 4) = (−1, 2, 7). ### Euclidean and affine vectors In the geometrical and physical settings, sometimes it is possible to associate, in a natural way, a length or magnitude and a direction to vectors. In turn, the notion of direction is strictly associated with the notion of an angle between two vectors. 
When the length of vectors is defined, it is possible to also define a dot product — a scalar-valued product of two vectors — which gives a convenient algebraic characterization of both length (the square root of the dot product of a vector by itself) and angle (a function of the dot product between any two non-zero vectors). In three dimensions, it is further possible to define a cross product which supplies an algebraic characterization of the area and orientation in space of the parallelogram defined by two vectors (used as sides of the parallelogram). However, it is not always possible or desirable to define the length of a vector in a natural way. This more general type of spatial vector is the subject of vector spaces (for bound vectors) and affine spaces (for free vectors). An important example is Minkowski space that is important to our understanding of special relativity, where there is a generalization of length that permits non-zero vectors to have zero length. Other physical examples come from thermodynamics, where many of the quantities of interest can be considered vectors in a space with no notion of length or angle.[6] ### Generalizations In physics, as well as mathematics, a vector is often identified with a tuple, or list of numbers, which depend on some auxiliary coordinate system or reference frame. When the coordinates are transformed, for example by rotation or stretching, then the components of the vector also transform. The vector itself has not changed, but the reference frame has, so the components of the vector (or measurements taken with respect to the reference frame) must change to compensate. The vector is called covariant or contravariant depending on how the transformation of the vector's components is related to the transformation of coordinates. In general, contravariant vectors are "regular vectors" with units of distance (such as a displacement) or distance times some other unit (such as velocity or acceleration); covariant vectors, on the other hand, have units of one-over-distance such as gradient. If you change units (a special case of a change of coordinates) from meters to milimeters, a scale factor of 1/1000, a displacement of 1 m becomes 1000 mm–a contravariant change in numerical value. In contrast, a gradient of 1 K/m becomes 0.001 K/mm–a covariant change in value. See covariance and contravariance of vectors. Tensors are another type of quantity that behave in this way; in fact a vector is a special type of tensor. In pure mathematics, a vector is any element of a vector space over some field and is often represented as a coordinate vector. The vectors described in this article are a very special case of this general definition because they are contravariant with respect to the ambient space. Contravariance captures the physical intuition behind the idea that a vector has "magnitude and direction". ## History The concept of vector, as we know it today, evolved gradually over a period of more than 200 years. About a dozen people made significant contributions.[7] The immediate predecessor of vectors were quaternions, devised by William Rowan Hamilton in 1843 as a generalization of complex numbers. Initially, his search was for a formalism to enable the analysis of three-dimensional space in the same way that complex numbers had enabled analysis of two-dimensional space, but he arrived at a four-dimensional system. 
In 1846 Hamilton divided his quaternions into the sum of real and imaginary parts that he respectively called "scalar" and "vector": The algebraically imaginary part, being geometrically constructed by a straight line, or radius vector, which has, in general, for each determined quaternion, a determined length and determined direction in space, may be called the vector part, or simply the vector of the quaternion.[8] Several other mathematicians developed vector-like systems around the same time as Hamilton including Giusto Bellavitis, Augustin Cauchy, Hermann Grassmann, August Möbius, Comte de Saint-Venant, and Matthew O’Brien. Grassmann's 1840 work Theorie der Ebbe und Flut (Theory of the Ebb and Flow) was the first system of spatial analysis similar to today's system and had ideas corresponding to the cross product, scalar product and vector differentiation. Grassmann's work was largely neglected until the 1870's.[7] Peter Guthrie Tait carried the quaternion standard after Hamilton. His 1867 Elementary Treatise of Quaternions included extensive treatment of the nabla or del operator ∇. In 1878 Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product. This approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis.[7] In 1901 Edwin Bidwell Wilson published Vector Analysis, adapted from Gibb's lectures, and banishing any mention of quaternions in the development of vector calculus. ## Representations Vectors are usually denoted in lowercase boldface, as a or lowercase italic boldface, as a. (Uppercase letters are typically used to represent matrices.) Other conventions include $\vec{a}$ or a, especially in handwriting. Alternatively, some use a tilde (~) or a wavy underline drawn beneath the symbol, e.g. $\underset{^\sim}a$, which is a convention for indicating boldface type. If the vector represents a directed distance or displacement from a point A to a point B (see figure), it can also be denoted as $\overrightarrow{AB}$ or AB. Especially in literature in German it was common to represent vectors with small fraktur letters as $\mathfrak{a}$. Vectors are usually shown in graphs or other diagrams as arrows (directed line segments), as illustrated in the figure. Here the point A is called the origin, tail, base, or initial point; point B is called the head, tip, endpoint, terminal point or final point. The length of the arrow is proportional to the vector's magnitude, while the direction in which the arrow points indicates the vector's direction. On a two-dimensional diagram, sometimes a vector perpendicular to the plane of the diagram is desired. These vectors are commonly shown as small circles. A circle with a dot at its centre (Unicode U+2299 ⊙) indicates a vector pointing out of the front of the diagram, toward the viewer. A circle with a cross inscribed in it (Unicode U+2297 ⊗) indicates a vector pointing into and behind the diagram. These can be thought of as viewing the tip of an arrow head on and viewing the vanes of an arrow from the back. 
A vector in the Cartesian plane, showing the position of a point A with coordinates (2,3). In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an n-dimensional Euclidean space can be represented as coordinate vectors in a Cartesian coordinate system. The endpoint of a vector can be identified with an ordered list of n real numbers (n-tuple). These numbers are the coordinates of the endpoint of the vector, with respect to a given Cartesian coordinate system, and are typically called the scalar components (or scalar projections) of the vector on the axes of the coordinate system. As an example in two dimensions (see figure), the vector from the origin O = (0,0) to the point A = (2,3) is simply written as $\mathbf{a} = (2,3).$ The notion that the tail of the vector coincides with the origin is implicit and easily understood. Thus, the more explicit notation $\overrightarrow{OA}$ is usually not deemed necessary and very rarely used. In three dimensional Euclidean space (or $\mathbb{R}^3$), vectors are identified with triples of scalar components: $\mathbf{a} = (a_1, a_2, a_3).$ also written $\mathbf{a} = (a_\text{x}, a_\text{y}, a_\text{z}).$ These numbers are often arranged into a column vector or row vector, particularly when dealing with matrices, as follows: $\mathbf{a} = \begin{bmatrix} a_1\\ a_2\\ a_3\\ \end{bmatrix}$ $\mathbf{a} = [ a_1\ a_2\ a_3 ].$ Another way to represent a vector in n-dimensions is to introduce the standard basis vectors. For instance, in three dimensions, there are three of them: ${\mathbf e}_1 = (1,0,0),\ {\mathbf e}_2 = (0,1,0),\ {\mathbf e}_3 = (0,0,1).$ These have the intuitive interpretation as vectors of unit length pointing up the x, y, and z axis of a Cartesian coordinate system, respectively. In terms of these, any vector a in $\mathbb{R}^3$ can be expressed in the form: $\mathbf{a} = (a_1,a_2,a_3) = a_1(1,0,0) + a_2(0,1,0) + a_3(0,0,1), \$ or $\mathbf{a} = \mathbf{a}_1 + \mathbf{a}_2 + \mathbf{a}_3 = a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3,$ where a1, a2, a3 are called the vector components (or vector projections) of a on the basis vectors or, equivalently, on the corresponding Cartesian axes x, y, and z (see figure), while a1, a2, a3 are the respective scalar components (or scalar projections). In introductory physics textbooks, the standard basis vectors are often instead denoted $\mathbf{i},\mathbf{j},\mathbf{k}$ (or $\mathbf{\hat{x}}, \mathbf{\hat{y}}, \mathbf{\hat{z}}$, in which the hat symbol ^ typically denotes unit vectors). In this case, the scalar and vector components are denoted respectively ax, ay, az, and ax, ay, az (note the difference in boldface). Thus, $\mathbf{a} = \mathbf{a}_\text{x} + \mathbf{a}_\text{y} + \mathbf{a}_\text{z} = a_\text{x}{\mathbf i} + a_\text{y}{\mathbf j} + a_\text{z}{\mathbf k}.$ The notation ei is compatible with the index notation and the summation convention commonly used in higher level mathematics, physics, and engineering. ### Decomposition For more details on this topic, see Basis (linear algebra). As explained above a vector is often described by a set of vector components that add up to form the given vector. Typically, these components are the projections of the vector on a set of mutually perpendicular reference axes (basis vectors). The vector is said to be decomposed or resolved with respect to that set. Illustration of tangential and normal components of a vector to a surface. 
However, the decomposition of a vector into components is not unique, because it depends on the choice of the axes on which the vector is projected. Moreover, the use of Cartesian unit vectors such as $\mathbf{\hat{x}}, \mathbf{\hat{y}}, \mathbf{\hat{z}}$ as a basis in which to represent a vector is not mandated. Vectors can also be expressed in terms of the unit vectors of a cylindrical coordinate system ($\boldsymbol{\hat{\rho}}, \boldsymbol{\hat{\phi}}, \mathbf{\hat{z}}$) or spherical coordinate system ($\mathbf{\hat{r}}, \boldsymbol{\hat{\theta}}, \boldsymbol{\hat{\phi}}$). The latter two choices are more convenient for solving problems which possess cylindrical or spherical symmetry respectively. The choice of a coordinate system doesn't affect the properties of a vector or its behaviour under transformations. A vector can be also decomposed with respect to "non-fixed" axes which change their orientation as a function of time or space. For example, a vector in three-dimensional space can be decomposed with respect to two axes, respectively normal, and tangent to a surface (see figure). Moreover, the radial and tangential components of a vector relate to the radius of rotation of an object. The former is parallel to the radius and the latter is orthogonal to it.[9] In these cases, each of the components may be in turn decomposed with respect to a fixed coordinate system or basis set (e.g., a global coordinate system, or inertial reference frame). ## Basic properties The following section uses the Cartesian coordinate system with basis vectors ${\mathbf e}_1 = (1,0,0),\ {\mathbf e}_2 = (0,1,0),\ {\mathbf e}_3 = (0,0,1)$ and assumes that all vectors have the origin as a common base point. A vector a will be written as ${\mathbf a} = a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3.$ ### Equality Two vectors are said to be equal if they have the same magnitude and direction. Equivalently they will be equal if their coordinates are equal. So two vectors ${\mathbf a} = a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3$ and ${\mathbf b} = b_1{\mathbf e}_1 + b_2{\mathbf e}_2 + b_3{\mathbf e}_3$ are equal if $a_1 = b_1,\quad a_2=b_2,\quad a_3=b_3.\,$ ### Addition and subtraction For more details on this topic, see Vector space. Assume now that a and b are not necessarily equal vectors, but that they may have different magnitudes and directions. The sum of a and b is $\mathbf{a}+\mathbf{b} =(a_1+b_1)\mathbf{e}_1 +(a_2+b_2)\mathbf{e}_2 +(a_3+b_3)\mathbf{e}_3.$ The addition may be represented graphically by placing the tail of the arrow b at the head of the arrow a, and then drawing an arrow from the tail of a to the head of b. The new arrow drawn represents the vector a + b, as illustrated below: This addition method is sometimes called the parallelogram rule because a and b form the sides of a parallelogram and a + b is one of the diagonals. If a and b are bound vectors that have the same base point, this point will also be the base point of a + b. One can check geometrically that a + b = b + a and (a + b) + c = a + (b + c). The difference of a and b is $\mathbf{a}-\mathbf{b} =(a_1-b_1)\mathbf{e}_1 +(a_2-b_2)\mathbf{e}_2 +(a_3-b_3)\mathbf{e}_3.$ Subtraction of two vectors can be geometrically defined as follows: to subtract b from a, place the tails of a and b at the same point, and then draw an arrow from the head of b to the head of a. 
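To make the coordinate bookkeeping above concrete, here is a small NumPy sketch (an illustration, not from the article; the vector and the rotated basis are arbitrary choices) that rebuilds a vector from its scalar components on the standard basis and then decomposes it with respect to a different orthonormal basis:

```python
import numpy as np

# Standard basis e1, e2, e3 of R^3.
e1, e2, e3 = np.eye(3)

# The vector a = (2, 3, 1): a coordinate triple, equal to the sum of its
# scalar components times the standard basis vectors.
a = np.array([2.0, 3.0, 1.0])
assert np.allclose(a, 2.0 * e1 + 3.0 * e2 + 1.0 * e3)

# Decomposition with respect to another orthonormal basis (here the standard
# basis rotated by 45 degrees about the z-axis; any orthonormal set works).
# The scalar components are the projections a . n_k, and summing the
# corresponding vector components recovers a.
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
n1 = np.array([c, s, 0.0])
n2 = np.array([-s, c, 0.0])
n3 = np.array([0.0, 0.0, 1.0])
scalar_components = [a @ n for n in (n1, n2, n3)]
assert np.allclose(a, sum(u * n for u, n in zip(scalar_components, (n1, n2, n3))))
```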
This new arrow represents the vector a − b, as illustrated below: Subtraction of two vectors may also be performed by adding the opposite of the second vector to the first vector, that is, a − b = a + (−b). ### Scalar multiplication Scalar multiplication of a vector by a factor of 3 stretches the vector out. The scalar multiplications 2a and −a of a vector a A vector may also be multiplied, or re-scaled, by a real number r. In the context of conventional vector algebra, these real numbers are often called scalars (from scale) to distinguish them from vectors. The operation of multiplying a vector by a scalar is called scalar multiplication. The resulting vector is $r\mathbf{a}=(ra_1)\mathbf{e}_1 +(ra_2)\mathbf{e}_2 +(ra_3)\mathbf{e}_3.$ Intuitively, multiplying by a scalar r stretches a vector out by a factor of r. Geometrically, this can be visualized (at least in the case when r is an integer) as placing r copies of the vector in a line where the endpoint of one vector is the initial point of the next vector. If r is negative, then the vector changes direction: it flips around by an angle of 180°. Two examples (r = −1 and r = 2) are given below: Scalar multiplication is distributive over vector addition in the following sense: r(a + b) = ra + rb for all vectors a and b and all scalars r. One can also show that a − b = a + (−1)b. ### Length The length or magnitude or norm of the vector a is denoted by ‖a‖ or, less commonly, |a|, which is not to be confused with the absolute value (a scalar "norm"). The length of the vector a can be computed with the Euclidean norm $\left\|\mathbf{a}\right\|=\sqrt{{a_1}^2+{a_2}^2+{a_3}^2}$ which is a consequence of the Pythagorean theorem since the basis vectors e1, e2, e3 are orthogonal unit vectors. This happens to be equal to the square root of the dot product, discussed below, of the vector with itself: $\left\|\mathbf{a}\right\|=\sqrt{\mathbf{a}\cdot\mathbf{a}}.$ Unit vector The normalization of a vector a into a unit vector â Main article: Unit vector A unit vector is any vector with a length of one; normally unit vectors are used simply to indicate direction. A vector of arbitrary length can be divided by its length to create a unit vector. This is known as normalizing a vector. A unit vector is often indicated with a hat as in â. To normalize a vector a = [a1, a2, a3], scale the vector by the reciprocal of its length ||a||. That is: $\mathbf{\hat{a}} = \frac{\mathbf{a}}{\left\|\mathbf{a}\right\|} = \frac{a_1}{\left\|\mathbf{a}\right\|}\mathbf{e}_1 + \frac{a_2}{\left\|\mathbf{a}\right\|}\mathbf{e}_2 + \frac{a_3}{\left\|\mathbf{a}\right\|}\mathbf{e}_3$ Null vector Main article: Null vector The null vector (or zero vector) is the vector with length zero. Written out in coordinates, the vector is (0,0,0), and it is commonly denoted $\vec{0}$, or 0, or simply 0. Unlike any other vector it has an arbitrary or indeterminate direction, and cannot be normalized (that is, there is no unit vector which is a multiple of the null vector). The sum of the null vector with any vector a is a (that is, 0+a=a). ### Dot product Main article: dot product The dot product of two vectors a and b (sometimes called the inner product, or, since its result is a scalar, the scalar product) is denoted by a ∙ b and is defined as: $\mathbf{a}\cdot\mathbf{b} =\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\cos\theta$ where θ is the measure of the angle between a and b (see trigonometric function for an explanation of cosine). 
Geometrically, this means that a and b are drawn with a common start point and then the length of a is multiplied with the length of that component of b that points in the same direction as a. The dot product can also be defined as the sum of the products of the components of each vector as $\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3.$ ### Cross product Main article: Cross product The cross product (also called the vector product or outer product) is only meaningful in three or seven dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted a × b, is a vector perpendicular to both a and b and is defined as $\mathbf{a}\times\mathbf{b} =\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\sin(\theta)\,\mathbf{n}$ where θ is the measure of the angle between a and b, and n is a unit vector perpendicular to both a and b which completes a right-handed system. The right-handedness constraint is necessary because there exist two unit vectors that are perpendicular to both a and b, namely, n and (–n). An illustration of the cross product The cross product a × b is defined so that a, b, and a × b also becomes a right-handed system (but note that a and b are not necessarily orthogonal). This is the right-hand rule. The length of a × b can be interpreted as the area of the parallelogram having a and b as sides. The cross product can be written as ${\mathbf a}\times{\mathbf b} = (a_2 b_3 - a_3 b_2) {\mathbf e}_1 + (a_3 b_1 - a_1 b_3) {\mathbf e}_2 + (a_1 b_2 - a_2 b_1) {\mathbf e}_3.$ For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a pseudovector instead of a vector (see below). ### Scalar triple product Main article: Scalar triple product The scalar triple product (also called the box product or mixed triple product) is not really a new operator, but a way of applying the other two multiplication operators to three vectors. The scalar triple product is sometimes denoted by (a b c) and defined as: $(\mathbf{a}\ \mathbf{b}\ \mathbf{c}) =\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}).$ It has three primary uses. First, the absolute value of the box product is the volume of the parallelepiped which has edges that are defined by the three vectors. Second, the scalar triple product is zero if and only if the three vectors are linearly dependent, which can be easily proved by considering that in order for the three vectors to not make a volume, they must all lie in the same plane. Third, the box product is positive if and only if the three vectors a, b and c are right-handed. 
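The properties just described are easy to check numerically. The following NumPy sketch (an added illustration with arbitrary example vectors, not part of the article) verifies the perpendicularity and parallelogram-area interpretation of the cross product, and the determinant expression for the box product that is spelled out next:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.0, 4.0])
c = np.array([0.0, 1.0, 1.0])

# Dot product by the component formula; the angle between a and b follows from
# a . b = |a||b| cos(theta).
dot = a @ b
theta = np.arccos(dot / (np.linalg.norm(a) * np.linalg.norm(b)))

# Cross product: perpendicular to both arguments, and its length |a||b| sin(theta)
# is the area of the parallelogram spanned by a and b.
axb = np.cross(a, b)
assert np.isclose(axb @ a, 0.0) and np.isclose(axb @ b, 0.0)
assert np.isclose(np.linalg.norm(axb),
                  np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta))

# Scalar triple product a . (b x c): equals the determinant of the matrix with
# a, b, c as rows, and its absolute value is the parallelepiped volume.
box = a @ np.cross(b, c)
assert np.isclose(box, np.linalg.det(np.vstack([a, b, c])))
```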
In components (with respect to a right-handed orthonormal basis), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the determinant of the 3-by-3 matrix having the three vectors as rows $(\mathbf{a}\ \mathbf{b}\ \mathbf{c})=\left|\begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \\ \end{pmatrix}\right|$ The scalar triple product is linear in all three entries and anti-symmetric in the following sense: $(\mathbf{a}\ \mathbf{b}\ \mathbf{c}) = (\mathbf{c}\ \mathbf{a}\ \mathbf{b}) = (\mathbf{b}\ \mathbf{c}\ \mathbf{a})= -(\mathbf{a}\ \mathbf{c}\ \mathbf{b}) = -(\mathbf{b}\ \mathbf{a}\ \mathbf{c}) = -(\mathbf{c}\ \mathbf{b}\ \mathbf{a}).$ ### Multiple Cartesian bases All examples thus far have dealt with vectors expressed in terms of the same basis, namely, e1, e2, e3. However, a vector can be expressed in terms of any number of different bases that are not necessarily aligned with each other, and still remain the same vector. For example, using the vector a from above, $\mathbf{a} = a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3 = u\mathbf{n}_1 + v\mathbf{n}_2 + w\mathbf{n}_3$ where n1, n2, n3 form another orthonormal basis not aligned with e1, e2, e3. The values of u, v, and w are such that the resulting vector sum is exactly a. It is not uncommon to encounter vectors known in terms of different bases (for example, one basis fixed to the Earth and a second basis fixed to a moving vehicle). In order to perform many of the operations defined above, it is necessary to know the vectors in terms of the same basis. One simple way to express a vector known in one basis in terms of another uses column matrices that represent the vector in each basis along with a third matrix containing the information that relates the two bases. For example, in order to find the values of u, v, and w that define a in the n1, n2, n3 basis, a matrix multiplication may be employed in the form $\begin{bmatrix} u \\ v \\ w \\ \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}$ where each matrix element cjk is the direction cosine relating nj to ek.[10] The term direction cosine refers to the cosine of the angle between two unit vectors, which is also equal to their dot product.[10] By referring collectively to e1, e2, e3 as the e basis and to n1, n2, n3 as the n basis, the matrix containing all the cjk is known as the "transformation matrix from e to n", or the "rotation matrix from e to n" (because it can be imagined as the "rotation" of a vector from one basis to another), or the "direction cosine matrix from e to n"[10] (because it contains direction cosines). The properties of a rotation matrix are such that its inverse is equal to its transpose. This means that the "rotation matrix from e to n" is the transpose of "rotation matrix from n to e". By applying several matrix multiplications in succession, any vector can be expressed in any basis so long as the set of direction cosines is known relating the successive bases.[10] ### Other dimensions With the exception of the cross and triple products, the above formulae generalise to two dimensions and higher dimensions. 
For example, addition generalises to two dimensions as $(a_1{\mathbf e}_1 + a_2{\mathbf e}_2)+(b_1{\mathbf e}_1 + b_2{\mathbf e}_2) = (a_1+b_1){\mathbf e}_1 + (a_2+b_2){\mathbf e}_2$ and in four dimensions as $\begin{align}(a_1{\mathbf e}_1 + a_2{\mathbf e}_2 + a_3{\mathbf e}_3 + a_4{\mathbf e}_4) &+ (b_1{\mathbf e}_1 + b_2{\mathbf e}_2 + b_3{\mathbf e}_3 + b_4{\mathbf e}_4) =\\ (a_1+b_1){\mathbf e}_1 + (a_2+b_2){\mathbf e}_2 &+ (a_3+b_3){\mathbf e}_3 + (a_4+b_4){\mathbf e}_4.\end{align}$ The cross product does not readily generalise to other dimensions, though the closely related exterior product does, whose result is a bivector. In two dimensions this is simply a pseudoscalar $(a_1{\mathbf e}_1 + a_2{\mathbf e}_2)\wedge(b_1{\mathbf e}_1 + b_2{\mathbf e}_2) = (a_1 b_2 - a_2 b_1)\mathbf{e}_1 \mathbf{e}_2.$ A seven-dimensional cross product is similar to the cross product in that its result is a vector orthogonal to the two arguments; there is however no natural way of selecting one of the possible such products. ## Physics Vectors have many uses in physics and other sciences. ### Length and units In abstract vector spaces, the length of the arrow depends on a dimensionless scale. If it represents, for example, a force, the "scale" is of physical dimension length/force. Thus there is typically consistency in scale among quantities of the same dimension, but otherwise scale ratios may vary; for example, if "1 newton" and "5 m" are both represented with an arrow of 2 cm, the scales are 1:250 and 1 m:50 N respectively. Equal length of vectors of different dimension has no particular significance unless there is some proportionality constant inherent in the system that the diagram represents. Also length of a unit vector (of dimension length, not length/force, etc.) has no coordinate-system-invariant significance. ### Vector-valued functions Main article: Vector-valued function Often in areas of physics and mathematics, a vector evolves in time, meaning that it depends on a time parameter t. For instance, if r represents the position vector of a particle, then r(t) gives a parametric representation of the trajectory of the particle. Vector-valued functions can be differentiated and integrated by differentiating or integrating the components of the vector, and many of the familiar rules from calculus continue to hold for the derivative and integral of vector-valued functions. ### Position, velocity and acceleration The position of a point x = (x1, x2, x3) in three-dimensional space can be represented as a position vector whose base point is the origin ${\mathbf x} = x_1 {\mathbf e}_1 + x_2{\mathbf e}_2 + x_3{\mathbf e}_3.$ The position vector has dimensions of length. Given two points x = (x1, x2, x3), y = (y1, y2, y3) their displacement is a vector ${\mathbf y}-{\mathbf x}=(y_1-x_1){\mathbf e}_1 + (y_2-x_2){\mathbf e}_2 + (y_3-x_3){\mathbf e}_3.$ which specifies the position of y relative to x. The length of this vector gives the straight line distance from x to y. Displacement has the dimensions of length. The velocity v of a point or particle is a vector, its length gives the speed. For constant velocity the position at time t will be ${\mathbf x}_t= t {\mathbf v} + {\mathbf x}_0,$ where x0 is the position at time t=0. Velocity is the time derivative of position. Its dimensions are length/time. Acceleration a of a point is vector which is the time derivative of velocity. Its dimensions are length/time2. 
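As a small numerical illustration of displacement and constant-velocity motion (the numbers are hypothetical, chosen only for the example):

```python
import numpy as np

# Two points in three-dimensional space (coordinates in metres).
x = np.array([1.0, 2.0, 0.0])
y = np.array([4.0, 6.0, 0.0])

# Displacement of y relative to x, and the straight-line distance between them.
displacement = y - x
distance = np.linalg.norm(displacement)   # 5.0 m here

# Constant velocity: position at time t is x_t = t*v + x_0.
v = np.array([0.5, 0.0, -0.2])            # velocity in m/s
x0 = x
t = 10.0                                  # seconds
x_t = t * v + x0                          # array([6., 2., -2.])
```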
### Force, energy, work Force is a vector with dimensions of mass×length/time2 and Newton's second law is the scalar multiplication ${\mathbf F} = m{\mathbf a}$ Work is the dot product of force and displacement $E = {\mathbf F} \cdot ({\mathbf x}_2 - {\mathbf x}_1).$ ## Vectors as directional derivatives A vector may also be defined as a directional derivative: consider a function $f(x^\alpha)$ and a curve $x^\alpha (\tau)$. Then the directional derivative of $f$ is a scalar defined as $\frac{df}{d\tau} = \sum_{\alpha=1}^n \frac{dx^\alpha}{d\tau}\frac{\partial f}{\partial x^\alpha}.$ where the index $\alpha$ is summed over the appropriate number of dimensions (for example, from 1 to 3 in 3-dimensional Euclidean space, from 0 to 3 in 4-dimensional spacetime, etc.). Then consider a vector tangent to $x^\alpha (\tau)$: $t^\alpha = \frac{dx^\alpha}{d\tau}.$ The directional derivative can be rewritten in differential form (without a given function $f$) as $\frac{d}{d\tau} = \sum_\alpha t^\alpha\frac{\partial}{\partial x^\alpha}.$ Therefore any directional derivative can be identified with a corresponding vector, and any vector can be identified with a corresponding directional derivative. A vector can therefore be defined precisely as $\mathbf{a} \equiv a^\alpha \frac{\partial}{\partial x^\alpha}.$ ## Vectors, pseudovectors, and transformations An alternative characterization of Euclidean vectors, especially in physics, describes them as lists of quantities which behave in a certain way under a coordinate transformation. A contravariant vector is required to have components that "transform like the coordinates" under changes of coordinates such as rotation and dilation. The vector itself does not change under these operations; instead, the components of the vector make a change that cancels the change in the spatial axes, in the same way that co-ordinates change. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector, like the co-ordinates, would reduce in an exactly compensating way. Mathematically, if the coordinate system undergoes a transformation described by an invertible matrix M, so that a coordinate vector x is transformed to x′ = Mx, then a contravariant vector v must be similarly transformed via v′ = Mv. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities. For example, if v consists of the x, y, and z-components of velocity, then v is a contravariant vector: if the coordinates of space are stretched, rotated, or twisted, then the components of the velocity transform in the same way. On the other hand, for instance, a triple consisting of the length, width, and height of a rectangular box could make up the three components of an abstract vector, but this vector would not be contravariant, since rotating the box does not change the box's length, width, and height. Examples of contravariant vectors include displacement, velocity, electric field, momentum, force, and acceleration. In the language of differential geometry, the requirement that the components of a vector transform according to the same matrix of the coordinate transition is equivalent to defining a contravariant vector to be a tensor of contravariant rank one. 
Alternatively, a contravariant vector is defined to be a tangent vector, and the rules for transforming a contravariant vector follow from the chain rule. Some vectors transform like contravariant vectors, except that when they are reflected through a mirror, they flip and gain a minus sign. A transformation that switches right-handedness to left-handedness and vice versa like a mirror does is said to change the orientation of space. A vector which gains a minus sign when the orientation of space changes is called a pseudovector or an axial vector. Ordinary vectors are sometimes called true vectors or polar vectors to distinguish them from pseudovectors. Pseudovectors occur most frequently as the cross product of two ordinary vectors. One example of a pseudovector is angular velocity. Driving in a car, and looking forward, each of the wheels has an angular velocity vector pointing to the left. If the world is reflected in a mirror which switches the left and right side of the car, the reflection of this angular velocity vector points to the right, but the actual angular velocity vector of the wheel still points to the left, corresponding to the minus sign. Other examples of pseudovectors include magnetic field, torque, or more generally any cross product of two (true) vectors. This distinction between vectors and pseudovectors is often ignored, but it becomes important in studying symmetry properties. See parity (physics). ## See also • Affine space, which distinguishes between vectors and points • Array data structure or Vector (Computer Science) • Banach space • Clifford algebra • Complex number • Coordinate system • Covariance and contravariance of vectors • Four-vector, a non-Euclidean vector in Minkowski space (i.e. four-dimensional spacetime), important in relativity • Function space • Grassmann's Ausdehnungslehre • Hilbert space • Normal vector • Null vector • Pseudovector • Quaternion • Tangential and normal components (of a vector) • Tensor • Unit vector • Vector bundle • Vector calculus • Vector notation • Vector-valued function ## Notes 1. Ito 1993, p. 1678; Pedoe 1988 2. The Oxford english dictionary. (2nd. ed. ed.). London: Claredon Press. 2001. ISBN 9780195219425. 3. ^ a b c Michael J. Crowe, A History of Vector Analysis; see also his lecture notes on the subject. 4. W. R. Hamilton (1846) London, Edinburgh & Dublin Philosophical Magazine 3rd series 29 27 5. ^ a b c d ## References Mathematical treatments • Apostol, T. (1967). Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra. John Wiley and Sons. ISBN 978-0-471-00005-1. • Apostol, T. (1969). Calculus, Vol. 2: Multi-Variable Calculus and Linear Algebra with Applications. John Wiley and Sons. ISBN 978-0-471-00007-5. • Kane, Thomas R.; Levinson, David A. (1996), Dynamics Online, Sunnyvale, California: OnLine Dynamics, Inc. • Heinbockel, J. H. (2001), Introduction to Tensor Calculus and Continuum Mechanics, Trafford Publishing, ISBN 1-55369-133-4 • Ito, Kiyosi (1993), Encyclopedic Dictionary of Mathematics (2nd ed.), MIT Press, ISBN 978-0-262-59020-4 • Ivanov, A.B. (2001), "Vector, geometric", in Hazewinkel, Michiel, , Springer, ISBN 978-1-55608-010-4 • Pedoe, D. (1988). Geometry: A comprehensive course. Dover. ISBN 0-486-65812-0. . Physical treatments • Aris, R. (1990). Vectors, Tensors and the Basic Equations of Fluid Mechanics. Dover. ISBN 978-0-486-66110-0. • Feynman, R., Leighton, R., and Sands, M. (2005). "Chapter 11". (2nd ed ed.). Addison Wesley. ISBN 978-0-8053-9046-9.
http://math.stackexchange.com/questions/47890/solving-a-problem-using-degrees-or-radians?answertab=oldest
# solving a problem using degrees OR radians

Hey, so I'm programming something that finds the angle of a line, between 0 and 180 degrees, based on two points. The equation to find the answer is `Angle = sin-1((1/Hypotenuse)*B)` where B is the vertical side of the triangle formed and the hypotenuse is the distance between point 1 and point 2. However, the inverse sine function in my program only takes and outputs radians, so instead the equation to get degrees becomes `(Angle = sin-1(((1/Hypotenuse)*B *3.14) /180) *180) /3.14`. This does not seem to be right for some reason: when putting in the parameters `Hypotenuse=150`, `B=149.6` I get the answer of 85.8 (right) for the original equation and then .9973 degrees for the new equation??

-

2 The inputs of the arcsin function are not in degrees or radians. That's the main problem. – Arturo Magidin Jun 27 '11 at 2:51

## 2 Answers

If $B$ is the length of the opposite side, and $H$ is the length of the hypotenuse, then $B/H$ is the sine of the angle. This is not measured in either degrees or radians; it's the value of the sine. If you take $\arcsin(B/H)$, this will be given in radians. To convert to degrees, you multiply by $180/\pi$. So what you want is: $$\mathrm{angle} = \arcsin\Biggl(\left(\frac{1}{\text{hypotenuse}}\right)*B\Biggr)*180\Bigm/\pi.$$ $3.14$ is a very rough approximation to $\pi$.

-

You would calculate the answer in radians, and then convert to degrees. Inside of the inverse sine there should just be (1/hypotenuse*B), since it's a ratio of side lengths. You're overthinking it, I'm guessing.

-
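For completeness, here is a short Python check of the approach in the answers (take the arcsine of the ratio, which is a pure number, then convert the result from radians to degrees), using the numbers from the question:

```python
from math import asin, degrees

B, H = 149.6, 150.0             # opposite side and hypotenuse from the question

angle_rad = asin(B / H)         # asin takes a plain ratio and returns radians
angle_deg = degrees(angle_rad)  # same as angle_rad * 180 / pi

print(round(angle_deg, 1))      # 85.8
```

As a side note (not from the thread): if the goal is the direction of the line through two points in the range 0 to 180 degrees, `math.atan2(y2 - y1, x2 - x1)` converted to degrees and reduced modulo 180 avoids computing the hypotenuse by hand and handles vertical and leftward-sloping lines uniformly.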
http://nrich.maths.org/2047/solution?nomenu=1
## 'Poly Fibs' printed from http://nrich.maths.org/

Here Andrei Lazanu, age 14, School No. 205, Bucharest, Romania gives another excellent solution to this problem, which he has extended after doing some research on the web.

First, I calculated the first 10 polynomials that satisfy the recurrence relation given in the problem: $$P_{n+2}(x)=xP_{n+1}(x)-P_n(x)$$ where $P_0(x)=0$ and $P_1(x)=1.$ I successively found: $$\eqalign{ P_2(x) &= x \cr P_3(x) &= x^2 - 1 \cr P_4(x) &= x^3 - 2x = x(x^2 - 2)\cr P_5(x) &= x^4 - 3x^2 + 1 \cr P_6(x) &= x^5 - 4x^3 + 3x = x(x^2 - 1)(x^2 - 3) \cr P_7(x) &= x^6 - 5x^4 + 6x^2 - 1 \cr P_8(x) &= x^7 - 6x^5 + 10x^3 - 4x = x(x^2 - 2)(x^4 - 4x^2 +2) \cr P_9(x) &= x^8 - 7x^6 + 15x^4 - 10x^2 + 1 \cr P_{10}(x) &= x(x^4 - 3x^2 + 1)(x^4 - 5x^2 + 5)}.$$ From the examination of the expressions of the polynomials, I drew some conclusions:

(1) Odd order polynomials contain only even powers of $x$, including zero.

(2) Even order polynomials contain only odd powers of $x$.

(3) There are alternate signs of terms in each polynomial, starting with the first, of order $(n-1)$ for $P_n(x)$, which is positive.

I have also shown, as required in the question, that $P_4(x)$ contains as a factor $P_2(x)$, $P_6(x)$ contains as factor $P_3(x)$ (that is, every root of $P_3$ is a root of $P_6$), $P_8(x)$ contains as factor $P_4(x)$ and $P_{10}(x)$ contains as factor $P_5(x)$. $$\eqalign{ {P_4(x)\over P_2(x)} &= x^2 - 2 \cr {P_6(x)\over P_3(x)} &= x(x^2 - 3)\cr {P_8(x)\over P_4(x)} &= x^4 - 4x^2 + 2 \cr {P_{10}(x)\over P_5(x)} &= x(x^4 - 5x^2 + 5).}$$

Using the defining recurrence relation we can express $P_6$ in terms of previous polynomials in the sequence. $$\eqalign{ P_6 &= xP_5-P_4 \cr &= (x^2-1)P_4-xP_3 \cr &= P_3P_4 - P_2P_3 \cr &= P_3(P_4-P_2) .}$$ Similarly we can express $P_8$ in terms of previous polynomials in the sequence. $$\eqalign{ P_8 &= xP_7-P_6 \cr &= (x^2-1)P_6-xP_5 \cr &= (x^3-2x)P_5-(x^2-1)P_4 \cr &= P_4(P_5-P_3) .}$$ Again we can express $P_{10}$ in terms of previous polynomials in the sequence. $$\eqalign{ P_{10} &= xP_9-P_8 \cr &= (x^2-1)P_8-xP_7 \cr &= (x^3-2x)P_7-(x^2-1)P_6 \cr &= (x^4 - 3x^2 +1)P_6 - (x^3 - 2x)P_5 \cr &= P_5P_6-P_4P_5 \cr &= P_5(P_6-P_4).}$$

Editor's note: This suggests a conjecture that $P_{2k}=P_k(P_{k+1}-P_{k-1})$ where $k$ is any natural number. This is true but the general proof is beyond the scope of school mathematics.

Andrei made further observations about the coefficients in these polynomials in the hope of finding explicit formulae for the $n$th order polynomial. He found, looking at Fibonacci numbers, that these polynomials are very similar to the Fibonacci polynomials, which are given by the recursive relation $$F_{n+2}(x) = xF_{n+1}(x) + F_n(x)$$ with $F_0(x) = 0$ and $F_1(x) = 1$. Using these relations, he found the first 10 Fibonacci polynomials, which are, up to the signs of their terms, identical with the corresponding $P_n(x)$. Using the explicit formula of Fibonacci polynomials from http://mathworld.wolfram.com, Andrei hoped to write correctly the explicit formula for the polynomials in the problem, as: $$P_n(x)=\sum_{j=0}^{(n-1)/2}(-1)^j C^{n-j-1}_jx^{n-2j-1}.$$ The formula works well up to $n = 10$. From this formula, all properties could be found easily, although Andrei was not able to demonstrate the general case that $P_{2n}(x)$ is divisible by $P_n(x)$.
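The recurrence is also easy to experiment with on a computer. The following plain-Python sketch (an added illustration, not part of the published solution) builds the coefficient lists of the $P_n$ from the recurrence and checks the editor's conjecture $P_{2k}=P_k(P_{k+1}-P_{k-1})$ for small $k$:

```python
def poly_add(p, q, sign=1):
    """Add (or, with sign=-1, subtract) two coefficient lists (index = power of x)."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + sign * (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):
    """Multiply two coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def trim(p):
    """Drop trailing zero coefficients so equal polynomials compare equal."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

# P_0 = 0, P_1 = 1, and P_{n+2} = x*P_{n+1} - P_n.
P = [[0], [1]]
for n in range(2, 21):
    P.append(poly_add([0] + P[n - 1], P[n - 2], sign=-1))   # [0]+p multiplies p by x

print(trim(P[10]))   # [0, 5, 0, -20, 0, 21, 0, -8, 0, 1], i.e. x^9 - 8x^7 + 21x^5 - 20x^3 + 5x

# Check the conjecture P_{2k} = P_k * (P_{k+1} - P_{k-1}) for k = 1, ..., 9.
for k in range(1, 10):
    lhs = trim(P[2 * k])
    rhs = trim(poly_mul(P[k], poly_add(P[k + 1], P[k - 1], sign=-1)))
    assert lhs == rhs
```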
http://math.stackexchange.com/questions/300306/confusion-over-notation-in-a-book-on-the-mathematics-of-qft-by-faria-melo?answertab=oldest
# Confusion over notation in a book on the mathematics of QFT by Faria-Melo While formulating this question, I arrived at a likely interpretation provided in an answer to my own question below. My problem appears to be one of inexperience in working with ambient coordinates, and since this is not for class but of my own interest I'm still posting this question in the hope that someone might care to verify my interpretation. I hope this is acceptable :-) Let $M$ be a smooth submanifold of $\mathbb{R}^{3N}$ with a smooth (Lagrangian) function $L:TM\to\mathbb{R}$, let $\gamma$ be a smooth curve $I=[a,b]\to M$, and let $S$ be the action functional defined by integrating $L$ over derivatives of smooth paths (such as $\gamma$). A variation of $\gamma$ is a smooth map $\widetilde{\gamma}=(-\epsilon,\epsilon)\times I\to M$ extending $\gamma$, or more precisely, coinciding with $\gamma$ at $0\times I$. This yields a family $s\in(-\epsilon,\epsilon)\mapsto\gamma_s$ of paths in $M$, which in turn yields a real function $S(\gamma_s)$ defined on $(-\epsilon,\epsilon)$. In this book, section 1.2.3 page 5, the first variation of $S$ at $\gamma$, defined to be $$\delta S=\left.\frac{\partial}{\partial s}\right|_{s=0}S(\gamma_s),$$ is stated to commute with the integral, yielding the expression $$\delta S=\int_a^b\delta L(\gamma,\dot{\gamma})dt,$$ with $\delta L$ defined as $$DL(\gamma,\dot{\gamma})\cdot(\delta\gamma,\delta\dot{\gamma})$$ where $$\delta\gamma=\left.\frac{\partial}{\partial s}\right|_{s=0}\gamma_s$$ and $$\delta\dot{\gamma}=\left.\frac{\partial}{\partial s}\right|_{s=0}\dot{\gamma}_s.$$ My problem is one of type-checking: I cannot figure out what the symbols $\delta\gamma$ and $\delta\dot{\gamma}$ mean. The partial operator acts on functions! - ## 1 Answer While formulating this question I arrived at the following possible interpretation: $(\gamma_s(t),\dot{\gamma}_s(t))$ is a smooth map defined on $(-\epsilon,\epsilon)\times I$ with values in $TM\subset T\mathbb{R}^{3N}=\mathbb{R}^{3N}\times\mathbb{R}^{3N}$, so we get components of $\gamma_s(t)$ and $\dot{\gamma}_s(t)$ respectively, which are real functions on $(-\epsilon,\epsilon)\times I$, and the symbols are meant to mean component-wise partial derivation of these with respect to $s$. -
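Under this reading, the formula for $\delta L$ is just the chain rule written out in the ambient coordinates $(x^1,\dots,x^{3N},v^1,\dots,v^{3N})$ of $T\mathbb{R}^{3N}$ (a sketch, assuming $L$ is extended smoothly to a neighbourhood of $TM$ in $T\mathbb{R}^{3N}$, or equivalently reading the partial derivatives as the differential of $L$ on $TM$ applied to the tangent vector $(\delta\gamma,\delta\dot{\gamma})$): $$\delta L=\left.\frac{\partial}{\partial s}\right|_{s=0}L\bigl(\gamma_s(t),\dot{\gamma}_s(t)\bigr)=\sum_{i=1}^{3N}\left(\frac{\partial L}{\partial x^i}\bigl(\gamma(t),\dot{\gamma}(t)\bigr)\,\delta\gamma^i(t)+\frac{\partial L}{\partial v^i}\bigl(\gamma(t),\dot{\gamma}(t)\bigr)\,\delta\dot{\gamma}^i(t)\right),$$ which is exactly $DL(\gamma,\dot{\gamma})\cdot(\delta\gamma,\delta\dot{\gamma})$ evaluated at each $t$, with $\delta\gamma^i$ and $\delta\dot{\gamma}^i$ the component-wise $s$-derivatives described above.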
http://physics.stackexchange.com/questions/9773/scattering-vs-bound-states?answertab=active
# Scattering vs bound states

Why are these states called as such, and how do they differ? I vaguely understand that when E > 0 you obtain a scattering state, but when E < 0 you have a bound state.

-

## 2 Answers

These terms apply when you're solving the Schrodinger equation with a potential that goes to zero at large distances. In this situation, the solutions with $E<0$ have the property that $\psi$ dies away to zero for large distance. So the particle is, with high probability, guaranteed to be in a confined region (not at large distance). So those are bound states.

The solutions with $E>0$, on the other hand, do not die away to zero at large distances -- instead, they go like $e^{ikr}$ where $k=\sqrt{2mE}/\hbar$. So these solutions represent particles that have high probability to be arbitrarily far away. Physically, they are useful when describing particles that start far away, approach the scattering center, and end up far away again. Hence the name "scattering states."

-

Let me explain with a simple example. Consider a particle in a finite potential well. There will be two cases: i) $E<V$; ii) $E>V$. If the energy of the particle is smaller than the magnitude of the potential, the particle will be confined in the box forever; that is, the particle is bound to the thing that is generating the potential. In this case, where the particle is confined to a finite region of space, in a bound state, the energy will be quantized; that is, only certain discrete values of the energy will be allowed. But if the energy of the particle is greater than the depth of the potential, the particle will still "feel" the "hole" below it: a portion of the wave will be reflected and go back, and the other portion will cross the well. The energy does not necessarily need to be smaller than zero for a bound state to occur; in fact, it only needs to be smaller than the value of the potential far from the well. All of the information described above can be obtained by solving the Schrödinger equation for the potential in question, as was done in the link at the top of the answer. A very good introductory book on this subject is "Introduction to Quantum Mechanics" by David J. Griffiths. Read it, it is very nice!

-

Thanks for the book recommendation! I am reading it and it is indeed very nice. – wrongusername May 13 '11 at 3:55
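To attach rough numbers to the two regimes (an electron-scale illustration with assumed energies, not taken from either answer):

```python
from math import pi, sqrt

hbar = 1.054571817e-34     # J*s
m_e = 9.1093837015e-31     # kg, electron mass
eV = 1.602176634e-19       # J per electronvolt

# E < 0 (below the potential at infinity): outside the well the wavefunction
# dies away like exp(-kappa*r), with kappa = sqrt(-2*m*E)/hbar.
E_bound = -2.0 * eV
kappa = sqrt(-2.0 * m_e * E_bound) / hbar
print("decay length 1/kappa:", 1.0 / kappa, "m")    # ~1.4e-10 m

# E > 0: far away the solution oscillates like exp(i*k*r), with k = sqrt(2*m*E)/hbar.
E_scatter = 2.0 * eV
k = sqrt(2.0 * m_e * E_scatter) / hbar
print("wavelength 2*pi/k:", 2.0 * pi / k, "m")      # ~8.7e-10 m
```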
http://mathoverflow.net/questions/56738?sort=votes
## p-adic representations of a quaternion algebra over a local field

How to determine a complete set of isomorphism class representatives of the irreducible algebraic representations of $D^{\times}/F$ (where $D$ is a quaternion algebra over a local field $F/\mathbb{Q}_p$) on $E$ (also a finite extension of $\mathbb{Q}_p$)? Answers for other (algebraic) groups would also be welcome, as well as any references to the literature.

-

## 3 Answers

If you want a construction entirely compatible with Bushnell and Kutzko's theory of strata and simple characters (and that also works when $F$ has positive characteristic), you may refer to my PhD thesis:

Broussous, P. Extension du formalisme de Bushnell et Kutzko au cas d'une algèbre à division. (French) [Extension of the Bushnell-Kutzko formalism to the case of a division algebra] Proc. London Math. Soc. (3) 77 (1998), no. 2, 292–326.

For other reductive groups, there are basically two "schools". First Bushnell and Kutzko (GL(N), SL(N)) and the students of Bushnell (Shaun Stevens: classical groups), of Henniart (myself and Vincent Secherre: GL(m,D)), of Zink (Martin Grabitz: GL(m,D)). (I don't give any precise references, for you may easily find them with MathSciNet.) Second, you have the "American school", initiated by Roger Howe; it has entirely solved the construction of "tame" supercuspidal representations for a general reductive group. Howe himself did GL(n) a long time ago. The following papers solve the general case.

Yu, Jiu-Kang. Construction of tame supercuspidal representations. J. Amer. Math. Soc. 14 (2001), no. 3, 579–622.

Kim, Ju-Lee. Supercuspidal representations: an exhaustion theorem. J. Amer. Math. Soc. 20 (2007), no. 2, 273–320.

To finish I must add that Bushnell and Kutzko have defined the beautiful notion of "type" for Bernstein blocks of the category of smooth complex representations of a given reductive group:

Bushnell, Colin J.; Kutzko, Philip C. Smooth representations of reductive $p$-adic groups: structure theory via types. Proc. London Math. Soc. (3) 77 (1998), no. 3, 582–634.

This notion allows one to develop a general strategy to construct all representations of a given reductive group.

-

Oops, I misread the question (and so did Joël)!! Przemyslaw Chojecki is in fact interested in representations with $p$-adic coefficients. I'm sorry. – Paul Broussous Feb 26 2011 at 19:53

2 Yes, you've answered the wrong question. But it's still a good answer! – Jeff Adler Feb 27 2011 at 5:52

I think you may find a description of the representation theory of $D^{\times}$ in the following work of E.W. Zink:

Ernst-Wilhelm Zink. Representation filters and their application in the theory of local fields. J. Reine Angew. Math., 387:182–208, 1988.

Ernst-Wilhelm Zink. Representation theory of local division algebras. J. Reine Angew. Math., 428:1–44, 1992.

-

Thank you very much! – Przemyslaw Chojecki Feb 26 2011 at 15:47

What kind of coefficients are used in these two papers? I can't tell from the introductions. – Vesna Stojanoska Sep 11 at 21:56

If $E$ is an algebraic closure of $F$, then $D\otimes_F E\simeq M_2(E)$. (In fact this is also true if $E$ is taken to be, say, the unramified quadratic extension field of $F$.)
We get an algebraic representation $$\phi\colon D^\times\hookrightarrow (D\otimes E)^\times=\text{GL}_2(E).$$ And then for each $a\geq 0$ and $b\in \mathbf{Z}$ we get the representation $\text{Sym}^{a}\phi\otimes (\det\phi)^b$. My feeling is that these exhaust the irreducible algebraic representations of $D^\times$, but I'm afraid I don't have a proof at the ready. As the other answerers show, the question of classifying the admissible representations of $D^\times$ (with complex coefficients) is a far more subtle issue!

-

Unfortunately, the property "irreducible" is not stable by base change. I think your method should work in case one seeks to classify absolutely irreducible representations: pass to the algebraic closure; in that case irreducible algebraic representations are classified in terms of the root system; then use the action of the absolute Galois group on this root system to determine which ones are rational? – Arno Kret Mar 3 2011 at 15:00
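As a supplementary remark (standard highest-weight bookkeeping, not something claimed in the posts themselves): after base change to a splitting field, $\text{Sym}^{a}\phi\otimes(\det\phi)^b$ has highest weight $\mathrm{diag}(t_1,t_2)\mapsto t_1^{\,a+b}t_2^{\,b}$ on the diagonal torus of $\mathrm{GL}_2$ and has dimension $a+1$; as $(a,b)$ runs over $a\geq 0$, $b\in\mathbf{Z}$, these are exactly the dominant weights of $\mathrm{GL}_2$, so over an algebraically closed field of characteristic zero the representations $\text{Sym}^{a}\phi\otimes(\det\phi)^b$ are irreducible, pairwise non-isomorphic, and exhaust the irreducible algebraic representations. What remains, as the comment points out, is the rationality/descent question over $E$ itself.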
http://psychology.wikia.com/wiki/Cluster_analysis?oldid=93094
# Cluster analysis

Cluster analysis or clustering is a common technique for statistical data analysis, which is used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics. Clustering is the classification of similar objects into different groups, or more precisely, the partitioning of a data set into subsets (clusters), so that the data in each subset (ideally) share some common trait - often proximity according to some defined distance measure. Besides the term data clustering (or just clustering), there are a number of terms with similar meanings, including cluster analysis, automatic classification, numerical taxonomy, botryology and typological analysis.

## Types of clustering

Data clustering algorithms can be hierarchical or partitional. Hierarchical algorithms find successive clusters using previously established clusters, whereas partitional algorithms determine all clusters at once. Hierarchical algorithms can be agglomerative (bottom-up) or divisive (top-down). Agglomerative algorithms begin with each element as a separate cluster and merge them in successively larger clusters. Divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters.

## Hierarchical clustering

### Distance measure

A key step in a hierarchical clustering is to select a distance measure. A simple measure is Manhattan distance, equal to the sum of absolute distances for each variable. The name comes from the fact that in a two-variable case, the variables can be plotted on a grid that can be compared to city streets, and the distance between two points is the number of blocks a person would walk. A more common measure is Euclidean distance, computed by finding the square of the distance between each variable, summing the squares, and finding the square root of that sum. In the two-variable case, the distance is analogous to finding the length of the hypotenuse in a triangle; that is, it is the distance "as the crow flies." A review of cluster analysis in health psychology research found that the most common distance measure in published studies in that research area is the Euclidean distance or the squared Euclidean distance.

### Creating clusters

Given a distance measure, elements can be combined. Hierarchical clustering builds (agglomerative), or breaks up (divisive), a hierarchy of clusters. The traditional representation of this hierarchy is a tree data structure (called a dendrogram), with individual elements at one end and a single cluster with every element at the other. Agglomerative algorithms begin at the top of the tree, whereas divisive algorithms begin at the bottom. (In the figure, the arrows indicate an agglomerative clustering.) Cutting the tree at a given height will give a clustering at a selected precision. In the following example, cutting after the second row will yield clusters {a} {b c} {d e} {f}.
Cutting after the third row will yield clusters {a} {b c} {d e f}, which is a coarser clustering, with a fewer number of larger clusters. ### Agglomerative hierarchical clustering For example, suppose this data is to be clustered. Where euclidean distance is the distance metric. The hierarchical clustering dendrogram would be as such: This method builds the hierarchy from the individual elements by progressively merging clusters. Again, we have six elements {a} {b} {c} {d} {e} and {f}. The first step is to determine which elements to merge in a cluster. Usually, we want to take the two closest elements, therefore we must define a distance $d(\mathrm{element}_1,\mathrm{element}_2)$ between elements. One can also construct a distance matrix at this stage. Suppose we have merged the two closest elements b and c, we now have the following clusters {a}, {b, c}, {d}, {e} and {f}, and want to merge them further. But to do that, we need to take the distance between {a} and {b c}, and therefore define the distance between two clusters. Usually the distance between two clusters $\mathcal{A}$ and $\mathcal{B}$ is one of the following: • The maximum distance between elements of each cluster (also called complete linkage clustering): $\max \{\, d(x,y) : x \in \mathcal{A},\, y \in \mathcal{B}\,\}$ • The minimum distance between elements of each cluster (also called single linkage clustering): $\min \{\, d(x,y) : x \in \mathcal{A},\, y \in \mathcal{B} \,\}$ • The mean distance between elements of each cluster (also called average linkage clustering): ${1 \over {\mathrm{card}(\mathcal{A})\mathrm{card}(\mathcal{B})}}\sum_{x \in \mathcal{A}}\sum_{ y \in \mathcal{B}} d(x,y)$ • The sum of all intra-cluster variance • The increase in variance for the cluster being merged (Ward's criterion) • The probability that candidate clusters spawn from the same distribution function (V-linkage) Each agglomeration occurs at a greater distance between clusters than the previous agglomeration, and one can decide to stop clustering either when the clusters are too far apart to be merged (distance criterion) or when there is a sufficiently small number of clusters (number criterion). ## Partitional clustering ### k-means and derivatives #### K-means clustering The K-means algorithm assigns each point to the cluster whose center (also called centroid) is nearest. The center is the average of all the points in the cluster — that is, its coordinates are the arithmetic mean for each dimension separately over all the points in the cluster. Example: The data set has three dimensions and the cluster has two points: X = (x1, x2, x3) and Y = (y1, y2, y3). Then the centroid Z becomes Z = (z1, z2, z3), where z1 = (x1 + y1)/2 and z2 = (x2 + y2)/2 and z3 = (x3 + y3)/2. The algorithm is roughly (J. MacQueen, 1967): • Randomly generate k clusters and determine the cluster centers, or directly generate k seed points as cluster centers. • Assign each point to the nearest cluster center. • Recompute the new cluster centers. • Repeat until some convergence criterion is met (usually that the assignment hasn't changed). The main advantages of this algorithm are its simplicity and speed which allows it to run on large datasets. Its disadvantage is that it does not yield the same result with each run, since the resulting clusters depend on the initial random assignments. It maximizes inter-cluster (or minimizes intra-cluster) variance, but does not ensure that the result has a global minimum of variance. 
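The update loop just described is short enough to sketch directly. Below is a minimal, illustrative Python/NumPy implementation; the function name, the random initialization, and the convergence test are my own choices rather than anything prescribed by the article or by MacQueen's paper.

```python
import numpy as np

def k_means(points, k, max_iter=100, seed=0):
    """Minimal Lloyd-style k-means: assign each point to the nearest center,
    then recompute each center as the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    # Randomly pick k distinct data points as the initial cluster centers.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iter):
        # Squared Euclidean distance from every point to every center.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)                 # nearest-center assignment
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j)
            else centers[j]                        # keep an empty cluster's old center
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):      # assignments have stabilized
            break
        centers = new_centers
    return labels, centers

# Example: two well-separated blobs should be recovered as two clusters.
data = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
labels, centers = k_means(data, k=2)
```

As the article notes, the outcome depends on the initial centers, so re-running with a different seed can return a different partition.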
#### QT Clust algorithm QT (Quality Threshold) Clustering (Heyer et al, 1999) is an alternative method of partitioning data, invented for gene clustering. It requires more computing power than k-means, but does not require specifying the number of clusters a priori, and always returns the same result when run several times. The algorithm is: • The user chooses a maximum diameter for clusters. • Build a candidate cluster for each point by including the closest point, the next closest, and so on, until the diameter of the cluster surpasses the threshold. • Save the candidate cluster with the most points as the first true cluster, and remove all points in the cluster from further consideration. • Recurse with the reduced set of points. The distance between a point and a group of points is computed using complete linkage, i.e. as the maximum distance from the point to any member of the group (see the "Agglomerative hierarchical clustering" section about distance between clusters). #### Fuzzy c-means clustering In fuzzy clustering, each point has a degree of belonging to clusters, as in fuzzy logic, rather than belonging completely to just one cluster. Thus, points on the edge of a cluster, may be in the cluster to a lesser degree than points in the center of cluster. For each point x we have a coefficient giving the degree of being in the kth cluster $u_k(x)$. Usually, the sum of those coefficients is defined to be 1, so that $u_k(x)$ denotes a probability of belonging to a certain cluster: $\forall x \sum_{k=1}^{\mathrm{num.}\ \mathrm{clusters}} u_k(x) \ =1.$ With fuzzy c-means, the centroid of a cluster is the mean of all points, weighted by their degree of belonging to the cluster: $\mathrm{center}_k = {{\sum_x u_k(x) x} \over {\sum_x u_k(x)}}.$ The degree of belonging is related to the inverse of the distance to the cluster $u_k(x) = {1 \over d(\mathrm{center}_k,x)},$ then the coefficients are normalized and fuzzyfied with a real parameter $m>1$ so that their sum is 1. So $u_k(x) = \frac{1}{\sum_j \left(\frac{d(\mathrm{center}_k,x)}{d(\mathrm{center}_j,x)}\right)^{1/(m-1)}}.$ For m equal to 2, this is equivalent to normalising the coefficient linearly to make their sum 1. When m is close to 1, then cluster center closest to the point is given much more weight than the others, and the algorithm is similar to k-means. The fuzzy c-means algorithm is very similar to the k-means algorithm: • Choose a number of clusters. • Assign randomly to each point coefficients for being in the clusters. • Repeat until the algorithm has converged (that is, the coefficients' change between two iterations is no more than $\epsilon$, the given sensitivity threshold) : • Compute the centroid for each cluster, using the formula above. • For each point, compute its coefficients of being in the clusters, using the formula above. The algorithm minimizes intra-cluster variance as well, but has the same problems as k-means, the minimum is a local minimum, and the results depend on the initial choice of weights. ## Elbow criterion The elbow criterion is a common rule of thumb to determine what number of clusters should be chosen, for example for k-means and agglomerative hierarchical clustering. The elbow criterion says that you should choose a number of clusters so that adding another cluster doesn't add sufficient information. 
More precisely, if you graph the percentage of variance explained by the clusters against the number of clusters, the first clusters will add much information (explain a lot of variance), but at some point the marginal gain will drop, giving an angle in the graph (the elbow). On the following graph, the elbow is indicated by the red circle. The number of clusters chosen should therefore be 4.

## Spectral clustering

Given a set of data points, the similarity matrix may be defined as a matrix $S$ where $S_{ij}$ represents a measure of the similarity between points $i$ and $j$. Spectral clustering techniques make use of the spectrum of the similarity matrix of the data to cluster the points. Sometimes such techniques are also used to perform dimensionality reduction for clustering in fewer dimensions. One such technique is the Shi-Malik algorithm, commonly used for image segmentation. It partitions points into two sets $(S_1,S_2)$ based on the eigenvector $v$ corresponding to the second-smallest eigenvalue of the normalized Laplacian $L = I - D^{-1/2}SD^{-1/2}$ of $S$, where $D$ is the diagonal matrix $D_{ii} = \sum_{j} S_{ij}.$ This partitioning may be done in various ways, such as by taking the median $m$ of the components in $v$, and placing all points whose component in $v$ is greater than $m$ in $S_1$, and the rest in $S_2$. The algorithm can be used for hierarchical clustering by repeatedly partitioning the subsets in this fashion. A related algorithm is the Meila-Shi algorithm, which takes the eigenvectors corresponding to the $k$ largest eigenvalues of the matrix $P = SD^{-1}$ for some $k$, and then invokes another clustering algorithm (e.g. k-means) to cluster the points by their respective $k$ components in these eigenvectors.

## Applications

### Biology

In biology clustering has many applications in the fields of computational biology and bioinformatics, two of which are:

• In transcriptomics, clustering is used to build groups of genes with related expression patterns. Often such groups contain functionally related proteins, such as enzymes for a specific pathway, or genes that are co-regulated. High throughput experiments using expressed sequence tags (ESTs) or DNA microarrays can be a powerful tool for genome annotation, a general aspect of genomics.

• In sequence analysis, clustering is used to group homologous sequences into gene families. This is a very important concept in bioinformatics, and evolutionary biology in general. See evolution by gene duplication.

### Marketing research

Cluster analysis is widely used in market research when working with multivariate data from surveys and test panels. Market researchers use cluster analysis to partition the general population of consumers into market segments and to better understand the relationships between different groups of consumers/potential customers.

• Segmenting the market and determining target markets
• Product positioning
• New product development
• Selecting test markets (see: experimental techniques)

### Other applications

Social network analysis: In the study of social networks, clustering may be used to recognize communities within large groups of people.

Image segmentation: Clustering can be used to divide a digital image into distinct regions for border detection or object recognition.

Data mining: Many data mining applications involve partitioning data items into related subsets; the marketing applications discussed above represent some examples. Another common application is the division of documents, such as World Wide Web pages, into genres.
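Returning to the Shi-Malik procedure described in the Spectral clustering section above, the single partition step can be sketched in a few lines of NumPy. This is only an illustrative sketch of the eigenvector-and-median split; the function and variable names are mine, and a practical implementation would build $S$ from a task-specific similarity function.

```python
import numpy as np

def shi_malik_split(S):
    """One Shi-Malik partition step: split the points into two sets using the
    eigenvector for the second-smallest eigenvalue of the normalized Laplacian."""
    d = S.sum(axis=1)                                  # diagonal of D (assumed positive)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(S)) - D_inv_sqrt @ S @ D_inv_sqrt   # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)               # eigenvalues in ascending order
    v = eigvecs[:, 1]                                  # second-smallest eigenvalue's vector
    m = np.median(v)
    return v > m, v <= m                               # boolean masks for S1 and S2
```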
## Comparisons between data clusterings

There have been several suggestions for a measure of similarity between two clusterings. Such a measure can be used to compare how well different data clustering algorithms perform on a set of data. Many of these measures are derived from the matching matrix (aka confusion matrix), e.g., the Rand measure and the Fowlkes-Mallows Bk measures.
http://mathhelpforum.com/algebra/113505-help.html
# Thread:

1. ## Help

So after I received help earlier today with two problems, I've been flying through my homework. This is until I reached this problem. I don't even know what this applies to, let alone know how to figure this out. Could someone lay out an equation or something, please? The half-life of radium is 1690 years. If 60 grams are present now, how much will be present in 2450 years? When solving, round the decay constant, r, to 5 decimal places.

2. Originally Posted by NFG123
So after I received help earlier today with two problems, I've been flying through my homework. This is until I reached this problem. I don't even know what this applies to, let alone know how to figure this out. Could someone lay out an equation or something, please? The half-life of radium is 1690 years. If 60 grams are present now, how much will be present in 2450 years? When solving, round the decay constant, r, to 5 decimal places.

You don't need to apologize for asking for help or say it's your last time. This site wants to help you if you are using it correctly, which you seem to be. What does half life mean? It means that after the half life has passed, one half of the substance is gone. Mathematically this means $A(t)=A_0 \left( \frac{1}{2} \right) ^{\frac{t}{h}}$. Now where did this come from? Look at (t/h), where t is time and h is the half life. When t=h, the time has reached the half life and this fraction will be 1. If the fraction is 1, then you get $A(t)=A_0 \left( \frac{1}{2} \right) ^1$, which is just one half of the starting amount. This makes sense because that's what the half life definition is. So plug in the half life time your problem gives you in that equation for h and you can now know the amount left at any time t.
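For completeness, here is the arithmetic the thread leaves to the original poster, using the exponential form $A(t) = A_0 e^{-rt}$ that the instruction to round the decay constant suggests; the final figure is rounded and should be treated as approximate.

$r = \frac{\ln 2}{1690} \approx 0.00041 \ \text{per year}$

$A(2450) = 60\, e^{-0.00041 \cdot 2450} \approx 60\, e^{-1.0045} \approx 22 \ \text{grams}$

Equivalently, $60 \left( \tfrac{1}{2} \right)^{2450/1690} \approx 22$ grams.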
http://crypto.stackexchange.com/questions/1673/key-space-size-when-either-of-two-public-keys-are-valid-for-authentication?answertab=oldest
# Key space size when either of two public keys are valid for authentication?

If for authentication a user can own either the A or the B public key instead of just one specific key, is that equivalent to halving the key space? I.e., is it theoretically twice as easy to brute force and only equivalent to a 128-bit key?

- Your title mentions SHA256 but the text of your question doesn't, instead talking about public keys. – GregS Jan 16 '12 at 1:07

## 2 Answers

Yes, having two valid keys in effect halves the size of the keyspace — from $2^{256}$ to $2^{255}$ possible keys per each valid one. Not to $2^{128}$ keys, which is what a 128-bit keylength would get you. Remember that the number of possible keys grows exponentially with the number of bits per key. (Adding one bit doubles the size of the keyspace, since the added bit has two possible values: 0 or 1.) A 256-bit keyspace has $2^{256}$ possible keys, which is a huge number. Even if you had literally billions of valid keys instead of just one, it wouldn't make finding one of them by brute force any less impossible.

-

I don't believe it's quite as simple as Ilmari is making it out to be, although his end result that "it doesn't really damage the security" is quite correct. It is easy to see that Ilmari's answer is actually a worst case: the attacker manages to run the same attack on both keys simultaneously, with negligible cost over running the attack on one key. It is easy to see the attacker cannot do better than increasing the probability of success by 2x over a time period versus attacking a single key (because if he could do better, he could use that to improve his single-key attack). It is also easy to see that, in a best case scenario, an attacker might not be able to get any advantage at all over simply picking one of the keys and attacking that one. So, which is it (or is it somewhere between the two)? Well, that rather depends on the crypto primitive involved. If we're talking about RSA, well, the best known attack against that (assuming good padding, and reasonably large prime factors) would be to factor the modulus using NFS. Now, NFS processing is quite specific to the actual modulus being factored; there is no known way to use the same NFS processing to factor two different numbers simultaneously. Hence, if the keys are RSA public keys, it's actually the best case (meaning having a second public key of the same size doesn't appear to help the attacker at all). If we're talking about an Elliptic Curve-based public key operation (and assuming both public keys are on the same curve), well, it's rather more mixed. The best attack would essentially be finding a value $k$ such that either $kG = M_1$ or $kG = M_2$ (where $M_1$, $M_2$ are elliptic curve points from the public keys); it strikes me as likely that a single search (such as one based on baby-step/giant-step) would be able to take some advantage of having two potential targets (although it isn't clear to me exactly how much).

- You're right, I tried to deliberately keep my answer simple and assume the simplest and worst case. It's not really clear what the OP is talking about. He mentions SHA-256, public keys and user authentication; since users typically don't authenticate themselves with public keys, I suspect at least one of these is a mistake. I sort of assumed that the "public key" part was the red herring; you seem to have taken it at face value and answered accordingly. In any case, +1.
– Ilmari Karonen Jan 16 '12 at 18:16 @Ilmari Karonen: Not a problem; as you pointed out, even a 2x factor in attacking doesn't really matter with modern crypto systems. This sort of analysis might be start to get interesting if you have millions of possible targets; the OP seems to be quite far from that. – poncho Jan 16 '12 at 18:42
http://mathoverflow.net/questions/34228/does-every-nontrivial-sheaf-of-rings-have-a-maximal-ideal
## Does every nontrivial sheaf of rings have a maximal ideal?

Let $R$ be a sheaf of rings on a topological space $X$. Assume $R \neq 0$. Does then $R$ have a maximal ideal? So this is a spacified analogue of the theorem that every nontrivial ring has a maximal ideal. Currently I am trying to develop this sort of spacified commutative algebra and algebraic geometry. If anyone knows some literature about it, please let me know. So let's try to imitate the known proof for rings and use Zorn's Lemma. For that, we need that for every linearly ordered set $(J_k)_{k \in K}$ of proper ideals in $R$, their sum $\sum_{k \in K} J_k$ is also a proper ideal. Note that if we replace $R \neq 0$ by $R_x \neq 0$ for all $x \in X$ and the notion proper by "stalkwise proper", then everything works out fine since stalks and sum commute. However, global sections do not commute with (infinite) sums. Anyway, let's try to continue: Assume $\sum_{k \in K} J_k = R$, that is, $1$ is a global section of the sum. Then there is an open covering $X = \cup_{i \in I} U_i$, such that $1 \in \sum_{k \in K} J_k(U_i) = \cup_{k \in K} J_k(U_i)$. Thus we get a function $I \to K, i \mapsto k_i$, such that $1 \in J_{k_i}(U_i)$. If this function has an upper bound, say $k$, then we get a contradiction $J_k=R$. Thus the function is unbounded. And now? I think that this already indicates that there will be counterexamples, but I'm not sure. Also note that everything is fine when $X$ is quasi compact.

- 1 Fixed. (SCNR...) – darij grinberg Aug 2 2010 at 9:27 Back to topic, is the quasicompactness the thing that allows us to just take a maximal ideal in one stalk and the complete rings in the other stalks? – darij grinberg Aug 2 2010 at 9:33 @Darij: No. If $X$ is quasi compact, the global sections of a sum is the sum of the global sections. Or use that the function above is bounded. – Martin Brandenburg Aug 2 2010 at 9:52

## 2 Answers

Take $X=\mathbf Z$ with topology $(k,\infty)\cap\mathbf Z$ for $-\infty\leq k\leq\infty$, so that sheaves on $X$ may be identified with sequences $\dots\to F_k\to F_{k+1}\to\dots$, $k\in\mathbf Z$. Now take for $R$ the constant sheaf with value $\mathbf Q$. All ideals have the form $\dots\to0_{k-1}\to0_k\to\mathbf Q_{k+1}\to\mathbf Q_{k+2}\to\dots$ for some $-\infty\leq k\leq\infty$, so there are no maximal ideals.

- Thank you, this is a nice example. – Martin Brandenburg Aug 2 2010 at 10:26

One could have guessed that the answer is "no" through the following reasoning: A sheaf of rings on a space $X$ is a ring object in the topos of sheaves on $X$, and your question is about ring theory in that topos. But such a topos (unless $X$ is very close to being discrete, see e.g. comments here) can be seen as an intuitionistic universe of sets, where the axiom of choice and Zorn's lemma are not valid. This can be made precise; you can interpret formal languages in a topos and you have certain proof systems obeying intuitionistic rules which are sound and complete with respect to the topos interpretation. The soundness part tells you that whenever you manage to prove a theorem about rings according to the intuitionistic rules it will also be valid for sheaves of rings.
Here you have to make a distinction between first order and higher order languages - both are interpretable in toposes. Your question about ideals is a higher order question since it talks about subsets of a ring. Anyway both justify the point of view that you are doing ring theory in an intuitionistic set universe. A nice and very friendly written example of such reasoning is this article of Mulvey, "Intuitionistic Algebra and Representations of rings", in which he gives an intuitionistic proof of a theorem of Kaplansky to conclude that it holds for sheaves of rings (and then draws nice consequences). If you are interested in this way of reasoning about sheaves of rings it is definitely the article for you - and you don't have to read Johnstone first, he does it all from scratch and talks about both, 1st and higher order, in a colloquial way. The clearest written precise accounts of this are IMHO section D1 of Johnstone's Elephant for 1st order and D4 for higher order, but they are a bit more general than what you need, see the last paragraph, the treatment of Mulvey, is closer to your setting. A beautiful feature of the 1st order version (e.g. Johnstone, section D3) is that you have a universal ring object living in a certain topos (the "classifying topos" for rings) which satisfies exactly those 1st order statements which are provable in any topos. So whenever you can prove something about this particular ring object you can be sure it holds for all sheaves of rings on a space; see MacLane/Moerdijk's Sheaves in geometry and Logic, section VIII.5 for this universal ring object. The existence of a classifying topos also leads to the following excellent news: For formulas of a certain syntactic form, called "geometric formulas", there are the theorems of Deligne and Barr (see Johnstone) which tell you that whenever you can prove such a formula using classical reasoning there also exists an intuitionistic proof - so you don't have to bother restricting your logic in these cases! Since you said that you are also interested in "spacified" algebraic geometry: There is an article by Anders Kock, "Universal Projective Geometry via Topos Theory", J. Pure Appl. Algebra 9 (1976), 1-24, where he does projective geometry over this universal ring, presumably again obtaining statements which hold over any sheaf of rings (but I only skimmed that one long ago). To sum up: If you can prove something intuitionistically (in a non-defined loose sense) you have good chances to be able to translate your proof into the formal systems from above and then you know that your result is true about rings in any topos. If you manage to express a 1st order statement in a certain syntactic form (as a "geometric formula") and can prove it with classical logic you can also conclude that it holds for any ring in a topos. Important note: "Ring in a topos" is a more general class of objects than you seem to be interested in, you want to know about "rings in a spatial topos" (i.e. topos of sheaves on a topological space), and there more will be true. So even if a statement does e.g. not hold for the universal ring (which lives in a non-spatial topos) it might still be true for all rings in spatial toposes. - Thank you very much for this overview! It is exactly what I was looking for. I will start to read the papers and books you cited :). – Martin Brandenburg Aug 2 2010 at 13:43 Ok, before you devote too much time to this I should warn you that these ideas are 40 years old but haven't been pursued very far. 
So what you will get out of your reading will not be a bunch of concrete theorems on sheaves of rings, but a general idea for producing them. One may wonder why this hasn't been happily exploited and while one answer may be that there are just few people with appropriate background and interests, another one is probably that it's just darn difficult! Anyway, it's beautiful mathematics!! I suggest to start with Mulvey! – Peter Arndt Aug 2 2010 at 15:51 Yes I rushed to the library and made a copy of the article; the introduction is very promising. It does not bother me if these ideas are 40 years old and no big theory has been developed out of that. Then I'll do it. ;-) – Martin Brandenburg Aug 2 2010 at 18:38 Cool, I hereby sign up to join the project when I'm done with my current duties ;-) – Peter Arndt Aug 3 2010 at 13:15
http://unapologetic.wordpress.com/2008/04/22/absolute-convergence/?like=1&_wpnonce=401286028f
The Unapologetic Mathematician

Absolute Convergence

Let's apply one of the tests from last time. Let $\alpha$ be a nondecreasing integrator on the ray $\left[a,\infty\right)$, and $f$ be any function integrable with respect to $\alpha$ through the whole ray. If the improper integral $\int_a^\infty|f|d\alpha$ converges, then so does $\int_a^\infty f\,d\alpha$.

To see this, notice that $-|f(x)|\leq f(x)\leq|f(x)|$, and so $0\leq|f(x)|+f(x)\leq2|f(x)|$. Then since $\int_a^\infty2|f|\,d\alpha$ converges we see that $\int_a^\infty|f|+f\,d\alpha$ converges. Subtracting off the integral of $|f|$ we get our result. (Technically to do this, we need to extend the linearity properties of Riemann-Stieltjes integrals to improper integrals, but this is straightforward.)

When the integral of $|f|$ converges like this, we say that the integral of $f$ is "absolutely convergent". The above theorem shows us that absolute convergence implies convergence, but it doesn't necessarily hold the other way around. If the integral of $f$ converges, but that of $|f|$ doesn't, we say that the former is "conditionally convergent".

Posted by John Armstrong | Analysis, Calculus
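A standard illustration of the gap between the two notions (a classical example, not from the original post): taking $\alpha(x) = x$ on $\left[1,\infty\right)$, so that this is an ordinary improper Riemann integral,

$\int_1^\infty \frac{\sin x}{x}\,dx \ \text{converges, while} \ \int_1^\infty \frac{|\sin x|}{x}\,dx \ \text{diverges,}$

so the first integral is conditionally convergent but not absolutely convergent.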
http://mathhelpforum.com/differential-equations/135950-partial-second-derivative-multiple-variables.html
# Thread:

1. ## partial second derivative multiple variables

I can't find the solution to this problem: it says $z=f(x,y)$ with $x= 2s +3t$ and $y=3s-2t$. I have to find $z_{ss}$ and $z_{st}$. Do I have to use the chain rule or something else? thanks

2. Originally Posted by benoitpouliot5757
I can't find the solution to this problem: it says $z=f(x,y)$ with $x= 2s +3t$ and $y=3s-2t$. I have to find $z_{ss}$ and $z_{st}$. Do I have to use the chain rule or something else? thanks

First, this is not a problem in differential equations and really belongs in the "Calculus" section. In any case, yes, you use the chain rule: $\frac{\partial z}{\partial s}= \frac{\partial z}{\partial x}\frac{\partial x}{\partial s}+ \frac{\partial z}{\partial y}\frac{\partial y}{\partial s}$. Then, of course, to get the second derivatives, do it again.

3. I know how to get $x_s$ and $y_s$, but how do I get $z_x$ and $z_y$? The answer to the problem is: $z_{ss}=4f_{11} + 12f_{12} + 9f_{22}$. I don't understand why there are 3 terms.
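To spell out why the quoted answer has three terms (a sketch, writing $f_1 = \partial f/\partial x$, $f_2 = \partial f/\partial y$, and assuming the mixed partials are equal so that $f_{12} = f_{21}$): since $x_s = 2$ and $y_s = 3$,

$z_s = 2f_1 + 3f_2,$

$z_{ss} = 2\left(f_{11}x_s + f_{12}y_s\right) + 3\left(f_{21}x_s + f_{22}y_s\right) = 4f_{11} + 6f_{12} + 6f_{21} + 9f_{22} = 4f_{11} + 12f_{12} + 9f_{22}.$

The two mixed terms $f_{12}$ and $f_{21}$ combine, which is why only three distinct terms remain. The same computation with $x_t = 3$ and $y_t = -2$ gives $z_{st} = 6f_{11} + 5f_{12} - 6f_{22}.$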
http://mathhelpforum.com/calculus/33450-volume-cross-sections-help.html
# Thread: 1. ## Volume By Cross-Sections Help We have a project where we have to make a model of finding volume by cross-sections, and I'm somewhat confused. This is the problem: Find the volume - base bounded by y=x+1 and y=x^2-1, cross-sections are semi-ellipses of height 2 (perpendicular to x-axis) And I have the formula piAB/2, A=height and B=radius. I subtracted the top function minus the bottom to get the base, and divided that by 2 to get the radius, but I don't really understand the height part. Why is the height given a constant? Shouldn't it change throughout the index? 2. The region is $(x+1)-(x^{2}-1)=-x^{2}+x+2$ The area of a semi-ellipse is $\frac{{\pi}ab}{2}$. The height, a, represents the height at the minor axis of the ellipse. It is constant. The radius, b, is $\frac{-x^{2}+x+2}{2}$. That is the major axis of the ellipse. The one you are integrating over because it changes along the region. So, we have $\frac{{\pi}(2)(\frac{-x^{2}+x+2}{2})}{2}={\pi}(\frac{-x^{2}+x+2}{2})$ $\frac{\pi}{2}\int_{-1}^{2}[-x^{2}+x+2]dx$ 3. Well, that reassured me a bit. I did that exactly, my only problem is the height. My only fear is that we are making a 3D model of it; wouldn't that mean that the height of all the semi-ellipses would be 2 and only the width would change? So would the height remain uniform throughout my entire model? In most other equations we have done, which aren't semi-ellipses, but the height changes as a result of the base, if that makes sense. Any help there?
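For reference, the set-up above evaluates cleanly (a quick check, not part of the original thread):

$\frac{\pi}{2}\int_{-1}^{2}\left(-x^{2}+x+2\right)dx = \frac{\pi}{2}\left[-\frac{x^{3}}{3}+\frac{x^{2}}{2}+2x\right]_{-1}^{2} = \frac{\pi}{2}\cdot\frac{9}{2} = \frac{9\pi}{4}.$

As for the height question in the last post: the problem statement fixes the height of every semi-ellipse at 2, so in a 3D model the height is indeed uniform from slice to slice; only the base width $-x^{2}+x+2$ (and with it the semi-axis $b$) changes along the interval.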
http://playingwithmodels.wordpress.com/category/transportation/
# What can scientific models tell us about the world?

## The waiting game.

Posted in Game theory, Transportation by Alexander Lobkovsky Meitiv on December 20, 2010

When two buses can take you where you need to go, should you let the slow bus pass and wait for the fast bus?

The Metro has changed the bus schedule this morning with no prior notice whatsoever. The 9 am J1 bus I usually take did not come. As I found out later, it was dropped from the schedule altogether. For about 20 minutes I was assuming (with diminishing conviction) that the J1 was merely late. While waiting for the J1, I let two J2′s go by. The J2′s take about 15 minutes longer to reach my destination. While I was waiting, I began to wonder what the best waiting strategy would be if there were two modes of transport with different travel times and different service frequencies. There is a math problem in there with a clear cut answer.

It is simpler to consider the situation in which buses do not have a fixed schedule but arrive at a fixed rate per unit time $\mu$. Intervals between consecutive buses in this situation obey Poisson statistics, which means that no matter when I arrive at the stop the average waiting time before the bus arrives is $1/\mu$. In what follows I will present a few results without much derivation. If you are interested in the nitty-gritty, contact me for details.

Suppose there are two buses A and B that arrive at a stop with rates $\mu_A$ and $\mu_B$. The probability that A arrives before B is

$P(A\mathrm{\ before \ } B) = \displaystyle{\frac{\mu_A}{\mu_A + \mu_B}}.$

The contribution to the mean waiting time from the case in which A arrives first is

$t_A =\displaystyle{\frac{\mu_A}{(\mu_A + \mu_B)^2}}.$

Now if the travel times to destination on buses A and B are $\tau_A \geq \tau_B,$ we can compute the expected travel time if the traveler boards the first bus that comes to the stop. We will call it $T_0$ because the strategy is to let zero buses pass (even if they take longer).

$T_0 = \displaystyle{\frac{\mu_A \tau_A}{\mu_A + \mu_B} + \frac{\mu_A}{(\mu_A + \mu_B)^2} + \frac{\mu_B \tau_B}{\mu_A + \mu_B} + \frac{\mu_B}{(\mu_A + \mu_B)^2} = \frac{1 + \mu_A \tau_A + \mu_B \tau_B}{\mu_A + \mu_B}}.$

We can interpret this formula as follows. The total bus arrival rate is $\mu_A + \mu_B$ and therefore the mean waiting time for a bus, any bus, is $1/(\mu_A + \mu_B).$ Then with probability $\mu_A/(\mu_A + \mu_B)$ the A bus has arrived and the travel time is $\tau_A.$ Likewise, with probability $\mu_B/(\mu_A + \mu_B)$ the B bus arrives so that the travel time is $\tau_B.$

It is only marginally trickier to derive the mean trip duration (we will call it $T_1$) when we are willing to let one A bus pass by in the hopes that the next bus will be the faster B bus. The answer is

$T_1 = \displaystyle{\frac{1}{\mu_A + \mu_B} + \frac{\mu_A T_0}{\mu_A + \mu_B} + \frac{\mu_B \tau_B}{\mu_A + \mu_B}}.$

The explanation of the second term in the above formula is that if A arrives first, we let it pass and we are back to the "let zero buses pass" strategy. The rest of the terms in the equation for $T_1$ are the same as before. In general, for any $n \geq 1$ we have a recursion relation:

$T_{n+1} = \displaystyle{\frac{1}{\mu_A + \mu_B} + \frac{\mu_A T_n}{\mu_A + \mu_B} + \frac{\mu_B \tau_B}{\mu_A + \mu_B}}.$

We can now start asking questions like: "Under what conditions does letting the slow bus pass make sense (result in a shorter expected trip)?" What about letting two buses pass? When does that strategy pay off?
When does $T_1 \leq T_0?$ Comparing the formulas above we arrive at a simple condition on the arrival rate of the fast bus which is independent of the arrival rate of the slow bus $\displaystyle{\mu_B \geq \frac{1}{\tau_A - \tau_B}} \quad \mathrm{(1)}.$ For example, if the slow bus takes 30 minutes and the fast bus takes 20 minutes to arrive at the destination, it makes sense to let the slow bus pass if the fast bus arrives more frequently than once in 10 minutes. No big surprise there, anybody with a modicum of common sense could tell you that. What is surprising is that the condition (1) does not depend on the arrival rate of the slow bus. Did I make a mistake? It turns out that when $T_1 = T_0,$ the expected travel times for other strategies are exactly the same! I will leave the proof to my esteemed reader as homework :) Therefore, since it does not matter how frequently the slow bus comes, if the fast bus comes frequently enough (condition (1) is satisfied), it makes sense to wait for the fast bus no matter how many slow buses pass. Tagged with: Bus waiting times, decisions., stochastic model, transportation ## What is the mpg of a bicycle? Posted in Transportation by Alexander Lobkovsky Meitiv on November 10, 2010 The next green thing? It seems a silly question at first. Digging a little deeper, it is easy to convince yourself that when you travel somewhere by bike, your body burns more calories than if you sat in your office chair. The extra calories came from the extra food you had to eat. The land that was used to produce the food you had to eat could have been used to grow corn and make ethanol. The amount of ethanol produced from the corn required to travel a mile by bike is undoubtedly small, but how small? To compute my bike mpg we will need three numbers: 1. Extra calories burned per mile of bike travel at roughly 13 miles per hour (my average commuting speed). For relatively flat terrain and my weight this number is roughly 42 food calories per mile (obtained from about.com). 2. We need a food equivalent for the ethanol production. Let’s say I go my extra calories from eating sweet corn. According to the same source, sweet corn has 857 food calories per kilogram. So I will need to eat 49 grams of sweet corn per mile traveled at 13 miles per hour on my bike. 3. Now we need to know how much ethanol can be made from 49 grams of sweet corn. The Department of Energy’s Biomass Program to the rescue. According to their website, a metric ton of dry corn can theoretically yield 124.4 gallons of ethanol. Since sweet corn is 77% water, this means that up to 0.0014 gallons of ethanol can be made from 49 grams of sweet corn. Putting these numbers together we arrive at 0.0014 gallons of ethanol per mile or…drumbeat please: ## 701 mpg This number is not small, but neither is it very large! There exist experimental vehicles that seat four and achieve over 100 mpg. When fully loaded, the effective, per passenger mpg is 400. If my calculations are correct, Technology is about to bring motorized transport close to the efficiency of a person on a bike! Tagged with: Bicycle, ethanol, green transportation, mpg, transport ## Should you switch lanes in traffic? Posted in Statistics, Transportation by Alexander Lobkovsky Meitiv on June 24, 2010 Switching lanes in heavy traffic can indeed increase your average speed if done right. If you drive like me, you have no patience for bumper to bumper traffic. There is gotta be a way to beat it somehow, right? 
Do you sneak into an opening in a neighboring lane if it is moving faster? Do you set goals like: “when I get in front of that van, I’ll switch back?” It doesn’t always seem to work. A lane that was zooming by you comes to a dead stop when you switch into it. If the motion of each lane is random, is there a way to switch lanes and move faster than a car that stays in lane? It turns out there is a way to beat the traffic. To show this we will use a simple model of traffic flow introduced by Nagel and Schrekenberg (see the previous post). The model consists of a circular track with consecutive slots which can be empty of occupied by cars. Cars have an integer velocity between 0 and vmax. As we saw in the previous post, simple rules for updating the positions and velocities of the cars can reproduce the traffic jam phenomenon thereby a dense region forms in which the cars are at a standstill for a few turns and then, as the jam clears in front of them, the cars accelerate and zoom around the track only to be stuck in the jam again. The jam itself moves in the direction opposite to that of the cars. Now imagine that we put two of the circular tracks (or lanes) side by side. For starters, let’s require all cars except one to stay in their respective lanes. One rogue car can switch lanes. Can the rogue with the right lane switching strategy move faster than the rest of the cars on average? The answer is most certainly yes although finding the best lane switching strategy is a difficult computational problem. What we are going to do here is compare two lane switching strategies that at first sight seem equally good. What we will discover is that it the lane changing strategy matters. As you might have suspected, if you don’t do it right, you might actually move slower than the rest of the traffic! Here are the two simple strategies we will compare (I suggest you read the previous post for the description of the model): 1) “Stop-switch:” if the slot directly ahead is occupied, switch if the space in the other lane directly across is not occupied. 2) “Faster-switch:” if the car directly ahead in the neighboring lane is moving faster, switch if there is space available. The graph above compares the two strategies. It shows the percent improvement of the rogue’s average speed compared to the average speed of the rest of the cars as a function of the car density. When density is low and traffic jams are rare, switching lanes has almost no effect on your average speed for both strategies. When the density is high and traffic jams are abound, switching can make you go slower than the rest of the traffic. The reason is that when a space in the neighboring lane opens up, it is likely to be at the tail end of a jam whereas the jam in the lane you just switched out of might be already partially cleared. The final remark is that the “Stop-switch” strategy is significantly better improving the speed by as much as 35% whereas the best “Faster-switch” can do is a 15% improvement. Finally let me mention that if all cars switch lanes and use the same strategy, nobody wins. All cars move with the same speed on average. That average speed could be smaller or larger (depending on the car density and the switching strategy) than in the case when everybody says in lane. The graph below explains why everyone is so keen on the advice “Stay in lane!” It turns out that if everyone uses the “Faster-switch” strategy, the average speed is drastically lower for everyone than if everyone stays in lane! 
The reason for this dramatic result is that when you change lanes, the car behind is likely to slam on the brakes which slows everyone down. ## People are the real cause of the traffic jam Posted in Statistics, Transportation by Alexander Lobkovsky Meitiv on May 12, 2010 Sometimes traffic slows to a crawl for no apparent reason When the traffic on the beltway is moving at a snail’s pace without an obvious reason (like construction or accident), I frequently wonder: “why can’t everyone just go faster?” If all car’s were driven by computers that could talk to each other, a clever synchronization algorithm, could allow all cars travel in unison and thus prevent congestion that is not a result of lane closure. Alas, this technotopia is still decades away and cars will be driven by humans for the foreseeable future. In the meantime we can but wonder: “what is it about the way people drive that causes the traffic to jam when the density of cars becomes too great?“ Traffic flow is frequently studied because it is an example of a system far from equilibrium. The practical applications are important as well. Many models from crude to sophisticated have been advanced. Massive amounts of data exist and are frequently used to estimate model parameters and make predictions. I am not going to attempt to review the vast field here. My goal is simply to elucidate the physiological limitation of the human mind that causes the driving patters leading to congestion. Although great progress has been made in modeling traffic as a compressible fluid, a class of models that fall into the category of Cellular Automata are more intuitive and instructive. Cellular Automata, promoted by Stephen Wolfram of Mathematica fame as the solution to all problems, are indeed quite nifty. It turns out that autonomous agents, walking on a lattice and interacting according a simple set of rules can reproduce a surprising variety of observed macroscopic phenomena. If you want to learn more the Wikipedia article is a good start. A pioneering work of Nagel and Schreckenberg published in Journal de Physique in 1992 introduced a simple lattice model of traffic which reproduced the traffic jam phenomenon and came to a surprising conclusion that the essential ingredient was infrequent random slowdowns. You have probably done so yourself, you change the radio station or adjust the rear view mirror, or speak the child in the seat behind you. As you do so, your foot eases off the accelerator ever so slightly irritating the person behind you who has to disengage the cruise control. You and people like you are responsible for the traffic jams when the volume is heavy but there are no obvious obstructions to traffic. Allow me to reproduce the authors’ description of the model since it is concise and elegant: “Our computational model is defined on a one-dimensional array of L sites and with open or periodic boundary conditions. Each site may either be occupied by one vehicle, or it may be empty. Each vehicle has an integer velocity with values between zero and vmax. For an arbitrary configuration, one update of the system consists of the following four consecutive steps, which are performed in parallel for all vehicles: 1. Acceleration: if the velocity v of a vehicle is lower than vmax and if the distance to the next car ahead is larger than v + 1, the speed is increased by one. 2. Slowing down (due to other cars): if a vehicle at site i sees the next vehicle at site i + j (with j < v), it reduces its speed to j. 3. 
Randomization: with probability p, the velocity of each vehicle (if greater than zero) is decreased by one. 4. Car motion: each vehicle is advanced v sites.” Without the randomizing step 3) the motion is deterministic: “every initial configuration of vehicles and corresponding velocities reaches very quickly a stationary pattern which is shifted backwards (i.e. opposite the vehicle motion) one site per time step.” The model exhibits the congestion phenomenon when the mean spacing between the cars is smaller then vmax. Below are the links to the simulations of the model for a circular track with 100 lattice sites, the cars are colored circles which move along the track. It helps to follow a particular color car with your eyes to see what’s happening. The two simulations are done with 15 cars (density lower than critical) and with 23 cars (above the critical density–exhibits congestion). As you probably guessed vmax=5 in these simulation hence 20 cars correspond to the critical density. The probability of random slowing down is 10% per turn. Free flowing traffic in a simulation of the Nagel-Schreckenberg model below the critical density threshold. The second simulation (above the critical density) shows the development of a jam of 5 cars. Cars zoom around the track and then spend 5 turns not moving at all, before the traffic clears ahead of them and they can accelerate to full velocity again. The moral of the story? People like you and me can be the cause of traffic congestion! ## Unavoidable Attraction Posted in Statistics, Transportation by Alexander Lobkovsky Meitiv on March 2, 2010 When traffic is heavy, buses tend to form hard to breakup bunches. Everyone who rides buses in a city is familiar with the dreaded “bus bunching” phenomenon. Especially during rush hour, buses tend to arrive in bunches of two, three or even more. Why is that? To begin understanding this phenomenon we must first assimilate the notion of fluctuations. The bus’s progress along the route, though ideally on schedule, in practice is not. At each stop there is a difference between the actual and the scheduled arrival time. The nature of the fluctuations is such that this difference tends to grow along the route of the bus. In technical terms, the bus’s trajectory is called a directed random walk. There are several sources of the fluctuations: stop lights, variation in the number of passengers to be picked up and discharged and, of course, traffic. When fluctuations are strong, and/or, the buses are frequent, it is unavoidable that consecutive buses find themselves at the same bus stop. What happens afterwards is less clear cut. It seems that it is virtually impossible for the buses to separate again. From that point on the two (or more) buses travel in a bunch. The average speed of a bus bunch is frequently greater than that of an isolated bus and therefore bunches tend to overtake and absorb buses that are ahead. Let’s try to come up with a plausible explanation for the two observed phenomena: Why do bus bunches not break up naturally? Why is the average speed of the bunch different from the average speed of an isolated bus? Let’s tackle the questions one at a time. When don’t bunches break up? There could be several reasons. Without real field data, I am afraid, we won’t be able to say for certain which factor is the most important. Possible reason #1: Excluded volume interactions. Analogy with colloids. Colloids are suspensions of small solid particles in a fluid. 
It is a well know phenomenon, readily reproducible in a lab, that when you combine colloidal particles of two substantially different sizes, they tend to separate even if the particles themselves are not attracted to each other. It may be counterintuitive, but the system can increase its entropy by separating particles by size. Once a small particle escapes from the aggregate of large particles, it is extremely unlikely to make it back there. The same size separation might happen in traffic, although likely for different reasons. How much do you like being sandwiched between a bus and a dump truck? You try to get the hell out of there at the first opportunity. So spaces between traveling buses may be unlikely to be filled up with cars. In a sense, there is an effective attraction between buses cased by the car’s avoidance of the space between them. One would certainly need data to support or reject the excluded volume hypothesis of bus attraction. Possible reason #2: Correlation between the number of waiting passengers and the distance to the nearest bus ahead. Now this idea is something we could sink our teeth into. Suppose that the gap between two buses shrinks due to a random fluctuation of unspecified nature. Then, the mean number of passengers waiting for the second bus, which is proportional to the wait time (if the passengers arrive at the bus stop at a fixed rate), also decreases. Therefore the second bus will spend less time picking up passengers, it’s mean velocity will therefore increase and it will catch up with the bus ahead. We can therefore say that the state with evenly spaced buses along the route is unstable to collapse. This idea can be formalized in the following simple toy model. Suppose there is a circular route with equidistant stops (a linear route is really circular if the buses turn around at the end of the route and go back immediately). Initially a number of buses are uniformly distributed along the route. Passengers arrive at all bus stops at a fixed rate. The time a bus spends at a stop is proportional to the number of passengers waiting there. Passenger discharge can be included in the model. However it does not qualitatively affect the results. There are two important parameters in this model: 1) the product of the travel time between stops and the rate of passenger arrival. This parameter determines whether the bus spends most of its time traveling or picking up passengers. 2) The ratio of the number of stops to the number of buses. It turns out that if the first parameter is large (most time is spent traveling) or the second parameter is small (there are lots of buses), bunching does not occur. However, as illustrated in the figure below, there is a realistic parameter range in which bunching does occur and bunches have no chance to break up. In the figure below (which presents the output of the simple model above), the three buses were initially well spaced. Eventually, buses 1 and 2 form a bunch which catches up to bus 3. Once the bunch of two buses is formed, the buses leapfrog each other and pick up passengers at alternating stops. Here is therefore the answer to our second question why bunches travel faster: each bus only has to accelerate/decelerate less frequently since they only stop at every other stop. Hence the average speed is greater. It would be fun to go out there and time some bus arrivals to see if they can be well described by the model. Any takers? 
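The toy model in this post is easy to simulate. Below is a rough Python sketch under the stated assumptions (circular route, equidistant stops, passengers arriving at a fixed rate, stop time proportional to the number waiting); the parameter values, the function name, and the fixed per-stop travel time are my own choices, not the author's.

```python
import numpy as np

def simulate_buses(n_stops=30, n_buses=3, arrival_rate=0.5,
                   travel_time=1.0, board_time=0.2, n_steps=2000, dt=0.05):
    """Toy bus-bunching model: buses circle a route of equidistant stops,
    passengers arrive at a fixed rate, and the dwell time at a stop is
    proportional to the number of passengers waiting there."""
    positions = np.linspace(0.0, n_stops, n_buses, endpoint=False)  # start evenly spaced
    waiting = np.zeros(n_stops)   # passengers waiting at each stop
    dwell = np.zeros(n_buses)     # remaining boarding time for each bus
    history = []
    for _ in range(n_steps):
        waiting += arrival_rate * dt          # passengers accumulate at every stop
        for b in range(n_buses):
            if dwell[b] > 0:                  # still boarding: bus does not move
                dwell[b] -= dt
                continue
            positions[b] = (positions[b] + dt / travel_time) % n_stops
            stop = int(positions[b])
            if positions[b] - stop < dt / travel_time:   # just crossed a stop
                dwell[b] = board_time * waiting[stop]    # dwell ~ queue length
                waiting[stop] = 0.0                      # everyone boards
        history.append(positions.copy())
    return np.array(history)

# Shrinking gaps between consecutive buses indicate bunching.
traj = simulate_buses()
final = np.sort(traj[-1])
gaps = np.append(np.diff(final), final[0] + 30 - final[-1])  # circular gaps (n_stops = 30)
```

Whether the evenly spaced configuration collapses into a bunch depends, as the post says, on the ratio of travel time to boarding time and on the number of buses per stop, which is exactly what varying these parameters lets you explore.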
Correlation between the space between buses and the number of waiting passengers results in the bunching behavior. Tagged with: Bus bunching, correlations, instability.

## Decisions, decisions, decisions…

Posted in Statistics, Transportation by Alexander Lobkovsky Meitiv on February 25, 2010

This entry is about how the amount of information at the time of a decision can increase the efficacy of the outcome. The specific case I will talk about is public transport. Have you ever been on a bus that sat at a red light only to stop again at a bus stop right after passing the intersection? Did you wonder if it would be better to have the bus stop located before the light? Wonder no more! If you read on, we will answer this question and a few others using simple statistics and a few carefully chosen assumptions.

Let us first compute the average waiting time at a red light. Let’s say the light has only two states: red and green, which alternate. The durations of the red and green lights are fixed and are $t_r$ and $t_g$. Suppose that the bus arrives at a light at a random time. Then its average waiting time at the red light is $t_\ell=\frac{1}{t_r+t_g}\int_0^{t_r}t\,dt=\frac{t_r^2}{2(t_r+t_g)}$ . This is because we assume that the bus arrives at the light at a random time. Without any prior information, the distribution of arrival times is uniform. The behavior of the light is periodic with period $t_r+t_g$ and thus the probability of arriving in any time interval $dt$ is $dt/(t_r+t_g)$. For example, if the red and the green lights are equally long, $t_g=t_r$, the average wait at a stop light will be a quarter of the red light duration, $t_\ell = t_r/4$. (To derive that, substitute $t_g=t_r$ into the equation above.)

Now let’s add the bus stop to the equation. We will assume that the bus stops for a fixed time $t_s$. Fluctuations in the stopping time can be added to the model; however, the calculations become a bit more involved and the result does not change qualitatively. The questions are: 1) What is the total stoppage time $t_w$: red light + bus stop? 2) Does it depend on whether the bus stop is before or after the red light?

If we know anything about information theory, our answer to the second question is NO without doing any algebra. Why? Because the bus arrival time is random and uncorrelated with the timing of the stop light. There is no information that can distinguish stopping before and after the intersection. If the stop is after the light, the bus has to wait at the red light for the time $t_{\ell}$ just computed above. If the bus stops before the light, the “arrival” time is the time at the end of the stop and it is just as random as the arrival to the stop. Therefore, the average total stoppage time is just $t_\ell + t_s$ regardless of whether the stop is before or after the light.

How can the total stoppage time be reduced? After all, this post is about the efficiency of mass transit. The answer, again from the point of view of information theory, is the following. To improve efficiency, we must use available information to make decisions which make the arrival (or departure) time of the bus correlated with the timing of the light. In Switzerland, public support for mass transit is so strong that people accept that the trolleys actually change the timing of the stop lights to speed up passage at the expense of cars. Here in America this approach may not fly.
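As an aside (not part of the original post), the waiting-time formula above is easy to sanity-check with a quick Monte Carlo experiment; the numbers below use assumed 30-second red and green phases:

```python
import numpy as np

rng = np.random.default_rng(0)

def average_red_light_wait(t_r, t_g, n=1_000_000):
    """Average wait of a bus arriving at a uniformly random phase of a red/green cycle."""
    phase = rng.uniform(0.0, t_r + t_g, size=n)    # phase < t_r means the light is red
    wait = np.where(phase < t_r, t_r - phase, 0.0)
    return wait.mean()

t_r, t_g = 30.0, 30.0
print(average_red_light_wait(t_r, t_g))   # ~7.5, i.e. t_r / 4
print(t_r**2 / (2 * (t_r + t_g)))         # the closed form t_l = t_r^2 / (2 (t_r + t_g))
```

With that check out of the way, back to the question of where the bus stop should go.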
However, even if the timing of the stop light cannot be changed by the bus/trolley driver, they still have the power to make decisions that would change the total stoppage time. In the example above, the bus stop was always before or after the intersection. Suppose the driver could decide, based on some information about the phase of the stop light, whether to stop before or after the intersection.

Let’s call the scenario in which the bus driver does not make a decision where to stop the “null model” or the “no-decision” model. As a better alternative, consider the “red-before” scenario in which the driver stops before the intersection if the bus arrives on the red light and after the intersection if the bus arrives on the green light. What is the average stopping time $t_w = t_s + \Delta$? I am not going to bore you with the tedious derivation. The result itself is a bit complicated as we have to consider 4 separate cases. I am going to give a formula for the extra waiting time $\Delta$ on top of the regular stop duration $t_s$. Let’s first define: $I_1=\frac{(t_r-t_s)^2}{2(t_r+t_g)}$ and $I_2=\frac{(t_s-t_g)(t_r-\frac{1}{2}(t_s-t_g))}{t_r+t_g}$. Then the extra waiting time is

- $\Delta=0$ for $t_r \le t_s \le t_g$
- $\Delta=I_1$ for $0\le t_s \le \min(t_r,t_g)$
- $\Delta=I_2$ for $\max(t_r,t_g)\le t_s \le t_r+t_g$
- $\Delta=I_1 + I_2$ for $t_g \le t_s \le t_r$

If $t_s \ge t_r + t_g$, just replace $t_s$ everywhere with its remainder when divided by $t_r + t_g$.

To illustrate these formulas, here are the graphs comparing the extra stoppage time $\Delta$ for the “no-decision” and the “red-before” scenarios as a function of the stop duration $t_s$ for two different ratios of the red to green light durations. Comparison of the extra waiting time as a function of the bus stop duration for two different decision scenarios and the red light twice as long as the green light. Comparison of the extra waiting time for the green light twice as long as the red light.

Note that for a certain range of bus stop durations, the extra waiting time vanishes completely! The “red-before” scenario, which uses only the information about the current state of the stop light, does quite well compared to the “no-decision” scenario. When the green light is longer than the red light, the extra waiting time vanishes altogether if the stop duration is chosen properly.

Can we do better? Yes! The more information is available to the driver, the better the strategy for deciding where to stop can be. We can imagine, for example, that when the bus arrives at a red light, the driver knows when it will turn green again. Or, the driver can have complete information and also know the duration of the following green light.

Let us compute the extra waiting time for the best stopping strategy with complete information. How much better does it do than the “red-before” strategy which uses only the information about the current state of the stop light? The best stopping strategy which uses all available information is the following. Suppose the bus arrives on a red light. The time till the light change is the extra waiting time if the driver decides to stop after the intersection. This time needs to be compared to the extra waiting time which might result if the bus stops before the intersection. This might happen if the total stop duration is longer than the remainder of the red light plus the following green light so that the light is red again after the bus stop is completed.
The best decision will depend on when the bus arrives, the duration of the red and green lights, and the duration of the bus stop. I am going to leave you with a comparison of the extra waiting time for the “red-before” strategy with the best stopping strategy with perfect information about the phase of the stop light (length of red, green, time till change).

### The moral of the story: “Information is power!”

Perfect information helps reduce the extra waiting time when the red light is longer than the green and when the bus stop duration is longer than that of the green light. Tagged with: Bus stop duration, decisions, information, optimization
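The comparison in this post can also be reproduced numerically. Here is a rough sketch (my own addition, with assumed light and stop durations) that estimates the extra waiting time $\Delta$ for the three scenarios, no decision, “red-before”, and the best strategy with perfect information, by averaging over random arrival phases:

```python
import numpy as np

rng = np.random.default_rng(0)

def extra_waits(t_r, t_g, t_s, n=200_000):
    """Average extra waiting time (on top of t_s) for three stopping strategies."""
    cycle = t_r + t_g
    phase = rng.uniform(0.0, cycle, size=n)        # arrival phase; red during [0, t_r)

    def time_to_green(tau):
        p = tau % cycle
        return np.where(p < t_r, t_r - p, 0.0)

    red = phase < t_r
    no_decision = time_to_green(phase)                            # always stop after the light
    red_before = np.where(red, time_to_green(phase + t_s), 0.0)   # stop before iff arriving on red
    perfect = np.where(red,
                       np.minimum(time_to_green(phase),           # wait, cross, stop after
                                  time_to_green(phase + t_s)),    # stop before, then cross
                       0.0)
    return no_decision.mean(), red_before.mean(), perfect.mean()

for t_r, t_g, t_s in [(60, 30, 40), (30, 60, 40)]:
    nd, rb, pf = extra_waits(t_r, t_g, t_s)
    print(f"t_r={t_r} t_g={t_g} t_s={t_s}:  no-decision {nd:5.2f}  "
          f"red-before {rb:5.2f}  perfect info {pf:5.2f}")
```

For the second parameter set the “red-before” extra wait already vanishes (the case $t_r \le t_s \le t_g$ above), while in the first the fully informed driver does noticeably better than “red-before”.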
http://mathoverflow.net/questions/80025/is-the-duflo-map-for-lie-algs-unique/80045
Is the Duflo map for Lie algs. unique ? Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) The Duflo map is the map S(g) -> U(g), which known to satisfy the following properties: 1) identity on g 2) isomorphism of g-modules (and in particular vector spaces) 3) restricted to Poisson center on S(g) it is ISOMORPHISM of commutative algebras S(g)^g to ZU(g) (the center of U(g)). (This is highly non-trivial property). It predicts that the centers on the "classical" and "quantum" level are the same. (Kontesevich generalized to arbitrary Poisson variety). The question: Is it the only map satisfying such properties ? (At least for semisimple Lie algebras) ? (I think answer is YES, and it should be known, but I have not seen the reference). ========= Example let g-commutative, then it is true: Since S(g)=U(g) and since the map is required to be identity on g and it is homomorphism, so it is identity map on the S(g). === If you read this question I guess you know what S(g) and U(g) means :) But may be nevertheless to keep good spirit of MO it is more polite to include definitions: S(g) - means commutative algebra - symmetric algebra of g (i.e. just take some basis xi in g and consider polynomial algebra C[x1 ... xn] - this is S(g) as a vector space. Lie algebra g acts on - it simple way - it acts on xi by adjoint action, and continued further by Leibniz rule). U(g) - means NON-commutative algebra - universal enveloping algebra of g - which is: you take basis xi and consider non-commutative polynomials C[xi] where generators satisfying the relations: [xi, xj] = C_ij^k x_k , where C_ij^k - are structure constants of g. The Duflo map is for example discussed here: D. Calaque, C. Rossi "Lectures on Duflo isomorphisms in Lie algebras and complex geometry" http://people.mpim-bonn.mpg.de/crossi/LectETHbook.pdf ========= Kontsevich mentions that in general situation (of Poisson manifolds) Grothendieck-Teichmuller groups should act on quantizations and in particular on "Duflo" type maps, but he writes that for semisimple algebras this action is trivial. So we should not have problem from this side. Moreover as far as I understand since nobody seen non-trivial example of this action it might be always like this. - 2 +1 for being polite and keeping the "spirit of MO". – Theo Johnson-Freyd Nov 4 2011 at 17:16 2 Answers Choose a map $\varphi$ satisfying these properties and make the difference $\psi=\varphi^{-1}\varphi_D$ with the Duflo map. Then $\psi$ is an automorphism of the $\mathfrak g$-module $S(\mathfrak g)$ which is the identity on $\mathfrak g$ and is multiplicative on invariants. If you want this map to be universal (namely it should only involve universal formulae in terms of the Lie bracket) then it is very likely to be unique. But if you want the statement to be true independantly for every single Lie algebra, I would believe the answer is "no". Namely, consider the $2$-dimensional solvable Lie algebra $\mathfrak g:={\bf k}x\oplus {\bf k}y$ with $[x,y]=y$. Now we have that $S(\mathfrak g)={\bf k}[x,y]$ and that $S(\mathfrak g)^{\mathfrak g}={\bf k}$. Therefore any non-trivial automorphism of the $\mathfrak g$-module $S^{\geq2}(\mathfrak g)$ (e.g. a non-trivial multiple of the identity) gives a counter-example. . - 1 Aside: I would call the 2-dimensional Lie algebra "solvable", not "nilpotent", because $[x,-]$ is not acting nilpotently. – Theo Johnson-Freyd Nov 4 2011 at 17:22 God, it's a shame. I have fixed it. Thank you. 
– DamienC Nov 4 2011 at 22:15 @Damien Thank You very much ! What about semisimple case ? – Alexander Chervov Nov 5 2011 at 18:14 I don't know. I'll think about it. – DamienC Nov 7 2011 at 7:52 If you take $\mathfrak{sl}_2$ then invariant polynomials form a polynomial algebra in one variable $k[x]$, where $x=\sigma_2=tr(ad^2)$. In particular there are no invariant polynomials of odd degree. So you have a lot of flexibility. You could e.g. take the identity map on $S^k(\mathfrak{g})$ for $k\neq 3$ and multiply by $2$ on $S^3(\mathfrak{g})$. This would give you a $\psi$ which is not the identity. – DamienC Nov 24 2011 at 16:28 You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Well, a look at These notes of Calaque and Rossi would suggest (Remark 1.3) that the answer is no: $e^{\mathrm{tr}(ad)}$ is an automorphism of $S(\mathfrak{g})$, precomposing with this automorphism you get a different Duflo isomorphism. They call it modified Duflo element. We just need to check that the automorphism is the identity on $\mathfrak{g}$, which seems to be true if $\mathfrak{g}$ is unimodular (in particular if $\mathfrak{g}$ is semisimple). Edit: oops, the notes I'm attaching are also mentioned in your post, so I might be wrong cause you obviously read them. Ahh I see, it seems that the whole $e^{tr(ad)}$ is the identity in the unimodular case. -
http://mathoverflow.net/questions/24818/beginning-a-sentence-with-a-mathematical-symbol/24860
## Beginning a sentence with a mathematical symbol ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) This is more of a mathematician question than a mathematics question, but I hope it is still appropriate. Several times now, when friends have been editing my mathematical writing, they have pointed out instances where I began a sentence with a mathematical symbol, such as $X$, or $a\in A$, etc. This is the sort of thing that never struck me as inherently bad writing, since it is often exactly how I would speak the sentence aloud. However, I can also see their point, in that a sentence beginning with a mathematical symbol can look a little weird (especially when it starts with a lowercase letter), and doesn't quite read the same as a sentence beginning with an English word. However, now that I have been trying to avoid doing this, more and more I find myself writing awkward sentences to avoid the most natural way of speaking, and I can't quite decide if I am increasing or decreasing the quality of my writing. Do people have strong opinions on this? Do people have tricks for dealing with this, besides a succession of empty and synonymous clauses ("Thus", "Then", "Therefore", "We find", "Looking at"...)? - 24 I do hope you've seen Serre's lecture on mathematical writing. Remember, write "The map f", or "The function f" instead of just "f". "The manifold M", "The space X", "The scheme S", "The element e", etc. – Harry Gindi May 15 2010 at 21:27 11 "Now" is useful. – Gerald Edgar May 15 2010 at 21:27 2 There is a link to modular.fas.harvard.edu/edu/basic/serre from Serre's wikipedia page. The linked page has a link to a file serre.avi that just might be the lecture in question. (It's 459 megabytes – I am downloading it now … 23 minutes download time remaining.) – Harald Hanche-Olsen May 15 2010 at 22:55 7 I'd also recommend Knuth's "Mathematical Writing" (ISBN 088385063X). Page 1, point 2: "Don't start a sentence with a symbol." – Blue May 15 2010 at 23:04 2 I've hit this question with the wiki hammer. Any question that comes down to "who has strong opinions about X?" should be wiki, please. – Scott Morrison♦ May 16 2010 at 19:34 show 5 more comments ## 8 Answers I also try to avoid starting sentences with a mathematical symbol. I feel strongly about it, but I can't quite articulate why. One trick I use that you haven't mentioned yet is something like, "The space $X$ ..." or "The function $f$...". - 4 Greg, there is a difference between speaking and writing, so you shouldn't worry that writing doesn't always match speaking or that writing may take more effort than speaking. One concrete reason to avoid symbols at the start of a sentence is that it can be hard to see where a sentence ends and begins if you use symbols to start them. (Imagine a symbol ends a sentence, then a period, then another symbol.) Also, Serre has a list of writing tips which includes this advice. Amazingly (to me), some mathematicians don't mind starting sentences with symbols. But please stick to avoiding that! – KConrad May 15 2010 at 21:32 3 There was an earlier MO question on punctuation in math formulas at mathoverflow.net/questions/6675/… with a strong opinion expressed by Allen Hatcher. Greg, I think you can get any answer you wish on your writing questions by asking enough faculty in your department. – KConrad May 15 2010 at 21:46 4 A friend of mine often wrote sentences like "Since for all `$x>0$`, `$f(x)>0$`, `$\int_0^1 f>0$`. 
Despite they didn't begin with a symbol and all the punctuation was in place, I usually had to spend a few seconds figuring out the meaning (though, if that had been read aloud to me with proper pauses, I would have no trouble with it). So, in my humble opinion, the more words between the formulae, the better... – fedja May 16 2010 at 1:55 6 @fedja: "Let X be an E.C. with. C.M.", where "with." of course stands for "without". – Harry Gindi May 16 2010 at 6:44 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. You shouldn't do it. This is covered in Halmos's justifiably famous article "How to write mathematics". (You'll find lots of copies of this on the web.) It's very closely related to the rule that you should avoid having notation separated by punctuation. - If one stoops to starting sentences with a symbol, then one soon descends to finishing a sentence with a symbol and starting the next with a symbol. Then one is liable to finish a sentence with a symbol and start a sentence with the same symbol. To the reader (if you still have one) madness beckons. - I share the dislike that many people have expressed for starting sentences with symbols. In trying to explain why, I wanted to come up with an example that would illustrate the temptation to do so, but I find it surprisingly hard. Suppose, for instance, that f is so clear from the context that to use Tara's trick and start a sentence with "The function f" would be ridiculous. (In general, by the way, using Tara's trick can add clarity by reminding the reader what is what, so I am talking about a special circumstance here.) Now suppose that we want to say something about f, such as that it is a homomorphism. It is very hard to think of a natural context in which one would actually want to start a sentence with "f is a homomorphism". Nearly always there would be a reason for saying that, or a justification for it, such as, "It is not hard to check that", or "We have already seen that". And if it came out of the blue somewhere, then one would want to signal a slight change of subject with a word such as "Now" (a trick that has already been mentioned). So I'm tempted to say that if your prose is flowing properly the problem shouldn't arise, or should arise very infrequently. - I think I agree. I can't actually construct a situation where it doesn't seem somehow nicer to add "Now" or "Then" or "Consider when" or something similar. – Robby McKilliam May 15 2010 at 23:01 1 Examples can be found if you think about statements of theorems. Someone not used to avoiding symbols at the start of a sentence may be inclined to write an equation as the theorem (with quantifiers following rather than preceding it). For example, Mike Rosen told me that he doesn't have a concern about starting a sentence with a symbol and if you look at Ireland & Rosen's "A Classical Introduction to Modern Number Theory" (2nd ed.) you'll see Theorem 3 on p. 212 (or, for that matter, most of the lemmas which precede that theorem) illustrates the possibilities. – KConrad May 15 2010 at 23:17 2 The best examples I've found occur when you have a list of conditions. For example, "Theorem 1. The function f has property X if and only if one of the following is true: (1) f is surjective. (2) f is open. (3) f is bounded." 
– Vectornaut May 16 2010 at 1:52 3 @Vectornaut: If you're giving the conditions in an ordered list, I think that it's fine to put the symbol at the beginning of the sentence because it's clearly set off from the prose. – Harry Gindi May 16 2010 at 6:48 1 Robby: "I can't actually construct a situation where it doesn't seem somehow nicer to add "Now" or "Then" or "Consider when" or something similar" I think one reason for this is, that many mathematicians often insert a "Now" to avoid beginning a sentence with a symbol, so you are used to this kind of constructions. If you asked a non-mathematician, I think he might prefer a construction that began with a symbol. I agree that you shouldn't begin a sentence with a symbol, but I don't think that these constructions are "nicer" English. – Sune Jakobsen May 16 2010 at 10:36 show 4 more comments This seems widely considered as bad form within mathematics papers. In some cases, it might not be clear that the previous sentence has ended, particularly if it also ends in mathematical typesetting, or the punctuation itself might be considered as part of the mathematics. This applies more generally, in that one should avoid placing too much mathematics around all punctuation marks. There are also capitilisation considerations, e.g. starting sentences with f(x). As already noted, it's usually very easy (and often desirable) to avoid this situation. - Since you almost never have a sentence like "x is in A" without an additional clause, it shouldn't be hard to reverse the order of the sentence. For example, "u and v are harmonic since f is analytic" could be changed to "As f is analytic, u and v are necessarily harmonic" or a number of variants thereof (since, as a consequence of, by virtue of). If there is some case where you would actually want a standalone like "x is in A" --- for example after a long lemma establishing that x is in A --- one could use flashier phrases, e.g. to wit, that is to say, whence etc. Good writing is a lot like good mathematics: it's more art than science. - I personally find that it is occasionally convenient and natural to start sentences with a mathematical symbol, but I have had coauthors who do not like it, and in this case I typically modify the sentence (into what I sometimes feel is more awkward). Tara has mentioned a nice trick for amending sentences starting with a symbol and this is typically the approach I have taken. Another approach that I find sometimes works is to consider merging the sentences (perhaps with a little restructuring) so that the symbol follows a comma or a conjunction such as `and'. I probably have an overly relaxed attitude towards the written language, but in my opinion, if it works and it is clear, do it. I can envisage situations where starting a sentence with a symbol makes the text confusing to read. Obviously it should be avoided in this case. - Under no circumstances can you have symbols then punctuation then other symbols. As a special case you can never have symbols at the end of one sentence and the beginning of the next. Outside of the above case, starting sentences with symbols is bad, but occasionally all other options are worse. - 4 When x > 0, x^3 > 0 too. Hmm.. – KConrad May 16 2010 at 5:17 Writing m = [G:H], mG is a subgroup of H. (Here groups are abelian.) – KConrad May 16 2010 at 5:36 Personally I must admit that both examples would take longer (though far from impossible of course) to parse than versions following Noah's rule. 
That I think should be the reason for doing it one way or other. (It is very tricky to apply such a principle however as it is very individual. I realised that after having done some programming I tended to put in more parentheses in formulas which probably makes them more difficult to read for some, perhaps most, mathematicians.) – Torsten Ekedahl May 16 2010 at 8:22 2 I would avoid writing either of Keith's examples. For example "Writing m=[G:H], we see that mG is a subgroup of H" or "When x>0, certainly x^3>0." – Noah Snyder May 17 2010 at 2:43 2 How about "For every x∈A, f(x) is prime"? I agree with "avoid", not with "[u]nder no circumstances": the issues are that (i) there's usually poor phrasing going on, and (ii) the typography is tricky. – Charles Stewart May 19 2010 at 12:29 show 3 more comments
http://unapologetic.wordpress.com/2010/11/18/the-endomorphism-algebra-of-the-left-regular-representation/?like=1&_wpnonce=9d779cb1c0
The Unapologetic Mathematician

The Endomorphism Algebra of the Left Regular Representation

Since the left regular representation is such an interesting one — in particular since it contains all the irreducible representations — we want to understand its endomorphisms. That is, we want to understand the structure of $\mathrm{End}_G(\mathbb{C}[G])$. I say that, amazingly enough, it is anti-isomorphic to the group algebra $\mathbb{C}[G]$ itself!

So let’s try to come up with an anti-isomorphism $\mathbb{C}[G]\to\mathrm{End}_G(\mathbb{C}[G])$. Given any element $v\in\mathbb{C}[G]$, we define the map $\phi_v:\mathbb{C}[G]\to\mathbb{C}[G]$ to be right-multiplication by $v$. That is: $\displaystyle\phi_v(w)=wv$ for every $w\in\mathbb{C}[G]$. This is a $G$-endomorphism, since $G$ acts by multiplication on the left, and left-multiplication commutes with right-multiplication.

To see that it’s an anti-homomorphism, we must check that it’s linear and that it reverses the order of multiplication. Linearity is straightforward; as for reversing multiplication, we calculate: $\displaystyle\begin{aligned}\left[\phi_u\circ\phi_v\right](w)&=\phi_u\left(\phi_v(w)\right)\\&=\phi_u\left(wv\right)\\&=wvu\\&=\phi_{vu}(w)\end{aligned}$

Next we check that $v\mapsto\phi_v$ is injective by calculating its kernel. If $\phi_v=0$ then $\displaystyle\begin{aligned}v&=1v\\&=\phi_v(1)\\&=0(1)\\&=0\end{aligned}$ so this is only possible if $v=0$.

Finally we must check surjectivity. Say $\theta\in\mathrm{End}_G(\mathbb{C}[G])$, and define $v=\theta(1)$. I say that $\theta=\phi_v$, since $\displaystyle\begin{aligned}\theta(g)&=\theta(g1)\\&=g\theta(1)\\&=gv\\&=\phi_v(g)\end{aligned}$ Since the two $G$-endomorphisms are equal on the standard basis of $\mathbb{C}[G]$, they are equal. Thus, every $G$-endomorphism of the left regular representation is of the form $\phi_v$ for some $v\in\mathbb{C}[G]$.
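For readers who like to see such statements concretely, here is a small numerical check (an illustration added here, not part of the original argument) for the symmetric group $S_3$: it builds the matrices of left multiplication and of $\phi_v$ on $\mathbb{C}[G]$ and verifies that $\phi_v$ commutes with the left action and that $\phi_u\circ\phi_v=\phi_{vu}$.

```python
from itertools import permutations
import numpy as np

# Elements of S_3 as permutation tuples; (g*h)(x) = g(h(x)).
G = list(permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}

def mul(g, h):
    return tuple(g[h[x]] for x in range(3))

n = len(G)

def L(g):
    """Matrix of left multiplication by g on C[G], in the basis of group elements."""
    M = np.zeros((n, n))
    for j, h in enumerate(G):
        M[idx[mul(g, h)], j] = 1.0
    return M

def phi(v):
    """Matrix of right multiplication by the group-algebra element v = sum_k v_k e_{g_k}."""
    M = np.zeros((n, n))
    for j, h in enumerate(G):
        for k, g in enumerate(G):
            M[idx[mul(h, g)], j] += v[k]
    return M

rng = np.random.default_rng(0)
u, v = rng.normal(size=n), rng.normal(size=n)

# phi_v is a G-endomorphism: it commutes with the left action.
assert all(np.allclose(L(g) @ phi(v), phi(v) @ L(g)) for g in G)

# phi_u . phi_v = phi_{vu}: the order of multiplication is reversed.
vu = np.zeros(n)
for i, a in enumerate(G):
    for j, b in enumerate(G):
        vu[idx[mul(a, b)]] += v[i] * u[j]
assert np.allclose(phi(u) @ phi(v), phi(vu))
print("both identities check out for S_3")
```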
http://mathoverflow.net/questions/74508?sort=oldest
## Finite order automorphisms of complex projective manifolds isotopic to identity ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Question. Let $V$ be a complex projective manifold of general type (we can even assume that the canonical bundle of $V$ is ample). Suppose $\varphi: V\to V$ is a non-identical automorphism. Can $\varphi$ be isotopic to the identity map (i.e. $\varphi\in Diff_0(V)$)? I hope the answer is no, and this can be easily proven when $K_V$ is very ample. More generally what restrictions are known on smooth manifolds that admit self-diffeos of finite order that are isotopic to identity? - ## 1 Answer The answer to your question is unknown already for surfaces $S$ of general type. Note that, if $S$ is simply connected, by a result of Quinn (see "Isotopy of 4-manifolds", Journal of Differential Geometry 1986) every automorphism acting trivially on rational cohomology must be topologically isotopic to the identity. At any rate, it seems that people conjecture that the answer to your question is no for simply connected surfaces of general type. See Catanese's paper "A Superficial Working Guide to Deformations and Moduli" (arXiv:1106.1368), Section 1.4 for further details. In this paper, complex manifolds which do not admit non-trivial automorphisms isotopic to the identity are called rigidified. - Francesco, huge thaks! This interesting, I am really surprised that this is yet unknown for surfaces of general type. – Dmitri Sep 4 2011 at 12:52 Dear Dmitri, you are welcome. I was really surprised too, when I read Catanese's survey. This seems really a very basic and interesting question. – Francesco Polizzi Sep 4 2011 at 13:05 Francesco, Frank Quinn (J. Diff. Geom. 1986) shows that for a simply connected compact 4-manifold, $\pi_0$ of the group of homeomorphisms is the group of automorphisms of the intersection form. Is this what you are referring to? If so, isn't differentiable isotopy a different story? – Tim Perutz Sep 4 2011 at 15:33 @Tim: let $V$ be a simply-connected compact $4$-manifold and $f \colon V \to V$ an automorphism acting trivially in cohomology. In particular $f$ acts trivially on the intersection form of $H^2$. Therefore by Quinn's result (Theorem 1.1), it follows that $f$ must be in the identity component of $\pi_0 \textrm{Top}(M)$, i.e. $f$ is isotopic to the identity. I'm missing something? – Francesco Polizzi Sep 4 2011 at 16:07 2 Francesco, Dmitri's question asked about differentiable isotopy, while Quinn's result is about topological isotopy. In dimension 4, there's a gulf between plain topology and differential topology... – Tim Perutz Sep 4 2011 at 21:18 show 2 more comments
http://math.stackexchange.com/questions/23085/intersection-multiplicity-and-partial-derivatives-of-algebraic-curves
# intersection multiplicity and partial derivatives of algebraic curves

This will probably be an easy-to-answer and not-well-posed question, since I'm a total beginner in the field, but here goes: Let $V(F)$ and $V(G)$ be two projective curves in $\mathbb{P}^2$ ($F,G\in\mathbb{F}[x_0,x_1,x_2]$ homogeneous polynomials) and $P\in V(F)\cap V(G)$ a point. Let $\mu_P(F,G)$ denote the intersection number (multiplicity), i.e. the number defined in Fulton's book on p. 36, section 3.3, and also in chapter 5. Is there any relationship between $\mu_P(F,G)$ and the possible equality of partial derivatives $$\frac{\partial F}{\partial x_i}\!\!\!(P)\overset{?}{=}\frac{\partial G}{\partial x_i}\!\!\!(P),\;\ldots,\;\frac{\partial^k F}{\partial x_i^k}\!\!\!(P)\overset{?}{=}\frac{\partial^k G}{\partial x_i^k}\!\!\!(P)\;?$$ Do the equalities hold for $k=\mu_P(F,G)$? If not, what is the correspondence? I assume the degrees of $F$ and $G$ must be involved? Thank you.

P.S. Some references from (preferably recent) books are highly desirable. -
http://mathoverflow.net/questions/37877?sort=newest
## Degree of canonical bundle? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Given a smooth complete intersection $X=D_{1} \cap D_{2} \cap \cdots \cap D_{k} \subset \mathbb{P}^{n}$ with ${\rm deg}\; D_i=d_i$, one can easily show that `$\omega_{X} \simeq \mathcal{O}_{X}(\sum_{i=1}^{k} d_{i} -n-1)$`, using induction on the number of hypersurfaces and the usual conormal sequence. Here is the question. Suppose $X \subset \mathbb{P}^{n}$ is a smooth projective variety of degree $d$, not necessarily a complete intersection. How to understand $\omega_{X}$ in terms of the embedding? Is it even necessarily true that $\omega_{X}$ is restricted from a line bundle on $\mathbb{P}^{n}$? Similarly, how to work out the cohomology of $\mathcal{O}_X$ and $\omega_X$? Does this only depend on the degree of $X$? - Complete interesections are quite special among projective varieties. I am doubtful as to whether there is much to say for an arbitrary projective variety. – Daniel Loughran Sep 6 2010 at 12:51 5 In general $omega_X$ is not the restriction of a line bundle on $\mathbb{P}^n$. An example is the twisted cubic curve in $\mathbb{P}^3$. In this case the canonical bundle has degree -2, whereas every line bundle obtained by restriction has degree divisible by 3. You can have a look on the final section of chapter IV of Hartshorne. He gives a discussion which pairs (d(C),g(C)) are possible for smooth space curves C. In particular, it is shown that for fixed d there are many possibilities for the genus of C. (Provided that d is not 1 or 2.) – Remke Kloosterman Sep 6 2010 at 13:05 ## 3 Answers Smooth (or Gorenstein) subvarieties in $\mathbb P^n$ whose canonical bundle is a restriction from $\mathbb P^n$ are known as subcanonical, and are very special. A rational twisted cubic in $\mathbb P^3$ is not subcanonical, for obvious reasons of degree. It is most certainly not true that the cohomologies of $\mathcal O_X$ and $\omega_X$ only depend on the degree: for example, consider a twisted cubic as above and a plane cubic embedded in $\mathbb P^3$. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. ````Take any curve at all, of any genus g, and any divisor of degree d > 2g. This embeds the curve into projective space with degree d, and a generic projection embeds it in P^3 also with any degree d > 2g. So d and n determine almost nothing about the curve. ```` On the positive side, interestingly, the nice counterexample given for the original question, a rational cubic in P^3, although not determined by its degree, is completely determined by its degree and the fact that (unlike the plane cubic) it spans P^3. (Rational normal curves are about the only examples I can think of, spanning but not a complete intersection, where d,n do determine all the invariants.) ````I guess you could give an inequality at least for the genus (i.e. h^1(O)) of curves in P^3, since a curve of degree d in P^3 projects to a plane curve of degree d-1, hence has genus bounded above by that of a general such plane curve. Indeed Castelnuovo has a famous such inequality. ```` - Im not sure if this counts as a full answer, but it is a nice example which will hopefully shed light on some of your questions. 
The canonical bundle $\omega_X$ of an Enriques surface $X$ satisfies $\omega_X \otimes \omega_X=\mathcal{O}_X$, but $\omega_X\neq \mathcal{O}_X$ in the Picard group. It follows that $\omega_X$ is not the restriction of any line bundle in $\mathbb{P}^n$, as these can't be non-zero torsion. - That argument seems a bit unclear as written --- you seem to be saying that the image under a homomorphism f: A -> B of a non-torsion element in an abelian group A must be a non-torsion element in B. (Maybe I'm misreading your intention, in which cases apologies; if so, maybe the answer could be rewritten a little for clarity. The idea is obviously correct, in any case.) – Artie Prendergast-Smith Sep 6 2010 at 14:24 1 The point is that the restiction of a line bundle from $\mathbb P^n$ is either ample, anti-ample, or zero, and only in this last case it can be torsion. – Angelo Sep 6 2010 at 14:27 Right. My point was that the phrase "as these can't be non-zero torsion" is ambiguous --- on first reading "these" seems to refers to line bundles on P^n, or at least it did to me. – Artie Prendergast-Smith Sep 6 2010 at 14:33 2 Just to build slightly on Angelo's comment, I was implicitly using the fact that if $X \subset \mathbb{P}^n$ is a non-singular projective variety which is not contained in a hyperplane, then the natural map $\mathbb{Z} \cong Pic(\mathbb{P}^n) \to Pic(X)$ is an injection. Hopefully this clears up my answer. – Daniel Loughran Sep 6 2010 at 15:06 Perhaps an even simpler argument is that, as Angelo points out, every non-trivial line bundle on $\mathbb P^n$ is either ample or anti-ample. The restriction of these, by definition, remain ample or respectively anti-ample, in particular non-torsion. The only remaining case is $\mathcal O_{\mathbb P^n}$ which restricts to $\mathcal O_X$ on any $X$. – Sándor Kovács Oct 29 2010 at 8:22
http://math.stackexchange.com/questions/149527/the-law-of-sines-and-a-second-forces-magnitude
# The Law of Sines and a second force's magnitude

How do you solve for the second force's magnitude? The 17' and 30' threw me off. -

## 1 Answer

Look for example at $29^\circ 17'$. That means $29$ degrees and $17$ minutes. But a minute is one-sixtieth of a degree, so $17$ minutes is $\frac{17}{60}$ degrees, approximately $0.2833333$ degrees. Therefore $29^\circ 17'$ is approximately $29.2833333$ degrees. Similarly, $76^\circ 30'$ just means $76+\frac{30}{60}$ degrees, that is, $76.5$ degrees. So in the "normal" formulas that you would use to find the magnitude of the resultant force, just use the appropriate decimal approximation. Some calculators allow direct input in degrees and minutes, and do the conversion to decimal automatically.

In old-fashioned astronomy, there is a finer subdivision still, the second. There are $60$ seconds in a minute. So $1$ second is $\frac{1}{3600}$ degrees. If you are told that an angle is $29^\circ 17'\, 34''$, that means $29+\frac{17}{60}+\frac{34}{3600}$ degrees. You could use a calculator to get a good decimal approximation to this, and then proceed as usual.

Remark: The use of minutes and seconds to measure angles is slowly (too slowly!) fading. However, you may need to acquire some ability to transform a decimal degree answer, such as $42.372$ degrees, to degrees, minutes, and perhaps even seconds. (It may be that in your answers, angles are expected to be given in degrees, minutes, and even seconds.) Sometimes a hybrid notation is used, dropping seconds, as in $46^\circ 27.4'$. -

There is some historical value to 'earth-based' units. One minute of latitude corresponds to a nautical mile, which makes for convenient estimates. The separation of scales reduces magnitude errors as well. However, as a child, computing in multiple bases was no fun (money had multiple bases too). – copper.hat May 25 '12 at 7:03
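The conversions described in this answer are one-liners in code; here is a small helper, added purely for illustration:

```python
def dms_to_degrees(d, m=0.0, s=0.0):
    """Degrees, minutes, seconds -> decimal degrees."""
    return d + m / 60.0 + s / 3600.0

def degrees_to_dms(angle):
    """Decimal degrees -> (degrees, minutes, seconds), for non-negative angles."""
    d = int(angle)
    m = int((angle - d) * 60)
    s = (angle - d - m / 60.0) * 3600.0
    return d, m, s

print(dms_to_degrees(29, 17))      # 29.2833...  (29 degrees 17 minutes)
print(dms_to_degrees(76, 30))      # 76.5
print(dms_to_degrees(29, 17, 34))  # 29.2927...
print(degrees_to_dms(42.372))      # (42, 22, 19.2), up to floating-point error
```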
http://mathhelpforum.com/advanced-statistics/117917-monte-carlo-method.html
# Thread: Monte Carlo Method

1. Determine a method to generate random observations for the extreme-value pdf given by: $e^{x-e^x} \ \ \ \ \ - \infty < x < \infty$ So I start by finding its CDF: $\int^{x}_{-\infty}e^{x-e^x} dx$ and this is where I get stuck. Any help would be appreciated.

2. $\int e^{x-e^x} dx= -e^{-e^x}$

3. Originally Posted by statmajor: "Determine a method to generate random observations for the extreme-value pdf given by: $e^{x-e^x} \ \ \ \ \ - \infty < x < \infty$ So I start by finding its CDF: $\int^{x}_{-\infty}e^{x-e^x} dx$ and this is where I get stuck. Any help would be appreciated." It is advisable not to use the same variable name for the dummy variable of integration and the limit of integration. Instead write this as: $\int^{x}_{-\infty}e^{\xi-e^{\xi}} d\xi$ (or use whatever name you want for the dummy variable but not $x$) CB

4. $\int^{x}_{-\infty}e^{t-e^t} dt = -e^{-e^t} |^x_{-\infty} = 1 -e^{-e^x}$ $y = 1 -e^{-e^x} \Rightarrow 1 - y = e^{-e^x}$ $\ln(1 -y) = -e^x \Rightarrow -\ln(1 -y) = e^x \Rightarrow \ln(-\ln(1 -y)) = x$ U ~ Uniform(0,1) $\ln(-\ln(1 - U)) = F^{-1}(U) = X$ Is this correct?
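The derivation in the last post checks out, and it is easy to test numerically: the sketch below (not part of the original thread) draws samples via $X=\ln(-\ln(1-U))$ and compares the empirical CDF with $F(x)=1-e^{-e^x}$ at a few points.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_extreme_value(size):
    """Inverse-transform sampling: X = ln(-ln(1 - U)) with U ~ Uniform(0, 1)."""
    u = rng.uniform(size=size)
    return np.log(-np.log(1.0 - u))

x = sample_extreme_value(200_000)

for t in (-2.0, -1.0, 0.0, 1.0):
    empirical = np.mean(x <= t)
    exact = 1.0 - np.exp(-np.exp(t))      # F(t) = 1 - exp(-e^t)
    print(f"t = {t:+.1f}:  empirical CDF {empirical:.4f}   exact {exact:.4f}")
```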
http://mathhelpforum.com/math-challenge-problems/69449-proof-1-1-a-print.html
# proof of -1 = +1 Printable View Show 40 post(s) from this thread on one page • January 22nd 2009, 12:58 PM mnova proof of -1 = +1 [I hope I did the LaTex correctly.] We know that $\textit{i}^2 = \sqrt{-1}^2$ and the square root and square operations are inverses so they cancel and $\textit{i}^2 = -1$ . And we know that $\sqrt{a}\sqrt{b} = \sqrt{ab}$ . example: $2\textit{i} = \sqrt{4}\sqrt{-1} = \sqrt{4(-1)} = \sqrt{-4}$ So $\textit{i}^2 = \textit{ii} = \sqrt{-1}\sqrt{-1} = \sqrt{(-1)(-1)} = \sqrt{+1} = +1$ . So -1 = +1 QED Where's the problem with this? • January 22nd 2009, 01:09 PM ursa Quote: [I hope I did the LaTex correctly.] We know that http://www.mathhelpforum.com/math-he...e3c90701-1.gif and the square root and square operations are inverses so they cancel and http://www.mathhelpforum.com/math-he...4d3f1e01-1.gif . And we know that http://www.mathhelpforum.com/math-he...b54f0fae-1.gif . example: http://www.mathhelpforum.com/math-he...1a190073-1.gif So http://www.mathhelpforum.com/math-he...00032a69-1.gif . So -1 = +1 QED Where's the problem with this? hi you are only considering one part that is: (under-root of x^2)=x but actually it is (under-root of x^2)=+x or -x • January 22nd 2009, 01:20 PM mnova Ursa, Yes, sqrt(1) has two solutions, -1 and +1. The point is that the +1 answer is just as valid as the -1 answer is. • January 22nd 2009, 01:25 PM Jhevon Quote: Originally Posted by mnova [I hope I did the LaTex correctly.] We know that $\textit{i}^2 = \sqrt{-1}^2$ and the square root and square operations are inverses so they cancel and $\textit{i}^2 = -1$ . And we know that $\sqrt{a}\sqrt{b} = \sqrt{ab}$ . example: $2\textit{i} = \sqrt{4}\sqrt{-1} = \sqrt{4(-1)} = \sqrt{-4}$ So $\textit{i}^2 = \textit{ii} = \sqrt{-1}\sqrt{-1} = \sqrt{(-1)(-1)} = \sqrt{+1} = +1$ . So -1 = +1 QED Where's the problem with this? as ursa hinted, there are two square roots to any number--and as such, they cannot be distinguished from each other. in the reals, we distinguish between them in terms of the ordering we have, that is, positives are "greater than" negatives. however, the complex numbers have no such ordering. the proof plays on the fact there are 2 square roots and the fact that the complex numbers are not ordered to give rise to what seems like a contradiction. so really what is going on here is that both 1 and -1 are square roots of 1. so the "equality" is playing on that fact that in the complex numbers, we cannot distinguish between them in terms of order. passing through the complex numbers removes the "natural" ordering we are used to with the real numbers and so, we get the obviously silly statement 1 = -1. by the way, the $\sqrt{\; \;}$ symbol denotes the principal square root, that is, the positive one. but in idea of "square root" extends beyond that other views are given here. i do not agree with most of them though • January 22nd 2009, 01:27 PM ursa Quote: Ursa, Yes, sqrt(1) has two solutions, -1 and +1. The point is that the +1 answer is just as valid as the -1 answer is. Noooooo you simply cant ignore the other part put -1 instead of +1 and see if you are getting the same. its maths dear, you have to consider all the situation one example is enough for contradiction here you are developing your own moves • January 22nd 2009, 03:31 PM mnova Jhevon That link you provided is fascinating. Among the posts are two posters who cleverly demand that i is defined by the equation $\textit{i}^2 = -1$ and that i is definitely NOT equal to $\sqrt{-1}$. 
That seems suspicious to me, as then how can you justify $2]\textit{i} = \sqrt{-4}$ ? So I consulted several texts on complex analysis and found: 1- Some authors define, and explicitely state, that $]\textit{i} = \sqrt{-1}$ 2- Some authors define math]\textit{i}^2 = -1[/tex] and never bring up the subject of what i is. 3- Some authors don't seem to consider complex numbers at all, but teach it using only the geometric interpretation of i = (0, 1) in the 2-D plane. They seem to be only teaching geometry, although I didn't look into the texts thoroughly. 4- One author doesn't seem to consider complex numbers at all, but teaches it using only vectors in the plane. Again, I didn't look into the text thoroughly. Only one author, Ahlfors, (of type 2 above) goes further, referring to square roots of positive reals and of complex numbers but not mentioning square roots of negative numbers. It seems that the conumdrum of what i really is is something mathematicians don't want to talk about. • January 22nd 2009, 03:36 PM Jhevon Quote: Originally Posted by mnova Among the posts are two posters who cleverly demand that i is defined by the equation $\textit{i}^2 = -1$ and that i is definitely NOT equal to $\sqrt{-1}$. that is the part i do not agree with i think both definitions are equally valid. the latter is just saying that $i$ is one of the roots to the equation $x^2 + 1 = 0$. that is, it is one of the square roots of -1 (which can be found using basic methods in complex analysis) Quote: Originally Posted by mnova That seems suspicious to me, as then how can you justify $2]\textit{i} = \sqrt{-4}$ ? this is one of the reasons i do not like that view. saying $i$ is not $\sqrt{-1}$ is so limiting. it really makes it a pain (for no reason) for doing a lot of problems involving algebra in the complex numbers. as far as i can see, and several professors i have spoken to about it, $i = \sqrt{-1}$ makes perfect "sense". it gets the job done, and is consistent with the way things work in the complex numbers. writing complex numbers in polar form, for instance, makes it clear that something like $\sqrt{-1}$ can be defined in a meaningful way (and 2i is only one of the square roots of -4) Quote: So I consulted several texts on complex analysis and found: 1- Some authors define, and explicitely state, that $]\textit{i} = \sqrt{-1}$ 2- Some authors define math]\textit{i}^2 = -1[/tex] and never bring up the subject of what i is. 3- Some authors don't seem to consider complex numbers at all, but teach it using only the geometric interpretation of i = (0, 1) in the 2-D plane. They seem to be only teaching geometry, although I didn't look into the texts thoroughly. 4- One author doesn't seem to consider complex numbers at all, but teaches it using only vectors in the plane. Again, I didn't look into the text thoroughly. i do not know what texts you are referring to, or what goals those texts had in mind when they were dealing with complex analysis, so i cannot comment intelligibly on that. but i will say that complex numbers haven't always been popular. the notion of $i$ was as controversial as Cantors notion of "infinite sets can be of different sizes, one can be bigger than another and the line has as many points as the plane etc etc etc" once upon a time. depending on when these texts were written, it may have some lingering touches of that "timidness" to venture into the world of unknown math or math that many were not comfortable with at the time. 
Quote: It seems that the conumdrum of what i really is is something mathematicians don't want to talk about. it is not that they don't want to talk about it, it is just that math has to make sense and be consistent and follow certain rules. unless mathematicians are convince beyond the shadow of a doubt that a certain definition complies with this, they will hesitate to move forward with it, and talk about it in ways as "official" as writing a text. today, however, is not so bad. a lot of research has been done and is being done in complex analysis, and i would be as bold as to say $\sqrt{-1}$ is A-OK with modern mathematics as far as most mathematicians today are concerned • January 22nd 2009, 08:39 PM ThePerfectHacker The mistake is $\sqrt{ab} = \sqrt{a}\sqrt{b}$! • January 22nd 2009, 09:04 PM Jhevon Quote: Originally Posted by ThePerfectHacker The mistake is $\sqrt{ab} = \sqrt{a}\sqrt{b}$! i don't recall doing the proof that this does not work in complex analysis. what is it? • January 22nd 2009, 10:09 PM Chop Suey The property $\sqrt{ab} = \sqrt{a} \sqrt{b}$ is defined for $a, b \geq 0$. And hence, as TPH mentioned, your step here: $<br /> \textit{i}^2 = \textit{ii} = \sqrt{-1}\sqrt{-1} = \sqrt{(-1)(-1)} = \sqrt{+1} = +1<br />$ is erroneous. • January 23rd 2009, 07:23 AM ThePerfectHacker Quote: Originally Posted by Jhevon i don't recall doing the proof that this does not work in complex analysis. what is it? The example in Post #1 as Chop Suey said is an example when this does not work. • January 23rd 2009, 01:54 PM Jhevon Quote: Originally Posted by ThePerfectHacker The example in Post #1 as Chop Suey said is an example when this does not work. yes, proof by counter-example is obvious here. i was hoping for something else. because this seems to imply a larger problem, namely, the distribution of powers over a product. if it doesn't work for the 1/2 power, that would mean it probably won't work for all or some other powers, right? a proof i would like to see is when this move is illegal, or is it always illegal? is $(ab)^3 = a^3b^3$ false for $a,b \in \mathbb{C}$? etc • January 24th 2009, 07:18 AM lebanon hi there is an error...........and -1 is undifined • January 24th 2009, 07:20 AM janvdl Quote: Originally Posted by lebanon look i knew that if x^2=16 then, x= redical 16=4 and if i^2=-1 then, i= redical -1 $i$ is defined as $\sqrt{-1}$ and $i^2 = -1$ $i$ is a complex number. You don't do it on your level. • January 24th 2009, 08:09 AM Constatine11 Quote: Originally Posted by lebanon look i knew that if x^2=16 then, x= redical 16=4 If $x^2=16$ then $x=\pm\sqrt{16}=\pm4$ . Show 40 post(s) from this thread on one page All times are GMT -8. The time now is 04:53 AM.
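The key point in this thread, that $\sqrt{ab}=\sqrt{a}\,\sqrt{b}$ is only guaranteed for $a,b\ge 0$ and that the radical sign denotes the principal root, can be seen in a couple of lines (an illustration added here, not part of the thread):

```python
import cmath

a = b = -1 + 0j

print(cmath.sqrt(a) * cmath.sqrt(b))   # (-1+0j): i * i = -1
print(cmath.sqrt(a * b))               # (1+0j):  the principal square root of 1

# The identity sqrt(a)*sqrt(b) == sqrt(a*b) does hold for non-negative reals:
print(cmath.sqrt(4) * cmath.sqrt(9), cmath.sqrt(4 * 9))   # (6+0j) (6+0j)
```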
http://mathoverflow.net/questions/94724?sort=oldest
## How to compute the Monopole Floer Homology for Surface $\times S^1$?

We know that the Monopole Floer homology of a 3-manifold $M$ depends on a spin-c structure. My question is: if $M$ is $F\times S^1$ ($F$ is a surface of genus larger than 1), then how can we compute the Floer homology for it? For the spin-c structures satisfying $\langle c_1(L),F\rangle>2g-2$ ($L$ is the determinant bundle of the spin-c structure, $g$ is the genus of $F$), Kronheimer and Mrowka proved that the Floer homology vanishes. They also proved that if $\langle c_1(L),F\rangle=2g-2$ then the Floer homology is $\mathbb{Z}$. But what about the other spin-c structures (when $\langle c_1(L),F\rangle<2g-2$)? Also, what is the answer to this question if we consider Heegaard Floer Homology instead of Monopole Floer Homology? - As for your latter question, the flavors of $HM^*$ are isomorphic to those of $HF^*$. – Chris Gerig Apr 21 2012 at 10:14

## 1 Answer

I would assume you are interested in $HM$-to as opposed to $HM$-bar ($HM$-bar is mostly computed in the book Monopoles and 3-manifolds by Kronheimer and Mrowka). For the case of $HM$-to, you should use (as answered above) that monopole is isomorphic to Heegaard Floer (Kutluhan-Lee-Taubes or Taubes + Colin-Ghiggini-Honda). If you want the trivial torsion Spin$^c$ structure, this is computed by Jabuka and Mark: http://arxiv.org/pdf/math/0502328v4.pdf This paper also has the references to the earlier computations for the other Spin$^c$ structures, done by Ozsvath and Szabo. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8955953121185303, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/91105/a-space-x-is-locally-connected-if-and-only-if-every-component-of-every-open-se/91126
# A space $X$ is locally connected if and only if every component of every open set of $X$ is open?

It is claimed on the Wikipedia page, without any proof, that a space $X$ is locally connected if and only if every component of every open set of $X$ is open. What is the proof behind this fact? Am I correct in assuming this in turn implies that a space is locally connected if and only if the open connected subsets form a base for the topology? -

## 2 Answers

If $X$ is locally connected and $C$ is a connected component of an open subset $U \subseteq X$, then every point $c \in C$ has a connected neighborhood which lies in $U$ (because $X$ is locally connected and $U$ is open). But then that neighborhood has to lie completely in the connected component of $c$, i.e. in $C$. This shows that $C$ is open. The proof of the converse is very, very similar. Can you write it down for yourself? -

Suppose $X$ is locally connected, and take an open set $A$ and a component $C$ of $A$. We want to show that $A \backslash C$ is closed in $A$. Take a point $x \in A$ in the closure of $A \backslash C$; by local connectedness (and since $A$ is open) we can find a connected neighborhood $N \subseteq A$ of $x$. If $x \in C$ then $N \subseteq C$, since $N$ is connected and meets $C$; but $N$ intersects $A \backslash C$ because $x$ lies in its closure, which is absurd. So $A \backslash C$ is closed in $A$, i.e. $C$ is open in $A$, hence in $X$. The converse follows from the obvious fact that every open set is the union of its components. As for your second question, the statement is true, but it follows directly from the definition. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9650052189826965, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/tagged/random+statistics
# Tagged Questions

### Calculate variance of random walk? (1 answer, 143 views)
How can I symbolically calculate the variance of the following random walk in Mathematica? Given several discrete random variables such that $p(Z_i=1-2k)=p$, where $k$ is a small real number, and ...

### Recommended book on random processes to understand new functionality in Mathematica 9? (1 answer, 332 views)
I am interested in exploring the new functionality on random processes available in Mathematica 9, but I am not familiar with all of the underlying mathematics. Could you recommend a book that ...

### Draw from HistogramDistribution with ParallelTable (1 answer, 70 views)
I wanted to check something, but ran into troubles using HistogramDistribution in combination with ParallelTable. The code does the following: Compute a HistogramDistribution of some sample and use ...

### Generating a range of numbers according to some rules (2 answers, 167 views)
I'm pretty new to Mathematica, and I'm mainly a programmer so I don't have a lot of knowledge about maths. I want to generate a set of UNIQUE incremental numbers (series) according to the following ...

### Wald–Wolfowitz Runs Test (1 answer, 595 views)
Does Mathematica 8 implement the Wald–Wolfowitz runs test for randomness? I can't find it in the documentation. I would like to test some fit residuals.

### Efficiently generating n-D Gaussian random fields (4 answers, 876 views)
I am interested in an efficient code to generate an $n$-D Gaussian random field (sometimes called processes in other fields of research), which has applications in cosmology. I wrote the following ...

### RandomVariate with a Discrete Distribution (1 answer, 215 views)
Nature has provided me with a random variable $Z$ taking on the values $0, 1, 2, \ldots$, with probabilities $z_0, z_1, \cdots$. I can sample from the distribution of $Z$ reasonably efficiently (I ...

### Most efficient way to obtain samples from high-dimensional multivariate distributions? (2 answers, 318 views)
Is MultinormalDistribution[] efficient and easy to use for high dimensions? I have a variable $n$ representing the dimension of a Monte Carlo integration I do on a ...

### Which Distributions can be Compiled using RandomVariate (1 answer, 289 views)
Recently, Oleksandr kindly showed a list of Mathematica commands that can be compiled. RandomVariate was part of that list. However, whether this can be compiled depends upon the distribution that is ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9050389528274536, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/128287-truth-value-quantifier-statement-print.html
# Truth value of a quantifier statement

• February 10th 2010, 07:05 PM Nostalgia

Hello, I need to determine whether the following statement is true or false: $\forall x(x > 1\to x^2 > x)$, Domain: All reals. I think the statement is true since I cannot find a value which would make the first part true while the second part false. However, I don't know how to prove the statement true without plugging in various numbers, but doing so would not prove that the statement is true in general, so I was wondering how one would start this question. Thanks

• February 10th 2010, 08:14 PM Danneedshelp

Quote: Originally Posted by Nostalgia Hello, I need to determine whether the following statement is true or false: $\forall x(x > 1\to x^2 > x)$, Domain: All reals. [...]

Here are some thoughts. Notice, $x^{2}=xx>x$ $\Leftrightarrow$ $x>\frac{x}{x}=1$. So, clearly a contradiction will arise if we assume $x\leq 1$ for all $x$.

• February 11th 2010, 05:58 AM Grandad

Hello Nostalgia

Quote: Originally Posted by Nostalgia Hello, I need to determine whether the following statement is true or false: $\forall x(x > 1\to x^2 > x)$, Domain: All reals. [...]

The proof is quite simple, provided we are given that we may multiply both sides of an inequality by a positive number; i.e. provided we know that: $a > b$ and $c > 0 \Rightarrow ac > bc$. For we simply multiply both sides by $x$, noting that $x>1 \Rightarrow x >0$. So: $x>1 \Rightarrow xx > 1x \Rightarrow x^2>x$. Grandad
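For readers who want a machine check (an editorial addition, not part of the thread), sympy can confirm that no real $x$ satisfies both $x>1$ and $x^2\le x$, and the factorisation $x^2-x=x(x-1)$ makes Grandad's argument visible at a glance. The snippet below is a sketch and assumes sympy's `reduce_inequalities` helper.

````
# Check that {x > 1, x^2 <= x} has no real solution, and factor x^2 - x.
import sympy as sp

x = sp.symbols('x', real=True)
print(sp.reduce_inequalities([x > 1, x**2 <= x], x))  # False
print(sp.factor(x**2 - x))                            # x*(x - 1): both factors positive when x > 1
````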
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947708249092102, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/50629/math-without-infinity
# Math without infinity

Does math require a concept of infinity? For instance, if I wanted to take the limit of $f(x)$ as $x \rightarrow \infty$, I could use the substitution $x=1/y$ and take the limit as $y\rightarrow 0^+$. Is there a statement that can be stated without the use of any concept of infinity but whose proof unavoidably requires it? -

7 Believe it or not, adding $\infty$ into a system sometimes makes things neater. There is a very short proof of Liouville's theorem (a bounded entire function is constant) that goes via Riemann surfaces and the fact that the Riemann sphere is compact. But the Riemann sphere is just the complex plane with $\infty$ glued in! – Zhen Lin Jul 10 '11 at 12:07

I think some parts of mathematics need it. For example, you want to say that the number of primes is not bounded, or do you rather want to stick to arithmetic? – Listing Jul 10 '11 at 12:07

5 I'm not sure the notion of a statement or a proof using a concept of infinity is well-defined. For example, the statement that most directly uses a concept of infinity I can think of would be $$\mathbb{N}\text{ is infinite}$$ and surely any proof of this fact must somehow use the concept of infinity... except, I could rephrase the statement as $$\text{There does not exist a bijection }f:\mathbb{N}\rightarrow\{1,2,\ldots,n\}\text{ for any }n\in\mathbb{N}.$$ which arguably doesn't use the concept of infinity. – Zev Chonoles♦ Jul 10 '11 at 12:11

2 – Willie Wong♦ Jul 10 '11 at 14:55

5 @Sivaram I think it's perfectly reasonable to discuss the three things you mentioned without once mentioning infinity. Convergence of a sequence requires understanding arbitrarily large natural numbers, not necessarily infinity itself. The $\infty$ notation in this context is more convenience than anything. – user92843 Jul 10 '11 at 19:22

## 3 Answers

Surprisingly, infinity proves necessary even for finite combinatorial mathematics. For a nice explanation as to why there cannot be any such thing as a comprehensive, self-contained discipline of finite combinatorial mathematics, see Stephen G. Simpson's writeup of his expository talk Unprovable Theorems and Fast-Growing Functions, Contemporary Math. 65 (1987), 359-394. Simpson gives a detailed discussion of three theorems about finite objects whose proofs necessarily require the use of infinite sets. The three theorems discussed are about colorings of finite sets (modified finite Ramsey theorem), embeddings of finite trees (Friedman's finite form of Kruskal's theorem) and iterated exponential notation for integers (Goodstein's theorem). Below is an excerpt from the introduction.

The purpose of the talk is to exposit some recent results (1977 and later) in which mathematical logic has impinged upon finite combinatorics. Like most good research in mathematical logic, the results which I am going to discuss had their origin in philosophical problems concerning the foundations of mathematics. Specifically, the results discussed here were inspired by the following philosophical question. Could there be such a thing as a comprehensive, self-contained discipline of finite combinatorial mathematics?

It is well known that a great deal of reasoning about finite combinatorial structures can be carried out in a self-contained finitary way, i.e. with no reference whatsoever to infinite sets or structures. I have in mind whole branches of mathematics such as finite graph theory, finite lattice theory, finite geometries, block designs, large parts of finite group theory (excluding character theory, in which use is made of the field of complex numbers), and large parts of number theory (including the elementary parts but excluding analytical techniques such as contour integrals). One could easily imagine comprehensive textbooks of these subjects in which infinite sets are never mentioned, even tangentially. All of the reasoning in such textbooks would be concerned exclusively with finite sets and structures.

Consequently, there is a strong naive impression that the answer to our above-mentioned philosophical question is "yes."

However, naive impressions can be misleading. I am going to discuss three recent results from mathematical logic which point to an answer of "no." Namely, I shall present three examples of combinatorial theorems which are finitistic in their statements but not in their proofs. Each of the three theorems is simple and elegant and refers only to finite structures. Each of the three theorems has a simple and elegant proof. The only trouble is that each of the proofs uses an infinite set at some crucial point. Moreover, deep logical investigations have shown that the infinite sets are in fact indispensable. Any proof of one of these finite combinatorial theorems must involve a detour through the infinite. Thus, in a strong relative sense, the three theorems are "unprovable" -- they cannot be proved by means of the finite combinatorial considerations in terms of which they are stated. -

Is there a version of this in a more readable form? (eg, pdf) – Nick Alger Oct 7 '12 at 16:56

@Nick There was not at the time I posted it to sci.math (iirc it was composed in some pre-TeX language). When reading the post via Google Groups it is essential to choose the fixed width font option (or view the source). Otherwise the formatted equations will be illegible. – Gone Oct 7 '12 at 17:02

– Ben Crowell Oct 7 '12 at 18:05

@Ben What don't you believe? The proofs are correct. NSA is not analogous. – Gone Oct 7 '12 at 18:51

I don't believe that the result has the philosophical significance that he claims it has. NSA is of course either analogous or not analogous, depending on what kind of analogy one considers appropriate. Similarly, first-order PA either is or isn't an appropriate characterization of "finite mathematics," depending on what one considers to be an appropriate characterization. These are all matters of opinion, taste, and philosophy, not things that can be proved or disproved mathematically. – Ben Crowell Oct 7 '12 at 20:31
Perhaps a different question that is easier to answer is - "Why does math have the concept of infinity?" To this, I have a really quick answer - because $\infty$ is useful. It lets you take more limits, allows more general rules to be set down, and allows greater play for fields like Topology and Analysis. And by the way - in your question you distinguish between $\lim _{x \to \infty} f(x)$ and $\lim _{y \to 0} f(\frac{1}{y})$. Just because we hide behind a thin curtain, i.e. pretending that $\lim_{y \to 0} \frac{1}{y}$ is just another name for infinity, does not mean that we are actually avoiding a conceptual infinity. So to conclude, I say that math does not require $\infty$. If somehow, no one imagined how big things get 'over there' or considered questions like How many functions are there from the integers to such and such set, math would still go on. But it's useful, and there's little reason to ignore its existence. - I would love to know why the downvotes, especially why the downvotes almost a year after I wrote this answer. – mixedmath♦ Mar 31 '12 at 22:26 2 (I upvoted by the way) your very last sentence is interesting. Your whole answer talked about the usefulness of infinity, but then you threw in, right at the end, "there is little reason to ignore its existence." Indeed, whether infinitudes are useful and whether they exist (e.g. in the real-world) are two different questions. Out of curiosity, what is your take on the latter question (and I realize that this might not be what you meant by "existence")? – pichael Jun 8 '12 at 0:35 There are many different notions of "infinity" in math, and you haven't defined what you mean by infinity, so your question doesn't have a well-defined answer. But let me try to interpret your question the way I think you meant it, and try to clear up som confusion. Let me first state that in most of mainstream mathematics, the symbol $\infty$ is merely notation, and not an actual object. 1. For instance, when we say that the size of the set $\mathbb R$ of real numbers is "infinite", we simply mean that it is not finite, that is, it does not contain exactly an integer number of elements. Nothing more magical than that. We don't mean that it contains exactly some number $\infty$ of elements. 2. For another example, when we say that $\lim_{x\to\infty}f(x)=\infty$, we do not mean that the limit equals some number $\infty$ when $x$ approaches that same number $\infty$. The limit notation $\lim_{x\to\infty}f(x)=\infty$ is actually a special case, and needs its own definition, differing from the definition for $\lim_{x\to a}f(x)=b$ where $a,b$ are real numbers. By definition, the notation $\lim_{x\to\infty}f(x)=\infty$ means that we can make $f(x)$ larger than any given number $N$ by letting $x$ be larger than some number $M$ depending on $N$. (The formal definition is $\lim_{x\to\infty}f(x)=\infty$ if and only if $\forall N>0\exists M>0: x>M\implies f(x)>N.$) In this definition, there is no mention of any object called $\infty$. This is an important distinction. When calculus was invented by Newton and Leibniz, they used a "number" $\infty$ in their derivations, which happened to usually give correct answers, but there are cases when doing so results in paradoxes. Therefore a lot of effort was made to reformulate calculus without using infinities, by for example using the limit definition above. (For more about this, you can read the Wikipedia section on the foundations of calculus.) 
For this reason, I would claim that $\infty$ is not "used" or "required" in most of mathematics, because it is just a notation, and not an object by itself. However, one can construct objects to represent some kind of "infinity", and use these as tools in mathematics. Let me tell you how to do this with the two examples above.

1. One can assign a "number" $|S|$ to any set $S$, called a cardinal number, which represents the size of that set. For any finite set, this will just be an integer, the number of elements in that set. But infinite sets will also have a size, and we can view these as infinite numbers. In this setting, there exist many different infinite numbers, not just one. And fascinatingly, one can show that $|\mathbb Z|=|\mathbb Q|<|\mathbb R|$, that is, the number of integers is the same as the number of rational numbers (!), but the number of real numbers is strictly more than the number of integers (or rationals)!

2. In calculus, instead of working with the number set $\mathbb R$, one can work with the extended reals $\bar{\mathbb R}=\mathbb R\cup\{-\infty,+\infty\}$, which consists of all real numbers, and two new objects which we will denote by $-\infty$ and $+\infty$. These two new objects are formally just symbols, and so far don't have any meaning. We can then introduce a notion of "neighborhoods" of numbers. A neighborhood of a real number is just something which contains an open set around that real number. A neighborhood of $+\infty$ is any set which contains an interval $\{x:x>a\}\cup\{+\infty\}$ for some real number $a$. Now we can define the limit $\lim_{x\to a}f(x)=b$ as follows: for any neighborhood $N$ of $b$, there exists a neighborhood $M$ (depending on $N$) of $a$ such that if $x\in M$, then $f(x)\in N$. This definition also works for $a=+\infty$ and $b=+\infty$! We have therefore managed to define the limit $\lim_{x\to+\infty}f(x)=+\infty$ actually in terms of an object $+\infty$.

To end my comment, there are certain areas of math where some concepts are most naturally expressed in terms of infinities, and some people explicitly study infinities just because they find them interesting. -

+1 I didn't see this til now. Great exposition. – Ross Millikan Jun 13 '12 at 3:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499354362487793, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/68285/calculating-number-of-items-displayed-pagination
# Calculating number of items displayed - pagination

This is my first post here; I found the site through Stack Overflow. I am a web designer, but I am currently pulling my hair out over the lack of my mathematical knowledge. On a site displaying a category, there may be a scenario where a user will put subcategories and products within a parent category. On these pages (categories) where products are displayed, there is a "showing $N$ to $I$ of $T$ products" message (e.g. "showing 1 to 6 of 10 products"). So I am in a rut about how to calculate this so that it only takes into account the products and not the categories. Here's what I know (variables):

• $E =$ the current page number (there can be many pages to split up the view)
• $F =$ the amount of products or subcategories allowed on any 1 page
• $Y =$ the total amount of subcategories being displayed in this category
• $L =$ the total amount of products displayed in this category

Also the subcategories are always displayed first, before the products. If anyone can help me out or give a push in the right direction, it would be much appreciated.

EDIT: as per the solution below, here is the PHP interpretation (variables are in relation to the post and related comments):

````
function f( $x, $f, $y, $l ) {
    // Products shown on pages 1..$x: clamp ($x * $f) - $y to the range [0, $l].
    return ( ( $x * $f ) - $y < 0 ? 0 : ( ( $x * $f ) - $y > $l ? $l : ( $x * $f ) - $y ) );
}

$n = 1 + f($e - 1, $f, $y, $l);   // first product shown on page $e
$i = f($e, $f, $y, $l);           // last product shown on page $e
echo 'Showing ' . $n . ' to ' . $i . ' of ' . $l . ' Products';
````

-

## 2 Answers

If I understand you correctly, a category can contain two kinds of items: subcategories and products. Out of those, subcategories are always sorted before products. On each page, you display (up to) $F$ items, and, for the $E$-th page, you want to know

• $N =$ the total number of products displayed on pages $1$ to $E-1$ plus one, and
• $I =$ the total number of products displayed on pages $1$ to $E$.

Clearly, there are $E$ times $F$ items (products or subcategories) displayed on the pages $1$ to $E$ (unless this would exceed $L+Y$, in which case that's the upper limit). Out of those, up to $Y$ will be subcategories, so we're left with $EF-Y$ products (unless $EF<Y$, in which case there will be no products on those pages). So, let's use $f(X)$ to denote the total number of products displayed on pages $1$ to $X$. Then $$f(X) = \begin{cases} 0 & \text{if } XF-Y < 0 \\ L & \text{if } XF-Y > L \\ XF-Y & \text{otherwise}. \end{cases}$$ Using this rule, we can then easily calculate $N = 1 + f(E-1)$ and $I = f(E)$. (Ps. Note that, if $Y \ge F$, using this rule the first page would carry a message saying "showing $1$ to $0$ of $L$ products". You may want to have a separate message for the case $I=0$ if there's any chance that it could occur in practice.) -

this is the most detailed message I have ever received on ANY forum... I'm just going to spend a few minutes finding out what the squiggly *f*() means... brb – Phil Jackson Sep 28 '11 at 20:08

think it means function and X being the passing value of total products... hope so – Phil Jackson Sep 28 '11 at 20:11

Yes, the "squiggly $f$" is just the name of a function. Its argument $X$ is the number of a page; I used $X$ there instead of $E$ to avoid confusion below, where we pass $E-1$ as the argument to $f$ when calculating $N$. – Ilmari Karonen Sep 28 '11 at 20:11

one more thing (trying to remember what a teacher tried to teach me many moons ago..) does XF mean X*F...
sorry if im being a pain – Phil Jackson Sep 28 '11 at 20:14

– Ilmari Karonen Sep 28 '11 at 20:26

When you don't have categories, the number of pages is $\frac{L}{F}$, rounded up to the next whole number. So if there are 53 products and you display 8 per page, you will need 7 pages. On page E, you will have objects (E-1)*F+1 to E*F. If you have categories, does each one start on a new page? Then you just have the same calculation for each category. If your 53 objects are in categories of 12, 23, 5, and 13, you would have 2, 3, 1, and 2 pages, for a total of 8. You then have to decide whether the "showing N to P of M" is within the category or over all products. Is this what you were after? -

if there are lets say 3 sub cats and 6 prods and each page only allows 6 items per page to be displayed, then 3 cats and 3 prods would be on the first page (showing 1 to 3 of 6 products) and 3 on the next. – Phil Jackson Sep 28 '11 at 19:30

I think I may have sussed it: N = ( ( ( E * F ) - ( F - 1 ) ) - Y ); – Phil Jackson Sep 28 '11 at 19:31

Also the subcategories are always displayed first before the products. The alarm bell rang after reading that line; it is not a requirement of the problem but just a way of viewing the results. More than math, looking at Design Patterns will help you. The mathematical solution of this problem is completely wrong for web programming. First separate your entities, i.e. divide your problem into Categories, Products etc.; second, solve the problem for each group separately; last, display the products and categories in a way that they are not coupled (i.e. choosing one does not affect what you see in the other one). Lastly try asp.net; there are pre-made samples that have this scenario already implemented. This is not a job for a web designer but a programmer. If this is a job for a client you will be way behind by the time it is going to take to design, implement and test this approach; get a PHP shopping cart close to what you need, then ponder the math part separately. -
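For reference, here is a short Python rendering of the clamp formula from the accepted answer (an editorial sketch; the helper names are invented, and the variable names follow the thread: E = current page, F = items per page, Y = subcategories, L = products).

````
def products_through_page(X, F, Y, L):
    """Total number of products on pages 1..X, i.e. f(X) = clamp(X*F - Y, 0, L)."""
    return max(0, min(L, X * F - Y))

def product_range(E, F, Y, L):
    """Return (N, I) for the 'showing N to I of L products' message on page E."""
    N = 1 + products_through_page(E - 1, F, Y, L)
    I = products_through_page(E, F, Y, L)
    return N, I

# Example from the comments: 3 subcategories, 6 products, 6 items per page.
print(product_range(1, 6, 3, 6))  # (1, 3) -> "showing 1 to 3 of 6 products"
print(product_range(2, 6, 3, 6))  # (4, 6) -> "showing 4 to 6 of 6 products"
````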
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8948853611946106, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/246341/examples-of-integral-affine-manifold
# Examples of (integral) affine manifolds

An $n$-dimensional affine manifold $M$ is a topological manifold which admits a system of charts such that the coordinate changes are affine transformations, i.e. elements of $GL(n,\mathbb{R})\ltimes \mathbb{R}^n$. $\mathbb{R}^n$ and the tori $\mathbb{R}^n/\mathbb{Z}^n$ are examples of affine manifolds, but what other examples are there? In a similar way, we can define an integral affine manifold by requiring the coordinate changes to be in $GL(n,\mathbb{Z})\ltimes \mathbb{Z}^n$. What examples of these are known? Since they have a constant metric, I guess there won't be many. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9122448563575745, "perplexity_flag": "head"}
http://nrich.maths.org/7677
### Ball Bearings
If a is the radius of the axle, b the radius of each ball-bearing, and c the radius of the hub, why does the number of ball bearings n determine the ratio c/a? Find a formula for c/a in terms of n.

### Overarch 2
Bricks are 20cm long and 10cm high. How high could an arch be built without mortar on a flat horizontal surface, to overhang by 1 metre? How big an overhang is it possible to make like this?

### Cushion Ball
The shortest path between any two points on a snooker table is the straight line between them, but what if the ball must bounce off one wall, or 2 walls, or 3 walls?

# More Realistic Electric Kettle
##### Stage: 4 and 5 Challenge Level:

An electric kettle is a quite simple household appliance which is used every day to convert electrical energy to heat energy. The diagram shows a circuit which provides a simple model for what happens in the electric kettle. In the kettle diagram: V is the voltage of the power supply (a power supply in the UK has 240 volts (V)), R is the resistance of a heating element measured in ohms ($\Omega$), and I is the current flowing around the circuit in amps (A).

An experiment was carried out to investigate the relationship between the resistance of the heating element and the temperature which settles down after a while. Results are shown in the table.

| Resistance R/$\Omega$ | Temperature T/°C |
|---|---|
| 65 | 97 |
| 100 | 70 |
| 150 | 53 |
| 200 | 45 |
| 250 | 40 |

1. Why does the temperature settle down after a while?
2. Plot a graph of the data for the temperature against the resistance.
3. Can you recognise the function? Using the points (100, 70) and (250, 40), find the equation of this function. (Hint: the function is of the form T = T0 + C/R where T0 and C are constants.)
4. Find the temperature of the room where the experiment was conducted.
5. Is this equation suitable for all values of resistance?

In practice we know that it takes 3-4 minutes for an electric kettle to boil water. Could you investigate how the time needed to boil water is related to the amount of water you are heating?

Extension: the power of the kettle is given by the formula $$P = \frac{RV^2}{(r+R)^2}$$ where R is the resistance of the heating element, r is the internal resistance and V is the voltage of the power supply. Suppose it is given that V = 240 volts and r = 5 ohms. Find R such that we get the maximum power. Try to deduce in general the value of R for which the power is largest if the internal resistance is r.

An old electric kettle picture is taken from http://www.sciencemuseum.org.uk/images/I059/10325939.aspx
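The two computational parts can be checked symbolically. The sketch below is an editorial addition (not part of the NRICH page) using sympy: it fits $T = T_0 + C/R$ through the two given points, which also answers question 4, and then maximises the power formula from the extension.

````
# Fit T = T0 + C/R through (R, T) = (100, 70) and (250, 40), then maximise P(R).
import sympy as sp

T0, C, R = sp.symbols('T0 C R', positive=True)
fit = sp.solve([sp.Eq(70, T0 + C / 100), sp.Eq(40, T0 + C / 250)], [T0, C])
print(fit)  # T0 = 20 (the room temperature, in degrees C), C = 5000

V, r = sp.symbols('V r', positive=True)
P = R * V**2 / (r + R)**2
print(sp.solve(sp.Eq(sp.diff(P, R), 0), R))  # [r]: power is maximal when R = r (here 5 ohms)
````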
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9209625124931335, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/29632/circle-preserving-homeomorphisms-in-the-closure-of-mathbbc-and-mobius-trans
# Circle preserving homeomorphisms in the closure of $\mathbb{C}$ and Möbius Transformations

I am presently a learner of hyperbolic geometry and am using J. W. Anderson's book *Hyperbolic Geometry*. Now the author presents a sketch proof of why every circle preserving homeomorphism in $\overline{\mathbb{C}}$ is an element of the general Möbius group, which is what I am struggling to understand. First, a brief outline. Let $f$ be an element of the set of all circle preserving homeomorphisms, which we denote Homeo$^{C}(\overline{\mathbb{C}})$, and let $p$ be a Möbius transformation that maps the triple $(f(0),f(1),f(\infty))$ to $(0,1,\infty)$. Then we see that $p\circ f(0) = 0, p\circ f(1) = 1$ and $p \circ f(\infty)=\infty$, and since $p \circ f(\mathbb{R}) = \mathbb{R}$, such a composition maps the upper half of the complex plane, $\mathbb{H}$, either to itself or to the lower half of the complex plane. If $p \circ f(\mathbb{H}) = \mathbb{H}$, we take $m = p$, while if $p \circ f(\mathbb{H})$ goes to the lower half, we just take $m = W \circ p$, where $W(z) = \overline{z}$.

Now here is the thing I don't understand:

1. Let $A$ be a Euclidean circle in $\mathbb{C}$ with Euclidean centre $\frac{1}{2}$ and radius $\frac{1}{2}$. Let $V(0), V(1)$ be the vertical lines through the points $x=0$ and $x=1$. Can anybody explain why, as $V(0)$ and $V(1)$ are vertical tangents to the circle, their images under the map $m \circ f$, namely $m \circ f\Big(V(0)\Big)$ and $m \circ f\Big(V(1)\Big)$, are again vertical tangents to the circle $m \circ f(A)$ at $m \circ f(0) = 0$ and $m \circ f(1) = 1$? I am trying to conclude from here that $m \circ f = Id_z$, the identity transformation.

Thanks, Ben -

Hi David, Möbius transformations are conformal mappings. If you take a look at Wikipedia, you will see that those transformations preserve angles, exactly what you need! To see that they are indeed conformal I guess you could use Conway, Complex Analysis. – Leandro Mar 29 '11 at 3:57

One more detail: m will be a Möbius transformation even if it maps the upper half plane to the lower half plane. – Leandro Mar 29 '11 at 3:59

@Leandro, I am just trying to show that every circle preserving homeomorphism is a Möbius transform, please help me in trying to understand the above! – BenjaLim Mar 29 '11 at 6:03

## 1 Answer

I would assume that "circle-preserving" here means "preserving circles and straight lines"; that is, if $L$ is a straight line in the plane, then $L \cup \lbrace\infty \rbrace$ is a circle in $\overline{\mathbb{C}}$ (this is the usual definition in this context). I think that, assuming this, all you really then need is that these maps are bijections: $m \circ f (V(0))$ and $m \circ f (V(1))$ are circles/straight lines, and since $m \circ f$ fixes $0$, $1$ and $\infty$, we see that $m \circ f (V(0))$ is a straight line through $0$, and $m \circ f (V(1))$ is a straight line through $1$. Then if $m \circ f (V(0))$ were not vertical, it would intersect $A$ at some point $w \neq 0$, which is a contradiction, because $(m \circ f)^{-1}(w) \neq 0$ would be another point of $V(0) \cap A$. Then $m \circ f (V(1))$ is vertical for similar reasons, or because it must be parallel to $m \circ f (V(0))$. -

Ok, thanks for your answer, that makes things clear. – BenjaLim Apr 20 '11 at 0:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9303572177886963, "perplexity_flag": "head"}
http://math.stackexchange.com/users/10948/caleb-jares?tab=activity&sort=comments&page=2
Caleb Jares: Computer Science Undergraduate at UNL. C#, .NET 4.5, Windows 8 lover.

| Reputation | Badges | Age | Location | Member for | Last seen | Profile views |
|---|---|---|---|---|---|---|
| 318 | 29 | 20 | Colorado and Lincoln, NE | 2 years | Apr 20 at 2:21 | 67 |

29 Comments

| Date | Type | Comment |
|-------|---------|---------|
| May 16 | comment | Finding the limit when denominator = 0: Thank you, that does make sense (of course DNE being the same thing as + or - infinity - + in this case). When I asked this question, I didn't know that it was positive both ways. Can you explain how to get + or - infinity from the following problem? $$\lim_{x \to 3^+} \frac{x - 4}{x - 3}$$ |
| May 16 | comment | Finding the limit when denominator = 0: Yes, but what if the problem is $$\lim_{x \to 3^+} \frac{x - 4}{x - 3}$$. Approached from the right, it is $-\infty$ and from the left, it is $+\infty$. How do I tell if it is positive or negative infinity without graphing it or plugging in numbers? |
| May 16 | comment | Finding the limit when denominator = 0: Thanks for the answer, but I'm having trouble understanding this (I'm reviewing for a Calc 1 final). I'm not sure what $\forall$ and $\in$ are. Also, I'm not sure what M stands for. |
| May 16 | comment | Finding the limit when denominator = 0: I know that I could do it like that, but that's still plugging in values (albeit inside your head). I'm interested to find a way to solve it without plugging in numbers (even if it's in your head) or graphing it. It makes more sense to me if I can understand how the math works in an absolute sense. |
| May 16 | comment | Finding the limit when denominator = 0: I did not know you could change limits like that. If I change $$\lim_{x \to -2^-}$$ to $$\lim_{x \to 0^-}$$ what must happen to the rest of the function? Is there a rule for this? |
| May 16 | comment | Finding the limit when denominator = 0: yes, but the question is how do I solve it without plotting? how do I know that it goes to infinity and if it is positive or negative? |
| May 16 | comment | Finding a one sided limit algebraically (not plugging in numbers): @Tyler - Can you give an example of working that out? |
| May 16 | comment | Finding a one sided limit algebraically (not plugging in numbers): and what about equations such as lim(x->-2 from the left) of 1/(x+2)^2 |
| May 16 | comment | Finding a one sided limit algebraically (not plugging in numbers): thank you! I can't believe I missed that |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307522177696228, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/24321/computing-a-set-of-coset-representatives-for-mathbbzn-lambda
## Computing a set of coset representatives for $\mathbb{Z}^n / \Lambda$

Let $\Lambda$ be an $n$-dimensional sublattice of the integer lattice $\mathbb{Z}^n$. The quotient $\mathbb{Z}^n/\Lambda$ has order $\sqrt{\det{\Lambda}}$. What is the best/standard way to compute a set of coset representatives for this quotient?

Edit: I initially forgot to take the square root of $\det{\Lambda}$, which is likely the reason for KConrad's initial comment. -

Pedantic point: the quotient has order the absolute value of the determinant. – KConrad May 12 2010 at 3:02

1 Hi, Robby. If you have a Z-basis for the lattice, can you use a "lower-left" rule on the fundamental parallelepiped spanned by the basis? Meaning the points of Z^n internal, then on lower left faces. I don't know, I just made it up. – Will Jagy May 12 2010 at 3:02

4 One thing you might try to do is find a basis e_1,...,e_n of Z^n and positive integers a_1,...,a_n such that a_1e_1,...,a_ne_n is a basis of your lattice. Then Z^n/Lambda is represented by sums c_1e_1 + ... + c_ne_n with c_i running from 0 to a_i - 1. A suitable normal form associated to any matrix whose columns are a known basis of the lattice should let you read off what the a_i's (and e_i's?) are. – KConrad May 12 2010 at 3:04

1 It might help if you explain how you are actually being "given" the lattice: as the solution space to a system of linear equations, as the dual to some other lattice,... – KConrad May 12 2010 at 3:07

2 While (as Keith suggests) you can use the Smith normal form, you can also use the Hermite normal form. Find (using integer row operations) a generator matrix for $\Lambda$ which is upper triangular. If the diagonal entries are $d_1,\dots,d_n$ then coset reps are the $\sum a_i e_i$ where $0\le a_i < |d_i|$. – Robin Chapman May 12 2010 at 7:01

## 1 Answer

As KConrad suggested (why only in the comments?), the Smith normal form is your best bet. Its running time is insensitive to $m=|{\mathbb Z}^n/\Lambda|$ (unless you need to use arbitrarily long entries in your matrix) and behaves as $n^3$. You may also try coset enumeration, whose running time is usually unbounded but may be bounded in this case by something like $(mn)^2$. -

Well, I guess you get the brownie points then :) – Robby McKilliam May 12 2010 at 9:18
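A concrete way to carry out Robin Chapman's suggestion is sketched below (editorial code, not from the thread; it assumes the sublattice has full rank, as in the question). Integer row operations bring a basis matrix to upper-triangular form, and the coset representatives are then exactly the integer points in the box determined by the diagonal entries.

````
from itertools import product

def upper_triangularize(rows):
    """Bring an integer basis matrix (one lattice generator per row) to
    upper-triangular form using only integer row operations."""
    B = [list(r) for r in rows]
    n = len(B)
    for j in range(n):
        for i in range(j + 1, n):
            # Euclidean algorithm on the j-th column entries of rows j and i.
            while B[i][j] != 0:
                q = B[j][j] // B[i][j]
                B[j] = [x - q * y for x, y in zip(B[j], B[i])]
                B[i], B[j] = B[j], B[i]
    return B

def coset_representatives(rows):
    """Coset representatives of Z^n / Lambda for a full-rank sublattice Lambda."""
    B = upper_triangularize(rows)
    diag = [abs(B[j][j]) for j in range(len(B))]   # their product is the index
    return list(product(*(range(d) for d in diag)))

print(coset_representatives([[2, 1], [1, 2]]))
# [(0, 0), (0, 1), (0, 2)]  -- 3 cosets, matching |det| = 3
````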
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9152618050575256, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/191844/calculating-circumference-from-2d-coords
# Calculating circumference from 2d coords

I'm trying to calculate the circumference of a circle given, say, three reference points in 2D coordinates that lie on an arc. The problem is the reference points may be slightly inaccurate, so I'm using three to hopefully get an exact arc. Once I have my 3 reference points, how can I calculate the circumference? I'm attaching an image so you may better understand what I'm trying to do. http://i47.tinypic.com/2j2vpzq.jpg Also, the reason I'm doing it this way is that the image is a scan and the top or side of the circle may be chopped off, so getting the diameter may not always be possible, and the size of the circle differs from time to time. Thanks, Craig -

## 2 Answers

It is very easy. You have 3 points $A, B, C$. The center of the circle is the point which has the same distance from these 3 points (it is the intersection of the perpendicular bisectors of $AB$ and $BC$). Once you have the center, the radius is the distance between the center and $A$, $B$ or $C$. When you have the radius, you can calculate the circumference $=2\pi r$. In other words, you must find the circumcenter of the triangle - http://blog.ivank.net/basic-geometry-functions.html -

Cool, that's exactly what I was looking for on the link. Thank you. – Craig Stewart Sep 6 '12 at 9:17

Basically you're trying to calculate the radius of the circle circumscribed about the triangle $ABC$. Given all the coordinates of those points, you can calculate the length of each side $a,b,c$ of the triangle and its area $S$ (by Heron's formula, for instance) and then use the $R = \dfrac{abc}{4S}$ identity. -
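A minimal implementation of the circumcenter recipe from the first answer might look like the sketch below (editorial code; the function name and the collinearity tolerance are my choices).

````
import math

def circle_through(p1, p2, p3):
    """Center, radius and circumference of the circle through three points."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("points are (nearly) collinear")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = math.hypot(ax - ux, ay - uy)
    return (ux, uy), r, 2 * math.pi * r

# Three points on the unit circle centred at the origin:
print(circle_through((1, 0), (0, 1), (-1, 0)))  # ((0.0, 0.0), 1.0, 6.283185307179586)
````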
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9220086336135864, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/1347/is-conditional-value-at-risk-cvar-coherent/1354
# Is Conditional Value-at-Risk (CVaR) coherent?

When the risk is defined by a discrete random variable, is CVaR a coherent risk measure? I stick to the following definition of CVaR: $$CVaR_\alpha(R) = \min_v \left\{ v + \frac{1}{1-\alpha} \mathbb{E}[R-v]^+ \right\}$$ where $R$ is the DISCRETE random variable for the loss and $\alpha$ is the confidence level. -

1 I'm almost sure CVaR was created to satisfy the sub-additivity property, which isn't valid for normal VaR. The whole point of this new measure was to "enhance" VaR and have a coherent measure. I must say that I am unsure of what the fact that it is DISCRETE changes. – SRKX♦ Jun 24 '11 at 10:51

## 3 Answers

I found this paper: Conditional value-at-risk for general loss distributions by Rockafellar and Uryasev http://dx.doi.org/10.1016/S0378-4266(02)00271-6 which says CVaR is coherent for general loss distributions, including discrete distributions. I think that I was confused by other authors who were also confused about the definitions of CVaR. In particular, in the following paper, the author mistakenly stated that Tail Conditional Expectation (TCE) is the same as CVaR, and that they are not coherent. http://dx.doi.org/10.1016/S0378-4266(02)00281-9 However, TCE is not the same as CVaR in general. If the underlying distribution is continuous, they are the same. -

$VaR^\alpha$ is not a coherent risk measure because it fails sub-additivity (a coherent risk measure is monotonic, sub-additive, positively homogeneous, and translation invariant). The expectation operator $E[\cdot]$ is linear, so it meets sub-additivity, as well as the other three properties, so $CVaR$ is a coherent risk measure. -

Why can't we use "minimum"? I can't think of a good example without a minimum. Could you suggest one? – FEQ Jun 24 '11 at 20:01

@chang -- Good catch! You're right. I read too quickly and misinterpreted discrete $R$ as $R$ in a finite set, which would require infimum. – richardh♦ Jun 25 '11 at 15:58

Thanks. I thought you thought in that way. – FEQ Jun 27 '11 at 17:31

Conditional VaR (CVaR), which is also called Expected Shortfall, is a coherent risk measure (although derived from a non-coherent one, namely VaR). See this paper: Expected Shortfall: a natural coherent alternative to Value at Risk by Carlo Acerbi and Dirk Tasche http://www.bis.org/bcbs/ca/acertasc.pdf EDIT: I just saw that you emphasized discrete, but that shouldn't change the general situation. -

1 [2002 - Acerbi] Spectral measures of risk: a coherent representation of subjective risk aversion. This paper actually says that CVaR is not a coherent measure in general. But now I think the author was confused by different names for similar concepts. – FEQ Jun 24 '11 at 19:46

1 I agree with @vonjd's answer. In his link, equation (12) is the thing to do if you have atoms in the distribution (it could be continuous with points of mass, for example at 0 in an insurance/operational loss example). Formula (12) applies for discrete distributions and the "correction term" vanishes for continuous ones (or when there is no atom). – Richard Jul 24 '12 at 12:04
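As a quick numerical illustration of the definition quoted in the question (an editorial addition, not from the thread), the one-dimensional minimisation can be evaluated directly for a finite, equally weighted loss sample. Since the objective is piecewise linear and convex in $v$, it suffices to search over the sample points themselves.

````
import numpy as np

def cvar_by_minimisation(losses, alpha):
    """CVaR_alpha of an equally weighted discrete loss sample via
    min_v { v + E[(R - v)^+] / (1 - alpha) }, searching v over the sample points."""
    losses = np.asarray(losses, dtype=float)
    vs = losses                                  # candidate minimisers
    excess = np.maximum(losses[:, None] - vs[None, :], 0.0)
    objective = vs + excess.mean(axis=0) / (1.0 - alpha)
    return objective.min()

losses = [0.0, 1.0, 2.0, 10.0]                   # four equally likely losses
print(cvar_by_minimisation(losses, alpha=0.75))  # 10.0, the expected loss in the worst 25%
````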
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9516637921333313, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/64477/
## Universal sets in metric spaces

(I am cross-posting this from math.SE as it seems to be slightly over the top for that site.) I saw in class the theorem:

Suppose $X$ is a separable metric space and $Y$ is a Polish space (metric, separable and complete); then there exists a $G\subseteq X\times Y$ which is open and has the property: for all $U\subseteq X$ open, there exists $y\in Y$ such that $U = \{x\mid\langle x,y\rangle\in G\}$. A $G$ with this property is called universal.

The proof is relatively simple; however, the $y$ we get from it is far from unique. In fact, it seems almost immediate that there are countably many $y$'s with this property. My question is whether or not this $G$ can be modified such that for every $U\subseteq X$ open there is a unique $y\in Y$ such that $U = \{x\mid\langle x,y\rangle\in G\}$? Perhaps we need to require more, or possibly even less, from $X$ and $Y$?

Some thoughts: Firstly, $X$ cannot be finite, otherwise there are fewer than continuum many open subsets; and since $G$ is open, its projection on $Y$ is open, and since $Y$ is Polish this projection is of cardinality continuum, which in turn implies there are continuum many $y$'s with the same cut. Secondly, since the usual proof goes through a Lusin scheme over $Y$ and uses it to define $G$, I thought at first that using the axiom of choice we could select a set of points on which the mapping to open sets of $X$ is 1-1, and somehow remove some of the sets from the scheme. This proved to be a bad idea, as we remove sets that can be used for other open sets. Thirdly, I thought about enumerating the open sets according to a rational enumeration so that $A_i\subseteq A_j$ if and only if $q_i\le q_j$, and then, instead of just placing the open sets of $X$ arbitrarily by the Lusin scheme, we use the rationals somehow. -

The original question: math.stackexchange.com/questions/36634 – Asaf Karagila May 10 2011 at 8:19

It might be worth pointing out that another way to phrase your question is along the lines of "what sort of Polish topology can I put on the set of open subsets of $X$ that makes the membership relation open in the product?" Then, for Polish $X$, the answer would be that there's always at least one Polish topology that works, but it's hopeless to expect in general that you can meet any homeomorphism class with such a topology. – Clinton Conley May 11 2011 at 8:20
Suppose now that $X$ is a compact Polish space, and endow its space of compact (equiv., closed) subsets $\mathcal{K}(X)$ with the Vietoris topology, generated by sets of the form `$\{K : K \subseteq U\}$` and `$\{K : K \cap U = \emptyset\}$`, where $U\subseteq X$ is open. For Polish $X$, this is a Polish topology on $\mathcal{K}(X)$. Note that in the special case where $X$ is finite (thus compact), this coincides with the discrete topology on $\mathcal{P}(X)$. Motivated by this analogy, we proceed as before and choose our uniquely universal closed set to equal `$G = \{(x,K) \in X \times \mathcal{K}(X) : x \in K\}$`. The only thing left to check is that this set is indeed closed. You can see this directly by assuming $(x_0,K_0) \notin G$, fixing a little open neighborhood $U$ around $x_0$ disjoint from $K_0$, and then checking that `$U \times \{K : K \cap U = \emptyset\}$` is an open neighborhood of $(x_0, K_0)$ disjoint from $G$.

The obvious place to look for more information about this is Kechris' descriptive set theory text. Unfortunately I don't have a copy on hand at the moment (which makes me feel like a child without a security blanket), so I can't give more specific references.

Moving on. For noncompact Polish spaces $X$ you can endow the space $\mathrm{CL}(X)$ of closed subsets of $X$ with a topology called the Wijsman topology. Well, really there are several such topologies, since the definition relies on a choice of compatible complete metric $d$ on $X$. This topology is the weakest topology making the functions $f_x : A \mapsto d(x, A)$ continuous for each $x \in X$. It is a result of Gerald Beer's that this topology is Polish for $(X,d)$ as above. (This might well be in Kechris' book, but as I mentioned I don't have it on hand so I'll regurgitate the reference that google gave me.)

Beer, Gerald. A Polish topology for the closed subsets of a Polish space. Proc. Amer. Math. Soc. 113 (1991), no. 4, 1123–1133.

Edit: but Theo Buehler has given a relevant reference to Kechris. See his comment. A variation of the earlier argument in the compact case should work in this context.

Edit again: I just noticed that the definition of the topology I gave only makes sense for nonempty closed subsets of $X$. This is not a serious problem and is in fact addressed in Beer's paper.

Finally, it is hopeless to expect this to work for arbitrary Polish spaces $X$ and $Y$. As you noticed, for small spaces there are cardinality issues. When the spaces are large, you can also fiddle around with compactness/noncompactness, and other topological notions. There are just too many wild Polish spaces.

- And of course Juris' example is precisely what you get when you look at the Wijsman topology associated with the discrete metric on $\omega$. – Clinton Conley May 10 2011 at 11:04
- 1 Kechris only mentions Beer's result and cites the exact same paper. However, in proving that the Effros Borel space is standard, he embeds $X$ into some compactification $\overline{X}$ and shows that the closed subsets of $X$ are a $G_{\delta}$ in the closed subsets of $\overline{X}$. By Kuratowski's theorem then, $\operatorname{CL}(X)$ is Polish, which seems good enough for your answer. This can be found in section 12.C on page 75. – Theo Buehler May 10 2011 at 11:15
- @Theo: Thanks for the reference! – Clinton Conley May 10 2011 at 11:26
- @Clinton: Sorry for the delayed answer. I had to catch my teacher so we could sit and go over your answer (as it was slightly over my head).
Firstly, I noted that much would have been simpler had I required $Y$ perfect; regardless, I was told that it is not a problem, as Polish spaces tend to have injective continuous (but not necessarily homeomorphism) functions from the Baire space, which is perfect. So indeed you have answered my question "For all $X$ there is a uniquely-universal set in the Baire space, for $\Pi^0_1$ sets of $X$". (cont...) – Asaf Karagila May 18 2011 at 20:06

(...cont) So did I understand that correctly? A natural question now would be whether or not this extends to the rest of the Borel hierarchy of $X$? And what would be if we also require $X$ to be perfect? Many many thanks! – Asaf Karagila May 18 2011 at 20:09

If $X =\omega$ with the discrete topology and $Y= \mathcal{P}(\omega)$ with the Cantor set topology, let $G$ be the set of all $(A,n)$ such that $n\in A$.

- 1 I'm sorry, but I don't get the point you're making. Could you be so kind as to elaborate a little? – Theo Buehler May 10 2011 at 10:38
- I am simply pointing out a case in which there is a universal set that provides a unique slice for each open set. In the case of $\omega$ with the discrete topology every set is open. Each such set is also a unique element of the Cantor space, providing the unique $y$ you asked for. Of course, this does not answer the general question you asked. – Juris Steprans May 10 2011 at 10:54
- This should have been a comment rather than an answer. – Juris Steprans May 10 2011 at 10:55
- Thanks for the clarification, that's what I suspected, but I was afraid I missed an important point towards the general answer. I didn't ask the question, I was just curious about the answer to Asaf's question. – Theo Buehler May 10 2011 at 10:59

While idly browsing around I stumbled over the following paper and remembered this question:

A.W. Miller, Uniquely Universal Sets, Topology and its Applications 159 (2012), pp. 3033–3041. It's available in various formats here.

Let me quote the abstract (to avoid confusion: Miller's terminology reverses the rôles of $X$ and $Y$ in your question):

We say that $X \times Y$ satisfies the Uniquely Universal property (UU) iff there exists an open set $U \subseteq X \times Y$ such that for every open set $W \subseteq Y$ there is a unique cross section of $U$ with $U_x=W$. Michael Hrušák raised the question of when does $X \times Y$ satisfy UU and noted that if $Y$ is compact, then $X$ must have an isolated point. We consider the problem when the parameter space $X$ is either the Cantor space $2^\omega$ or the Baire space $\omega^\omega$. We prove the following:

1. If $Y$ is a locally compact zero dimensional Polish space which is not compact, then $2^\omega\times Y$ has UU.
2. If $Y$ is Polish, then $\omega^\omega \times Y$ has UU iff $Y$ is not compact.
3. If $Y$ is a $\sigma$-compact subset of a Polish space which is not compact, then $\omega^\omega \times Y$ has UU.

His results are mostly positive: "a certain space or family of spaces has UU" and various permanence properties. One nice "negative" result:

Proposition 30: There exists a partition $X\cup Y=2^\omega$ into Bernstein sets $X$ and $Y$ such that for every Polish space $Z$ neither $Z\times X$ nor $Z\times Y$ has UU.

He also raises a few questions, e.g.:

• Question 4: Does $(2^\omega\oplus 1) \times [0,1]$ have UU?
• Question 6: Does either $\mathbb{R} \times \omega$ or $[0,1]\times \omega$ have UU? Or more generally, is there any example of UU for a connected parameter space? • Question 11: Is the converse of Corollary 10 false? That is: Does there exist $Y$ such that $\omega^\omega \times Y$ has UU but $2^\omega\times Y$ does not have UU? - That is awesome. Thanks for posting this! – Asaf Karagila Sep 23 at 13:32
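To make the finite/discrete case from the first answer concrete, here is a tiny check (my own addition, not from the thread) that for a finite discrete $X$ the membership relation on $X \times \mathcal{P}(X)$ is uniquely universal: every subset of $X$ (all of which are open) is the slice at exactly one point of $\mathcal{P}(X)$.

```python
from itertools import combinations

# Toy check of the discrete case: X finite with the discrete topology,
# Y = P(X) with the discrete topology, and G = {(x, A) : x in A}.
X = range(5)
Y = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]
G = {(x, A) for A in Y for x in A}

# The slice of G at A is {x : (x, A) in G}; it should recover A itself,
# so every subset of X occurs as a slice, and distinct A give distinct slices.
slices = {A: frozenset(x for x in X if (x, A) in G) for A in Y}
assert all(slices[A] == A for A in Y)
assert len(set(slices.values())) == len(Y)
print("membership relation on X x P(X) is uniquely universal for |X| =", len(X))
```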
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 110, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9561474323272705, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/72352-integrate-ln-function.html
# Thread:

1. ## integrate ln function
   integrate ln(x^2-1)

2. Use the rules of logarithms to rewrite as $\ln(x^{2}-1)=\ln((x+1)(x-1))=\ln(x+1)+\ln(x-1)$

3. Originally Posted by galactus
   Use the rules of logarithms to rewrite as $\ln(x^{2}-1)=\ln((x+1)(x-1))=\ln(x+1)+\ln(x-1)$
   But care needs to be taken with domain issues, since the original function is defined for x > 1 or x < -1, whereas the re-write has problems when x < -1 ....

4. Yep. I was being lazy and careless.

5. so how should I go about solving this problem?

6. Do exactly what galactus said but be careful like mr fantastic said! $\ln(x^2- 1)$ is an even function, so you really only need to integrate for x > 1, where galactus' method has no problem, and then use the same integral for x < -1.

7. Or you can go with integration by parts: $\int{\ln(x^2-1)}\,dx$. Sub. $\ln(x^2-1)=u \Rightarrow \frac{2x\, dx}{x^2-1}=du; \quad dv = dx \Rightarrow v = x;$ so we have $x\ln(x^2-1)-2\int{\frac{x^2}{x^2-1}dx}$ $= x\ln(x^2-1)-2\int{\frac{x^2-{\color{red}1}+{\color{red} 1}}{x^2-1}dx}$ $=x\ln(x^2-1)-2\int{\left (1+\frac{1}{x^2-1}\right )dx} =...$ correct me anyone if I did anything wrong.
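The last post's computation can be finished: since $\int \frac{dx}{x^2-1} = \tfrac12\ln\left|\tfrac{x-1}{x+1}\right| + C$, one gets (for $x>1$) $\int \ln(x^2-1)\,dx = x\ln(x^2-1) - 2x - \ln\tfrac{x-1}{x+1} + C$. Below is a quick SymPy check of that antiderivative; the check is my own addition and is not part of the original thread.

```python
import sympy as sp

x = sp.symbols('x')

# Antiderivative obtained by finishing the integration by parts above
# (written for x > 1, where ln(x^2 - 1) is defined and x - 1 > 0).
F = x*sp.log(x**2 - 1) - 2*x - sp.log((x - 1)/(x + 1))

# Its derivative should simplify back to the original integrand.
print(sp.simplify(sp.diff(F, x) - sp.log(x**2 - 1)))  # expected output: 0

# SymPy's own antiderivative should differ from F by at most a constant.
G = sp.integrate(sp.log(x**2 - 1), x)
print(sp.simplify(sp.diff(G - F, x)))  # expected output: 0
```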
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9270942211151123, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/34730/bell-series-of-a-quotient-of-arithmetic-functions
# Bell series of a quotient of arithmetic functions?

If $f,g$ are multiplicative functions (with Bell series $F_p(x), G_p(x)$), then so is $n \mapsto f(n)/g(n)$; what is its Bell series? (Or is there no nice way to write it in terms of them?) I suspect there is no formula, since in the case of completely multiplicative functions this amounts to a composition. I just want to know the Bell series for $\frac{n}{\varphi(n)}$ and $\sum_{d|n} \frac{\mu(d)^2}{\varphi(d)}$. For the first I got $1 + \frac{p}{p-1}\cdot\frac{1}{1-x}$ and for the second $\tfrac{1}{1-x}\left(1 + \frac{1}{p-1}\right)$, so again I have this wrong.

-

## 1 Answer

You do not need a general formula for your problem: just calculate the Bell series of the two expressions directly. In the first case you get a geometric series, in the second case a finite sum.

For $\frac n {\varphi (n)}$:
$$1+x \frac p {p-1} + x^2\frac {p^2}{p(p-1)}+\dots = 1+\frac{xp}{p-1} \cdot (1+x+x^2+\dots)= 1+\frac{xp}{(p-1)(1-x)}$$

For $\frac {\mu(n)^2} {\varphi (n)}$:
$$1+x \frac 1 {p-1}$$

Now, to get your original identity, you need to convolve the second expression with the function that is identically one, which amounts to multiplying its Bell series by $\frac 1 {1-x}$.

Note that for the general question there is no hope for a simple answer, as this is related to the Hadamard product and the extraction of the "diagonal" of a two-variable power series. Compare: Formal power series coefficient multiplication

- Thank you! – quanta Apr 23 '11 at 16:51
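As a sanity check of the geometric-series computation for $\frac{n}{\varphi(n)}$ above, here is a short SymPy snippet (my own addition, not part of the original exchange) that truncates the Bell series and compares it with the closed form.

```python
import sympy as sp

p, x = sp.symbols('p x')
N = 8  # truncation order (arbitrary choice)

# Bell series coefficients of f(n) = n/phi(n): f(1) = 1 and, for k >= 1,
# phi(p^k) = p^(k-1) * (p - 1), hence f(p^k) = p / (p - 1).
partial_sum = 1 + sum(p**k / (p**(k - 1) * (p - 1)) * x**k for k in range(1, N))

closed_form = 1 + x*p / ((p - 1)*(1 - x))

# The two should agree through order x^(N-1).
difference = sp.series(closed_form - partial_sum, x, 0, N).removeO()
print(sp.simplify(difference))  # expected output: 0
```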
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9478954672813416, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/160806/condition-for-commuting-matrices?answertab=oldest
# Condition for commuting matrices

Let $A,B$ be $n \times n$ matrices over the complex numbers. If $B=p(A)$ where $p(x) \in \mathbb{C}[x]$ then certainly $A,B$ commute. Under which conditions is the converse true? Thanks :-)

- 1 What exactly do you mean by the converse? – copper.hat Jun 20 '12 at 15:06
- I am looking for an if and only if statement – Manos Jun 20 '12 at 15:09
- – Jonas Meyer Jun 20 '12 at 15:14
- – Peter Sheldrick Jun 20 '12 at 15:17
- So yeah, going from what Jonas said, it probably is not the case, and it only works if $A$ has $n$ distinct eigenvalues. – Peter Sheldrick Jun 20 '12 at 15:24

## 3 Answers

The usual condition I have seen is that matrices commute if and only if they have a common basis of generalized eigenvectors. See also Commuting Matrices

Another interpretation: It has been pointed out that my first interpretation of the question is most likely wrong. The intended question is probably similar to this question. In that case, the answer would be that if a matrix $A$ has distinct eigenvalues, then $B$ commutes with $A$ if and only if $B=P(A)$ for some complex coefficient polynomial $P$. If $A$ is $n\times n$, then $P$ need only be of degree at most $n-1$.

Justification: Suppose that $A$ has distinct eigenvalues; then it is diagonalizable with a basis of eigenvectors. Thus, we can write $A=ED_AE^{-1}$ where $D_A$ is a diagonal matrix whose diagonal entries are the distinct eigenvalues of $A$ and $E$ is a matrix whose columns are the eigenvectors of $A$. Furthermore, suppose that $B$ commutes with $A$. Since $A$ shares its basis of eigenvectors with $B$, we have that $B=ED_BE^{-1}$, where $D_B$ is diagonal and the diagonal elements of $D_B$ are the eigenvalues of $B$. Suppose $P$ is the degree $n-1$ polynomial that takes the $n$ distinct diagonal elements of $D_A$ to the $n$ diagonal elements of $D_B$. Then, because $D_A$ and $D_B$ are diagonal, $P(D_A)=D_B$, which then gives us $$P(A)=P(ED_AE^{-1})=EP(D_A)E^{-1}=ED_BE^{-1}=B$$

- This concerns a single pair $(A,B)$ of matrices. The question concerns a given matrix $A$: when is $C(A) =\mathbb C[A]$? – Mohamed Jun 20 '12 at 15:53
- @Mohamed: I second your apprehension that this might not be a full answer to the question, but I also don't believe that Manos has really yet clarified what exactly the question is. Based on the comments, it seems that Manos might have been satisfied with the answer to a linked question, which is only a special case of your more general answer. – Jonas Meyer Jun 20 '12 at 16:07
- @robjohn: Could you please clarify exactly what your first sentence is asserting? The usual condition for what? For one of the matrices to be a polynomial in the other? – Jonas Meyer Jun 20 '12 at 16:12
- @JonasMeyer: I had interpreted the question as asking under what conditions on $A$ and $B$ does $AB=BA$. I may have misinterpreted the question. – robjohn♦ Jun 20 '12 at 16:29
- @robjohn: I think you did, because however unclear the precise question is, it asks for a converse of an implication that concludes that $AB=BA$. The question appears to be something like, "When does $AB=BA$ imply $B=p(A)$"? Of course I do not know why Manos accepted this answer nor why Manos hasn't made precisely clear what is being asked. – Jonas Meyer Jun 20 '12 at 19:35

The converse is true if the degree of the minimal polynomial equals $n$. Let $C(A)$ denote the space of matrices $B$ that commute with $A$.
There is a formula that gives the dimension of $C(A)$ in terms of the similarity invariants of $A$: if $P_1\mid\dots\mid P_s$ is the sequence of similarity invariants of $A$, then $$\dim (C(A))=\sum_{i=1}^{s} (2(s-i) + 1)\, d_i$$ where $d_i=\deg P_i$. Since $\mathbb C[A] \subseteq C(A)$, this formula lets us check that, conversely, if the degree of the minimal polynomial equals $n$, then $C(A)=\mathbb C[A]$.

-

THEOREM: The following are equivalent conditions on a matrix $A$ with entries in $\mathbb C$:

(I) $A$ commutes only with matrices $B = p(A)$ for some $p(x) \in \mathbb C[x]$

(II) The minimal polynomial and characteristic polynomial of $A$ coincide

(III) $A$ is similar to a companion matrix.

(IV) Each characteristic value of $A$ occurs in only one Jordan block. This includes the possibility that all eigenvalues are distinct, but allows for repetition if they all occur in one Jordan block.

- – Will Jagy Jun 21 '12 at 15:29
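To make the distinct-eigenvalue case from the first answer concrete, here is a small numerical sketch (my own addition; the matrices are arbitrary examples): given a $B$ commuting with an $A$ that has distinct eigenvalues, interpolating the eigenvalues of $B$ against those of $A$ produces a polynomial $p$ of degree at most $n-1$ with $p(A)=B$.

```python
import numpy as np

# An arbitrary 3x3 matrix with distinct eigenvalues (2, 3, 5).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
evals, E = np.linalg.eig(A)

# Build some B that commutes with A: any matrix diagonal in A's eigenbasis.
mu = np.array([7.0, -1.0, 4.0])          # arbitrary eigenvalues for B
B = E @ np.diag(mu) @ np.linalg.inv(E)
assert np.allclose(A @ B, B @ A)

# Degree <= n-1 polynomial sending each eigenvalue of A to the matching one of B.
coeffs = np.polyfit(evals, mu, deg=len(evals) - 1)   # descending powers

def poly_of_matrix(c, M):
    """Evaluate the polynomial with coefficients c (descending) at the matrix M."""
    result = np.zeros_like(M)
    for ck in c:
        result = result @ M + ck * np.eye(M.shape[0])
    return result

assert np.allclose(poly_of_matrix(coeffs, A), B)
print("B = p(A), with p coefficients (highest degree first):", np.round(coeffs, 6))
```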
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276222586631775, "perplexity_flag": "head"}
http://nrich.maths.org/564/index?nomenu=1
## 'Legs Eleven'

printed from http://nrich.maths.org/

This problem is in two parts. The first part provides some building blocks which will help you to solve the final challenge. These can be attempted in any order. Of course, you are welcome to go straight to the Final Challenge! Click a question from below to get started.

Question A
Choose any two numbers from the $7$ times table. Add them together. Repeat with some other examples. Notice anything interesting? Now do the same with a different times table. What do you notice this time? Convince yourself it always happens.

Question B
Choose two digits and arrange them to make two double-digit numbers. For example, if you choose $5$ and $2$, you can make $52$ and $25$. Repeat with some other examples. Notice anything interesting? Convince yourself it always happens.

Question C
Look at this sequence of numbers: $11, 101, 1001, 10001, 100001, ...$ Divide numbers in this sequence by $11$, WITHOUT using a calculator. Notice anything interesting? Convince yourself it always happens.

FINAL CHALLENGE
Take any four-digit number, move the first digit to the 'back of the queue' and move the rest along. For example $5238$ would become $2385$. Now add your two numbers together. Is the answer always a multiple of $11$? Can you convince yourself?
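Assuming the intended final step is to add the original number to its rotation (which is what Questions A, B and C build towards), a brute-force check over all four-digit numbers confirms the pattern; this check is my own addition, not part of the NRICH page.

```python
# Check: for every four-digit number, moving the first digit to the back and
# adding the result to the original number always gives a multiple of 11.
def rotate(n):
    s = str(n)
    return int(s[1:] + s[0])

assert all((n + rotate(n)) % 11 == 0 for n in range(1000, 10000))
print("5238 + 2385 =", 5238 + 2385, "= 11 *", (5238 + 2385) // 11)
```

Algebraically, $\overline{abcd} + \overline{bcda} = 1001a + 1100b + 110c + 11d = 11(91a + 100b + 10c + d)$, which is why the pattern never fails.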
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8786723017692566, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/79799/homotopy-equivalence-between-the-grassmannian-gr-n-m-and-gr-n-times-gr-m/79801
## Homotopy equivalence between the Grassmannian Gr_{n,m} and Gr_n \times Gr_m.

The following assertion appears in a paper I am reading, and I can't seem to verify it.

Let $\text{Gr}_{n,m}$ denote the set of pairs $(V,W)$ where $V$ and $W$ are as follows.

1. $V$ is an $n$-dimensional subspace of $\mathbb{C}^{\infty}$.
2. $W$ is an $m$-dimensional subspace of $\mathbb{C}^{\infty}$.
3. $V$ and $W$ are orthogonal.

The space $\text{Gr}_{n,m}$ has an obvious topology. If $\text{Gr}_n$ and $\text{Gr}_m$ are the usual Grassmannians of $n$ and $m$ planes in $\mathbb{C}^{\infty}$, then there is an obvious map $\psi : \text{Gr}_{n,m} \rightarrow \text{Gr}_n \times \text{Gr}_m$. The map $\psi$ is almost a homeomorphism, but not quite because of condition 3 above. The paper claims that $\psi$ is a homotopy equivalence. Thanks for any help!

- Is $\mathbb{C}^\infty$ $\bigoplus_{i=1}^{\infty} \mathbb{C}$ or $\bigtimes_{i=1}^{\infty} \mathbb{C}$? – David Roberts Nov 2 2011 at 4:47
- Oh, and what paper? – David Roberts Nov 2 2011 at 4:50
- It is the direct sum of infinitely many copies of $\mathbb{C}$. The paper is actually the 10th lecture (on Bott periodicity) from the following set of lecture notes: math.stanford.edu/~church/stablehomology.html – Ralph Nov 2 2011 at 4:57
- @Ralph Thanks for the link, btw, — very interesting. – Grigory M Nov 12 2011 at 18:51

## 2 Answers

The forgetful map $Gr_{n,m} \to Gr_n$ that drops $W$ is a fiber bundle (exercise), and the map $Gr_{n,m} \to Gr_n \times Gr_m$ is a map of fiber bundles. It's an equivalence on the (connected) base space, so it suffices to check that the map of fibers is an equivalence. The fibers over $V$ are, respectively: $m$-dimensional subspaces in $V^\perp \subset \mathbb{C}^\infty$, and $m$-dimensional subspaces in $\mathbb{C}^\infty$. The inclusion of one infinite-dimensional complex vector space in another induces a homotopy equivalence of Grassmannians; you could construct an explicit homotopy equivalence by choosing an appropriate basis, or you could argue that the associated map of Stiefel manifolds is a homotopy equivalence (both are contractible, so this is easy) and so it passes to an equivalence after taking the quotient by the general linear group.

- Beautiful, thanks! – Ralph Nov 2 2011 at 5:06
- 2 Yes, very nice. But where I come from "hyperplane" means a linear subspace of codimension one. – Tom Goodwillie Nov 2 2011 at 5:15
- @Tom: That's a bad habit that I picked up for no apparent reason, and it's proved difficult to drop. Edited. – Tyler Lawson Nov 2 2011 at 5:33

Look at the canonical principal $U(n)\times U(m)$ bundle over $Gr_{n,m}$ given by pairs of orthonormal frames $(v_1,\ldots, v_n), (w_1,\ldots w_m)$. Its total space is the set of all orthonormal $(n+m)$-frames in $\mathbb C^\infty$. It's contractible (that's well known), and hence $Gr_{n,m}$ is a homotopy $B_{U(n)\times U(m)}$, which is clearly homotopy equivalent to $B_{U(n)}\times B_{U(m)}=Gr_n\times Gr_m$. To see that the natural map $Gr_{n,m}\to Gr_n\times Gr_m$ is the one inducing an equivalence, notice that it's obviously covered by a map of principal bundles, and hence the result follows by the 5-lemma, since the total spaces are contractible and the fibers are the same.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9204258322715759, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/35080/visualise-the-sound-intensity
# Visualise the sound intensity

I'm studying Biophysics and my current subject is sound. One of the properties of sound is intensity. From my notes I can see the following definition:

Intensity formula: $I = W \cdot m^{-2}$, i.e. $I = \frac{W}{m^2}$, where $W$ = the amount of energy (per second) and $m^2$ = the area.

Definition: Intensity is the amount of energy passing through an area of $1\,m^2$ perpendicular to the direction of sound wave propagation within 1 second.

So I came up with this picture: http://i.stack.imgur.com/3OZXH.jpg

I know the picture is lame. What I care about is whether the above definition is displayed correctly here. I'm not a native English speaker and the word "perpendicular" in this context confuses me. Thanks

- 1 They mean how much energy is crossing the surface, so that if the surface is tilted so that it is parallel to the sound propagation direction, there is no flux of energy through it. This is addressed here in questions on fluid flux, which is the same idea, and the flux of any vector field in general. The vector field here is the energy flow. – Ron Maimon Aug 28 '12 at 10:56
- I can't get your picture. Why are there three objects in it? There should be just two --- the area (a surface) and the sound energy flowing through it (an arrow). – Yrogirg Aug 28 '12 at 11:16
- The 2nd object (black) is there to show the dimension perpendicular (vertical) to the sound wave direction. Maybe, though, it's not as important as it seemed to me; still don't quite get it though. – atmosx Aug 28 '12 at 11:29
- Re-reading both answers I think I understood the definition. tx – atmosx Aug 28 '12 at 11:31
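As a small numerical illustration of what "perpendicular" buys you (my own addition, with a made-up intensity value): the energy crossing a flat surface of area $A$ each second is $I A \cos\theta$, where $\theta$ is the angle between the propagation direction and the surface normal. A surface perpendicular to the propagation direction ($\theta = 0$) collects the full $IA$, while a surface parallel to it collects nothing, exactly as Ron Maimon's comment says.

```python
import numpy as np

I = 2.0e-5   # sound intensity in W/m^2 (hypothetical value)
A = 1.0      # area of the surface in m^2

for theta_deg in (0, 30, 60, 90):
    theta = np.radians(theta_deg)
    power = I * A * np.cos(theta)   # energy per second crossing the surface
    print(f"tilted {theta_deg:2d} deg away from perpendicular: {power:.2e} W")
```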
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9416993856430054, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/quantum-electrodynamics?sort=votes&pagesize=30
# Tagged Questions Quantum-ElectroDynamics (QED) is the quantum field theory believed to describe the electromagnetic interaction (and with some extension the weak nuclear force). learn more… | top users | synonyms (1) 2answers 2k views ### Why did Feynman's thesis almost work? A bit of background helps frame this question. The question itself is in the last sentence. For his PhD thesis, Richard Feynman and his thesis adviser John Archibald Wheeler devised an astonishingly ... 2answers 2k views ### How are classical optics phenomena explained in QED (Snell's law)? How is the following classical optics phenomenon explained in quantum electrodynamics? Reflection and Refraction Are they simply due to photons being absorbed and re-emitted? How do we get to ... 1answer 210 views ### Spontaneous breaking of Lorentz invariance in gauge theories I was browsing through the hep-th arXiv and came across this article: Spontaneous Lorentz Violation in Gauge Theories. A. P. Balachandran, S. Vaidya. arXiv:1302.3406 [hep-th]. (Submitted on 14 ... 1answer 369 views ### how is shown that photon speed is constant using QED? In Feynman's simple QED book he talks about the probability amplitude P(A to B) ,where A and B are events in spacetime, and he says that it depends of the spacetime interval but he didn't put the ... 3answers 627 views ### Simple (but wrong) argument for the generality of positive beta-functions In the introduction (page 5) of Supersymmetry and String Theory: Beyond the Standard Model by Michael Dine (Amazon, Google), he says (Traditionally it was known that) the interactions of ... 2answers 263 views ### What tree-level Feynman diagrams are added to QED if magnetic monopoles exist? Are the added diagrams the same as for the $e-\gamma$ interaction, but with "$e$" replaced by "monopole"? If so, is the force between two magnetic monopoles described by the same virtual ... 4answers 725 views ### QM and Renormalization (layman) I was reading Michio Kaku's Beyond Einstein. In it, I think, he explains that when physicsts treat a particle as a geometric point they end up with infinity when calculating the strength of the ... 3answers 256 views ### Can the path of a charged particle under the influence of a magnetic field be considered piecewise linear? Ordinarily we consider the path of a charged particle under the influence of a magnetic field to be curved. However, in order for the trajectory of the particle to change, it must emit a photon. ... 1answer 371 views ### how does dynamic casimir effect generate correlated photons There is a recent paper on arxiv receiving lot of acclaim http://arxiv.org/abs/1105.4714 The authors experimentally show that moving a mirror of a cavity at high speeds produces light from high ... 2answers 314 views ### Using photons to explain electrostatic force I am trying to understand the idea of a force carrier with the following example. Let's say there are two charges $A$ and $B$ that are a fixed distance from each other. What is causing the force on ... 4answers 317 views ### The Schwinger model The Schwinger model is the 2d QED with massless fermions. An important result about it (which I would like to understand) is that this is a gauge invariant theory which contains a free massive vector ... 2answers 1k views ### Virtual photon description of B and E fields I continue to find it amazing that something as “bulky” and macroscopic as a static magnetic or electric field is actually a manifestation of virtual photons. So putting on your QFT spectacles, look ... 
3answers 392 views ### Why muonium is unstable? This question is closely related to my previous question Bound states in QED. Muonium is a system of electron and anti-muon. This article in wikipedia claims that muonium is unstable. QUESTION: Why ... 2answers 319 views ### Quantizing EM field Why when we quantize EM field, whe quantize the vector potential $A^\mu$ obtaining vectorial particles (photons) like the elastic field (phonons) and we can't quantize directly the EM-field tensor ... 3answers 421 views ### Why isn't light scattered through transparency? I'm asking a question that has bothered me for years and years. First of all, let me give some context. I'm a layman in physics (college educated, math major). I've read Feynman's QED cover to cover, ... 1answer 53 views ### Relativistic corrections to quantum mechanics of Coloumb potential Systems of charged particles (such as atomic nuclei and electrons) can be described by nonrelativistic quantum mechanics with the Coloumb interaction potential. A fully relativistic description is ... 2answers 728 views ### Bound states in QED I am a beginner in QED and QFT. What is known (or expected to be) about bound states in QED? As far as I understand, in non-relativistic QM electron and positron can form a bound state. Should it be ... 1answer 193 views ### Effect of introducing magnetic charge on use of vector potential It is well known that Maxwell equations can be made symmetric w.r.t. $E$ and $B$ by introducing non-zero magnetic charge density/flux. In this case we have $div B = \rho_m$, where $\rho_m$ is a ... 1answer 224 views ### Did the Feynman heuristic of “simple effects have simple causes” fail for spin statistics? Someone here recently noted that "The spin-statistics thing isn't a problem, it is a theorem (a demonstrably valid proposition), and it shouldn't be addressed, it should be understood and celebrated." ... 2answers 116 views ### Simulation of QED Can anyone point me to a paper dealing with simulation of QED or the Standard Model in general? I will particularly appreciate a review paper. 1answer 402 views ### How are classical optics phenomena explained in QED (color)? How is the following classical optics phenomenon explained in quantum electrodynamics? Color According to Schroedinger's model of the atom, only particular colors are emitted depending on the type ... 0answers 131 views ### Magnetic monopole and electromagnetic field quantization procedure From the Maxwell's equations point of view, existence of magnetic monopole leads to unsuitability of the introduction of vector potential as $\vec B = \operatorname{rot}\vec A$. As a result, it was ... 3answers 260 views ### What is the massless limit of massive electromagnetism? Consider electromagnetism, an abelian gauge theory, with a massive photon. Is the massless limit equal to electromagnetism? What does it happen at the quantum level with the extra degree of freedom? ... 2answers 349 views ### critical electric field that spontaneously generates real pairs With the current QED framework, If an electric field is strong enough (say, near a nucleus with $Z > 140$) , pair production will occur spontaneously? Is this a real effect or an artifact before ... 2answers 468 views ### Deriving Planck's radiation law from microscopic considerations? In the usual derivation of Planck's radiation law, the energies or frequencies $\omega$ of the oscillators depend on the measurements $L$ of the black body. The model is such that the only ... 
2answers 396 views ### Why is the spinor field anti-commutator not made gauge invariant? When we introduce minimal coupling for the Dirac spinor field, we introduce terms into the Lagrangian, by the substitution \$i\frac{\partial}{\partial x^\mu}\mapsto i\frac{\partial}{\partial ... 1answer 180 views ### Can a photon exhibit multiple frequencies? Can a photon be a superposition of multiple frequency states? Kind of similar to how an electron can be a superposition of multiple spin states. 3answers 194 views ### Is there a simple way to compute some physical constant from Feynman diagram statistics? I've been playing around writing some software to generate Feynman diagrams for QED, respecting the vertex "rules" described here, and avoiding creating isomorphic duplicates. So from a starter ... 2answers 497 views ### How do electrons interact if one of them had just exited the two slits of the double-slit experiment? Consider the following experiment: a double-slit set-up for firing electrons one at a time. Let's now add a second electron (orange), which is fired parallel to the first one, but in the opposite ... 1answer 254 views ### Is there a strong force analog to magnetic fields? In special relativity, magnetism can be re-interpreted as an aspect of how electric charges interact when viewed from different inertial frames. Color charge is more complex than electric charge, but ... 1answer 213 views ### Where is the velocity term in Dirac current hiding? The dirac current is $$J^\mu = \bar{\psi}\gamma^\mu \psi$$ It looks weird at first because there is no derivative in the expression. So the velocity must be hidden somewhere in either $\gamma$ or ... 4answers 793 views ### How does charge work if photons are neutral? How can an electron distinguish between another electron and a positron? They use photons as exchange particles and photons are neutral, so how does it know to repel or attract? 3answers 2k views ### Properties of the photon: Electric and Magnetic field components Consider an electromagnetic wave of frequency $\nu$ interacting with a stationary charge placed at point $x$. My question concerns the consistency of two equally valid quantum-mechanical descriptions ... 1answer 187 views ### How to quantize the free electro-magnetic field in 2d? I am wondering how one can quantize the free electro-magnetic field in the two dimensional space-time. The standard method of fixing the Coulomb gauge in 4d does not seem to generalize immediately to ... 2answers 617 views ### EM wave function & photon wavefunction According to this review Photon wave function. Iwo Bialynicki-Birula. Progress in Optics 36 V (1996), pp. 245-294. arXiv:quant-ph/0508202, a classical EM plane wavefunction is a wavefunction (in ... 1answer 127 views ### Database of scattering amplitudes I want to check whether my result for the invariant amplitude of the electron-electron scattering (to lowest order in $\alpha$; t+u channels) is correct or not. I can't find any reference that has ... 2answers 226 views ### Geometrical significance of gauge invariance of the QED Lagrangian The QED Lagrangian is invariant under $\psi(x) \to e^{i\alpha(x)} \psi (x)$, $A_{\mu} \to A_{\mu}- \frac{1}{e}\partial_{\mu}\alpha(x)$. What is the geometric significance of this result? Also why is ... 
1answer 85 views ### Is there Pair production in between charged plates In classical electromagnetic theory, If parallel plates are charged oppositely and placed close to each other, there will be no charge will not flow from one plate to another. How does this situation ... 4answers 428 views ### How is the path integral for light explained, or how does it arise? In a question titled How are classical optics phenomena explained in QED (Snell's law)? Marek talked about the probability amplitude for photons of a given path. He said that it was $\exp(iKL)$, and ... 2answers 285 views ### On Electromagnetic Self Energy In the process of pair annihilation an electron and a positron annihilate each other to produce a pair of photons, conserving momentum and energy. As the oppositely charged particles approach each ... 3answers 189 views ### Representation of phase in quantum mechanics [Note: My discussion of the three answers can be found just after the question.] Imagine three points in space that differ only by a phase angle of "something" (what doesn't really matter). One way ... 1answer 210 views ### Can a photon see ghosts? Does it make sense to introduce Faddeev–Popov ghost fields for abelian gauge field theories? Wikipedia says the coupling term in the Lagrangian "doesn't have any effect", but I don't really know ... 1answer 157 views ### Why are geons unstable? Are there other problems with geons? I read in various places geons are "generally considered unstable." Why? How solid is this reasoning? Is the reason geons are not studied much anymore because we can't make more progress without ... 1answer 407 views ### Why is the Gupta-Bleuler gauge unfashionable? In the early days of quantum electrodynamics, the most popular gauge chosen was the Gupta-Bleuler gauge stating that for physical states, $$\langle \chi | \partial^\mu A_\mu | \psi \rangle = 0.$$ ... 1answer 443 views ### How to calculate the properties of Photon-Quasiparticles in recent questions like "How are classical optics phenomena explained in QED (Snell's law)?" and "Do photons gain mass when they travel through glass?" we could learn something about effective ... 1answer 84 views ### How can an asymptotic expansion give an extremely accurate predication, as in QED? What is the meaning of "twenty digits accuracy" of certain QED calculations? If I take too little loops, or too many of them, the result won't be as accurate, so do people stop adding loops when the ... 1answer 77 views ### Is diffraction affected by interaction between photons and electrons? Suppose we take a sheet of ordinary metal, make a narrow slit in it, and shine a light beam through the slit onto a screen. The light beam will diffract from the edges of the slit and spread out onto ... 1answer 193 views ### Is it true that the angular momentum of electromagnetic waves in an anisotropic medium is an integral of motion? Extending my previous question Angular moment and EM wave, does it make sense to talk about the angular momentum of electromagnetic waves in an anisotropic medium? It is not obvious that the angular ... 1answer 255 views ### Why can't fermions be affected by effective gravity in non-linear quantum electrodynamics? Quantum electrodynamics based upon Euler-Heisenberg or Born-Infeld Lagrangians predict photons to move according to an effective metric which is dependent on the background electromagnetic field. In ... 0answers 233 views ### Do EM waves transmit spin polarization? 
Suppose you have a normal dipole antennae (transmitter and receiver) . Spin polarized current (as opposed to normal current) is sent into the transmitter, it emits an EM wave and the Receiver receives ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.907940685749054, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=354287&page=2
Physics Forums

## A Few Questions for Twofish-Quant.

I know I didn't actually ask the question, but this thread has been really informative and helpful to me as well. I've appreciated reading your responses as well, twofish-quant. I do have something I'd like to ask though, if it's not too intrusive. What kind of programming jobs did you have before you made it to Wall Street? Does your current job make use of a lot of the skills you had to use/develop at earlier jobs? I do assume the types of projects you do are going to be dramatically different, but I'm just a little curious.

Quote by twofish-quant
The reason Wall Street likes people that are good with math is that if you are good with math, you are probably very flexible. Suppose you are really, really good with algebraic topology. Now it turns out that algebraic topology isn't that useful, but suppose you find out that stochastic statistical auto-correlation models are important. Someone that doesn't have a math Ph.D. may just wait around looking for someone to teach them this stuff, but no one has time. If you are really good with math, then you go on google, download papers, buy some books from Amazon, and teach yourself what you need to know.

I love this piece of advice. As I get older and work a few more professional-type jobs, I'm beginning to realize it's not always about what you know but the qualities you've developed through the experiences you've had. I think that's why the best employers typically want both a resume and a cover letter from candidates. Maybe just to get a small sample of how you think.

Recognitions: Gold Member Science Advisor Staff Emeritus

Just a small and maybe off-topic reply to the idea of open admissions at MIT. The same issues are often discussed about some famous schools in France: extremely difficult to get into (but nothing special once you are there). I think that what has not been integrated (strange for someone from Wall Street) is that value = rareness. If you distributed 50 diamonds to every girl in the world, they wouldn't love diamonds anymore. What makes these hot-shot schools worthy is that they only admit a small number ("the best"), so having been there has value because it is rare. The aim is not the quality of the teaching or the special skills you get there. The aim is to be able to say that you belong to the small and select club who went there. That you've been a winner of a particular kind of competition. Make 10 million people follow freshman physics at MIT, and MIT doesn't mean anything anymore.

I like your "out-of-the-box" thinking, twofish-quant. I doubt that MIT will ever change that drastically (or any of the other prestige universities, as vanesch points out). However, it is the kind of thinking that gets some other university who is not in the "club" to change and suddenly eat everyone else's lunch.

Recognitions: Science Advisor

Quote by vanesch
Just a small and maybe off-topic reply to the idea of open admissions at MIT. The same issues are often discussed about some famous schools in France: extremely difficult to get into (but nothing special once you are there). I think that what has not been integrated (strange for someone from Wall Street) is that value = rareness. If you distributed 50 diamonds to every girl in the world, they wouldn't love diamonds anymore. What makes these hot-shot schools worthy is that they only admit a small number ("the best"), so having been there has value because it is rare.
The aim is not the quality of the teaching or the special skills you get there. The aim is to be able to say that you belong to the small and select club who went there. That you've been a winner of a particular kind of competition. Make 10 million people follow freshman physics at MIT, and MIT doesn't mean anything anymore. So if MIT admits everyone, then it will be valueless, then no one will want to go there, so it can admit everyone. QED Quote by vanesch Make 10 million people follow freshman physics at MIT, and MIT doesn't mean anything anymore. Yes, this is exactly right. Make 10 million people freshman at MIT, and MIT loses it's prestige. It's not just that value = rareness though. There are a lot of really small schools out there no one cares for. The trick is, MIT has an image and reputation that evokes ideas in a person's mind. It's a brand. There's a surprising amount of information conveyed in a brand name, and I can't say I blame people for wanting to be a part of the club. There's a lot of financial value in owning a brand/ being branded. The school's position is probably a little different though. They have a brand that they need to protect and that is done by carefully selecting who they allow into "the club." They need people to go out in the world, attain respected positions and be successful. Then donate. That makes school look like a terrific place to attend and study. It's kind of an organic marketing process. Get smart, successful people to attend. Make money, gain respect. Then respect draws more smart, successful people to the school. And the people who are willing to throw money at them. Recognitions: Science Advisor Quote by vanesch What makes these hot shot schools worthy is that they only admit a small number ("the best") So if in fact "the best" is not true - then the non-elitist advice would not be for MIT to admit everyone, but to find the best somewhere else - another school, another profession - maybe even one that doesn't involve science. I dare say many from MIT might even agree that there is no best - worthy human endeavours are more like the complex numbers than the reals. A couple of essays that seem apropos to the discussion: http://www.paulgraham.com/colleges.html http://philip.greenspun.com/teaching...conomic-growth Quote by vanesch Make 10 million people follow freshman physics at MIT, and MIT doesn't mean anything anymore. And it shouldn't mean anything. The fact that you got your degree from MIT ***should*** be totally irrelevant. It's a *HORRIBLE* thing when people want to take a waves class at MIT when they could get a better educational experience at a community college. If a community college can teach freshman physics better than MIT, then people should go to the community college, and if they won't because of branding nonsense then we got to change this. The fact of the matter is that classroom instruction at MIT is not particularly good. Your average community college has better teachers than MIT has. MIT's main focus is research, and it has motivated, brilliant students. If your students are good then you can have some professors that are totally incompetent at teaching. You can have a professor that just cannot teach, and then the students will figure out how to learn the material. MIT has a wonderful culture, and one part of the culture that is wonderful is that MIT teaches you to hate brands, and hate MIT. Quote by Sankaku I like your "out-of-the-box" thinking, twofish-quant. 
I doubt that MIT will ever change that drastically (or any of the other prestige universities as vanesch points out). I don't think we have a choice. The thing about branding is that brand perception can change in an instant. MIT like any other bureaucratic institution will refuse to change if it can survive without changing. But I don't really think it has a choice. The latest financial crisis are creating *huge* financial problems for the prestige universities. In 1950, if you wanted to learn freshman physics, you basically *had* to go to MIT. Today, you can download the entire class on the internet. That radically changes the world, and MIT has to figure out how to change with it. However, it is the kind of thinking that gets some other university who is not in the "club" to change and suddenly eat everyone else's lunch. University of Phoenix. Quote by AsianSensationK I do have something I'd like to ask though, if it's not too intrusive. What kind of programming jobs did you have before you made it to Wall Street? I worked for about five years as a software developer, my Ph.D. involved huge amounts of programming, then there is programming I did at MIT, then you can go back to when my father brought home the TRS-80 Model I that I used when I was six. Also, it's just not jobs, but I spend a lot of my free time programming. I do assume the types of projects you do are going to be dramatically different, but I'm just a little curious. It's not that different. I think that's why the best employers typically want both a resume and a cover letter from candidates. Maybe just to get a small sample of how you think. It's really not. No employer that I know of will hire someone because of their resume or cover letter. The problem is that the resume and cover letter is just something that you rapidly look through to eliminate people that have no chance of getting the job. You take the stack, arrange them in an order, and then you call up the top ten candidates. Then you send in the top five for several rounds of face to face interviews. You cannot find out how someone thinks from a piece of paper, and it's easy to create a good looking resume even if you know nothing. The resume is used to screen out candidates that obvious won't get the job, and then if you want to see how someone thinks, you give them math problems. Quote by AsianSensationK The trick is, MIT has an image and reputation that evokes ideas in a person's mind. It's a brand. There's a surprising amount of information conveyed in a brand name. There's really very little information conveyed in a brand. A lot of emotion, but very little information. Coca-Cola is a powerful, powerful brand. It's just fizzy water in bottles. But Coca-Cola tastes *great!!!* Hmmmm...... Could it be that it tastes great because you see so many commercials convincing you that it tastes great. Also this is really why you are in a better shape if you take some marketing courses. Graduate schools are trying to sell you something. *PROFESSOR* is a brand. The school's position is probably a little different though. They have a brand that they need to protect and that is done by carefully selecting who they allow into "the club." They need people to go out in the world, attain respected positions and be successful. Then donate. And the money that you get from donations go into glossy brochures that convince everybody else that the brand is cool. 
However, the trouble is that to maintain a brand, you need large amounts of cash, and most of the major elite universities got hit really, really hard by the financial crisis. Harvard lost $5 *billlion* and MIT lost something like$2 *billion*. It's much worse, because they were counting on 15% growth for money. If you end up with 3-5% growth, everything falls apart. My argument is that in the next three or four years, it will become obvious that MIT has to do something really radical to survive. One problem is that if you think of education as an industry, there is only room for about three or four prestige brands. The *BIG* money is where the University of Phoenix is at. Think Walmart. It's kind of an organic marketing process. Get smart, successful people to attend. Make money, gain respect. Then respect draws more smart, successful people to the school. And the people who are willing to throw money at them. The last part is where they are going to have problems...... I went to MIT. I'm smart. I'm successful. Why should I throw money at MIT when in fact I'll do much more social good if I throw money at a community college or raise money to improve inner city high schools? If MIT is going to focus on executive MBA programs and buying up Cambridge real estate, then that's great, but why should my money go to that. Now if MIT were spending my money improving science teaching at in poor neighborhoods in Boston, I can go for that, but then the system falls apart when you have large numbers of people that now have decent skills, but can't get into MIT because their backgrounds aren't perfect. Quote by atyy So if in fact "the best" is not true The problem here is "best" is subjective. OK you are a brilliant research mathematician but you stink at teaching algebra. Who made up the rule that you are "better"? (Any my answer, is that in the Cold War both the US and the Russians needed relatively small numbers of physicists and mathematicians to work on weapons systems, at so research got defined as "better.") I dare say many from MIT might even agree that there is no best - worthy human endeavours are more like the complex numbers than the reals. That's fine but then you run into a problem in that it's weird that people that keep on saying that there is no "best" usually end up with money, power, prestige. If you aren't "better" than me, then why are you telling me what to do? You are in a great position of power if you can define what "best" is, and people that get to define what the "best" is usually define it so that the "best" people are people that look, think, and act like they do. Actions speak louder than words, and you can usually figure out what someone really believes not by what they say, but how they act. Quote by atyy So if in fact "the best" is not true - then the non-elitist advice would not be for MIT to admit everyone, but to find the best somewhere else - another school, another profession It's easy to be an elitist, if you think you are going to be a member of the elite. I'm not an elitist because it turns out that its very likely that I'm *not* going to be in the elite. I'm a loser and a failure. One problem with getting into the "inner circle" is that once you get in, you find that there is an "inner inner circle". Once you get into the "inner inner circle" you find that there is an "inner, inner, inner circle" At some point you just throw your hands up and just want to blow the system up. Quote by twofish-quant There's really very little information conveyed in a brand. 
A lot of emotion, but very little information. Coca-Cola is a powerful, powerful brand. It's just fizzy water in bottles. But Coca-Cola tastes *great!!!* Hmmmm...... Could it be that it tastes great because you see so many commercials convincing you that it tastes great Actually, yes, this is the best way to phrase it. A brand is most like an emotional experience, or an attitude. There isn't a lot of relevant stuff conveyed in a brand name. But they're powerful things because of how people think. I have a quote in my marketing book from the CEO of McDonald's basically saying, "If we lost every single asset on our books today through some freak disaster, we would still make tons of money because of our brand. We could borrow what we need, and continue business with relatively little trouble. Heck, we could give you every single asset of ours, keep our brand, and still make more money than you this next quarter." And the money that you get from donations go into glossy brochures that convince everybody else that the brand is cool. However, the trouble is that to maintain a brand, you need large amounts of cash, and most of the major elite universities got hit really, really hard by the financial crisis. Harvard lost $5 *billlion* and MIT lost something like$2 *billion*. It's much worse, because they were counting on 15% growth for money. If you end up with 3-5% growth, everything falls apart. My argument is that in the next three or four years, it will become obvious that MIT has to do something really radical to survive. I believe it. They have to make more money somewhere in order to meet their budgets. Blog Entries: 2 Hello twofish-quant, some of my fellow students work at banks which got me interested. Does your job involve "financial mathematics" as described on this website "Finance for physicists" (for example option pricing)? Do you use any of the books cited on the website? And have you read Derman's "My life as a quant"? You mentioned your C++ skills. Is there anything else that can make me interesting for banks? Thanks in advance. Quote by Edgardo some of my fellow students work at banks which got me interested. Does your job involve "financial mathematics" as described on this website "Finance for physicists" (for example option pricing)? More or less. The trouble with that website is that it's a bit outdated. The good/bad news is that there are about a dozen different types of jobs for physicists on Wall Street. Something interesting about mathematical finance is that it's a tiny, tiny part of finance, but finance is so large that it keeps physicists quite strongly employed. Do you use any of the books cited on the website? Sort of, but again the website is out of date. Curiously it doesn't contain one of the books which I think is essential reading which is Kuznetsov's "The Complete Guide to Capital Markets" and Fusai and Roncoroni's Implementing Models in Quantitative Finance: Methods and Cases. Also you should look at Springer-Verlag and Wiley Finance since they will get you an idea of the type of problems that you need. And have you read Derman's "My life as a quant"? Yes. Interesting book, but again, somewhat out of date. It's a very, very useful book if you look at it was history, but don't take it as a accurate representation of what Wall Street is like right now. One of the larger differences is that these are much. much more heavily computational than they were in the late-1990's. This is bad if you are a string theorist. 
Good if you are doing computational fluid dynamics.

Just to give one example of how Derman's book is a bit old: no one right now is that interested in modeling complex interest rate derivatives. People are interested right now in modeling negative and zero interest rates and counterparty default. You won't find a book that gives the standard method for doing that, because if there were a standard method for doing it, they wouldn't need you to figure out how to do it. Next year, they'll be interested in something very different.

You mentioned your C++ skills. Is there anything else that can make me interesting for banks?

One big thing is to be a physicist and focus on writing a quality dissertation. There's nothing wrong with being a finance or economics Ph.D., except that if your Ph.D. is in physics, you are going to find it much easier to be a physicist with basic finance knowledge rather than an expert in finance. If you try to turn into an MBA or finance expert, then you'll probably lose the skills that make you interesting to banks and hedge funds.

One other curious thing is that you will be seriously, seriously miserable if you go into finance in order to make lots of money. It's really weird, because once you are in finance, you will make money that most people outside of finance consider to be huge, but because you are constantly meeting people that make even more money, you end up feeling quite poor. Finance is also a horrible place to be if you like to be on top of the heap, for the same reasons. Also you'll probably end up living in the NYC area, which is both good and bad.

The one thing that would be useful is to read some standard finance texts. There is a huge amount of jargon. It's not that difficult to pick up, and it helps if you know it.

The other thing is to try to *think* about the "big questions." Are there too many investment bankers? Is what you are doing really economically useful? Personally, I think finance is economically useful, and there aren't too many investment bankers, but you have to think about this for yourself. The basic issue is that if you aren't really generating economic value, then you are just part of a bubble, and if you are part of a bubble, you are likely to get smashed. The reason that you need to think about this for yourself is that I really don't want to take the moral responsibility for encouraging you to go into a career that is going to blow up in three years. On the other hand, I don't want to take the moral responsibility for discouraging you from going into something that is going to boom. So I'll just say that it's something that you have to think about.

Quote by twofish-quant Intelligence is overrated. I'm not particularly smart. Persistent as heck, yes, but I'm not a math genius. If you are brilliant, but you can't take orders and work with other people, then you are going to have serious, serious problems on Wall Street. Being pleasant and cheerful under stress is probably a much, much more important job qualification than intelligence. If you are of average intelligence but you are cool when everything falls apart, you are going to do better than someone that is a super math genius that people can't stand to talk to or who completely freezes when everything goes bad.

The reason Wall Street likes Ph.D.'s has little to do with intelligence. If you have a Ph.D.
then this is proof that once in your life you sat down and did a project on your own, and did whatever you needed to do to get the job done even if you were bored and depressed.

The reason Wall Street likes people that are good with math is that if you are good with math, you are probably very flexible. Suppose you are really, really good with algebraic topology. Now it turns out that algebraic topology isn't that useful, but suppose you find out that stochastic auto-correlation models are important. Someone that doesn't have a math Ph.D. may just wait around looking for someone to teach them this stuff, but no one has time. If you are really good with math, then you go on Google, download papers, buy some books from Amazon, and teach yourself what you need to know. Again, this is one reason why Wall Street likes Ph.D.'s, since this is the type of stuff that you have to do for your dissertation. If you have a Ph.D. and you don't know something, your first reaction is to go to the library to read about it, or to call someone up and talk to them about it. If you don't have a Ph.D., then you might just be sitting around waiting for someone to tell you what to do, which is just deadly, since everyone is too busy to tell you what you need to do, and no one really knows.

Twofish-Quant, I just happened to read your responses and wanted to ask for some advice. I was actually looking into these kinds of Wall Street jobs and got shot down since essentially all of them require an MS or Ph.D., and I currently have a BS in Physics and a minor in Mathematics. The only thing that is preventing me from getting an MS or Ph.D. in physics is the money. If it wasn't for that and $100k already in student debt, I would be comfortably studying for an MS and Ph.D.

To me, it sounds like you are saying that a Ph.D. is mostly used as an indicator of proof that someone can work independently for a long, long time without giving up easily and having the ability to adapt and learn new things (much like my time studying the physics major). I have many of these qualities and have the determination to teach myself new things. In fact, I am moving towards a different career path as an Actuary (while still looking for physics-related jobs). So I had to go out, buy Actuary texts, study them and take special exams for this field. I've never taken any finance or Actuary classes in college, yet I am able to improvise. The benefit here is I can actually afford these books rather than a Ph.D. On top of that, I've taught myself computer skills and programming to a point good enough to build and run an online business over the past 5 years (more proof of teaching myself new skills to make money).

Now what am I supposed to do in order to get one of these Wall Street jobs by getting around the MS and Ph.D. wall that is pretty much impossible for me because of a lack of money? To me it sounds like I can fit right in with that Ph.D. mindset and research determination to do tough long jobs, plus I have some intelligence, but the only thing missing is the actual Ph.D. or MS title. Any advice? Thanks
http://mathoverflow.net/questions/37424/why-are-multinomial-coefficients-with-same-entropy-equal-usually/37437
## Why are multinomial coefficients with same entropy equal? (usually) ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Suppose $p_1,\ldots,p_d$ and $q_1,\ldots,q_d$ are positive real numbers such that $$p_1+\cdots+p_d=q_1+\cdots+q_d=n$$ and $$p_1 \log p_1+\cdots+p_d\log p_d=q_1 \log q_1+\cdots+q_d \log q_d$$ Then the following seems to hold $$\frac{n!}{p_1!\cdots p_d!}=\frac{n!}{q_1!\cdots q_d!}$$ why? Edit: JBL correctly notices that it doesn't always hold. I just didn't go far enough. Still, it's surprising to me that it holds so frequently. If we put a black disk at x,y if equality seems to hold (in machine precision) for x=n,y=d, and positive integer coefficients, it'll look like this Red circle is JBL's example. Blue circle is n=18,d=3 which fails for (12,3,3) and (9,8,1). ```docheck[n_, d_] := ( coefs = IntegerPartitions[n, {d}, Range[1, n]]; entropy[x_] := N[Total[# Log[#] & /@ x]]; groupedCoefs = GatherBy[coefs, entropy]; allEqual[list_] := And @@ (First[list] == # & /@ list); multinomials = Apply[Multinomial, groupedCoefs, {2}]; And @@ (allEqual /@ multinomials) ); vals = Table[docheck[#, d] & /@ Range[1, 30], {d, 1, 20}]; Graphics[Table[Disk[{n, d}, If[vals[[d, n]], .45, .1]], {d, 1, Length[vals]}, {n, 1, 30}]] ``` Edit: Updated version that does exact checking and allows coefficients with 0 components. Still only one example of failure for d=3. ```docheck[n_, d_] := (coefs = IntegerPartitions[n, {d}, Range[0, n]]; entropy[x_] := Exp[Total[If[# == 0, 0, # Log[#]] & /@ x]]; groupedCoefs = GatherBy[coefs, entropy]; allEqual[list_] := And @@ (First[list] == # & /@ list); multinomials = Apply[Multinomial, groupedCoefs, {2}]; And @@ (allEqual /@ multinomials)); maxn = 30; maxd = 20; vals = Table[docheck[#, d] & /@ Range[1, maxn], {d, 1, maxd}]; Graphics[Table[Disk[{n, d}, If[vals[[d, n]], .45, .1]], {d, 1, Length[vals]}, {n, 1, maxn}]] ``` - Can you construct two partitions of $n$ with the same entropy that are not rearrangements of each other? (Also, why do you apply Permutations over IntegerPartitions? Every permutation of a partition of $n$ clearly has the same entropy and multinomial coefficient, so you can just take what IntegerPartitions spits out instead.) – drvitek Sep 1 2010 at 21:04 Permutations/IntegerPartitions is a Mathematica speed trick for listing all multinomial coefficients --stackoverflow.com/questions/3563762/… – Yaroslav Bulatov Sep 1 2010 at 21:09 but yes, you could make the checking faster by ignoring permutations – Yaroslav Bulatov Sep 1 2010 at 21:10 After all, $p_1 \log p_1+\cdots+p_d\log p_d=c$ defines a certain hypersurface in $\mathbb{R}_+^d$. In a sense, it is more surprising that in some cases it can meets more than one point of the lattice $\mathbb{Z}^d$, as in JBL's example. – Pietro Majer Sep 1 2010 at 23:49 Note that for the entropies to be equal, any prime that divides one of the $p_i$ must also divide one of the $q_i$, and vice versa. That alone may rule out many chances. Note that the blue and red circles are on the same diagonal of slope 1. Does absence of a black disk indicate existence of a counterexample or just ignorance of the status? – Gerry Myerson Sep 2 2010 at 0:02 show 2 more comments ## 3 Answers It's false: consider the two nine-part partitions $\langle 8, 2^8\rangle$ and $\langle 4^5, 1^4\rangle$ of $24$, where exponentiation represents multiplicities. They have equal entropy, but not the same multinomial coefficient. (I have no idea whether this example is minimal. 
I generated it by considering partitions whose parts are all powers of 2, since this simplifies the entropy condition. If we allow only three powers of $2$ to appear in our equations, we must solve $a + b + c = d + e + f$, $a + 2b + 4c = d + 2e + 4f$ and $2b + 8c = 2e + 8f$, with unique solution $d = a, e = b, f = c$. However, if we allow our partitions to include 1, 2, 4 and 8 then we still have only three constraints but now four variables, and we can find nontrivial solutions.) - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The asymptotics of multinomial coefficients for large values of $n$ can be expressed in terms of entropy using Stirling's formula to find $\ln(p_1!p_2!\ldots p_n!)$ approximately. This approximation works well even for small values of $p_k$'s. - Terence Tao gives a statement of this approximation for binomial coefficients here: terrytao.wordpress.com/2010/01/02/… – Qiaochu Yuan Sep 2 2010 at 1:57 2 Stirling's implies that coefficient with entropy H has value approximately Exp[n H], but that approximation seems too loose to explain observed closeness of equal entropy coefficients. IE, for d=3, n=18, that approximation can be off by a factor of 10, yet coefficients with equal entropy tend to be equal – Yaroslav Bulatov Sep 2 2010 at 3:33 EDIT You still have not given interesting examples of equal multinomal coefficients with equal entropy. I'm not saying there aren't any (or many) but it would help to see some. Or even interesting examples (especially for d=3) with at least the same prime divisors showing up. Either of these tasks is pretty easy (I added the condition disjoint which is no loss) : 1. find n and d and two disjoint d-tuples of integers A and B both with sum n and $\prod_A a!=\prod_B b!$ (equal multinomials with the same n and d) 2. find n and d and two disjoint d-tuples of integers A and B with sum n $\prod_A a^a=\prod_B b^b$ (equal entropy) I'd venture that most of the time a solution of one is not a solution of the other. For equal entropy: There is a 4 parameter family of solutions to the equal entropy problem (usually with d large) using 1,2,3,4,6,8,9,12 spread out between the sets A and B (Choose the number of 4s,8s,9s and 12s and where they go, there is a unique way to choose the 1s,2s,3s and 6s) . If one of the sets uses 12 then there will be a discrepancy in the multinomial coefficients related to the prime 11. for equal multinomials: `$<x,y,xy-1>$` and `$<x-1,y-1,xy>$` give equal product of factorials so `$<x,y,xy-1,u-1,v-1,uv>$` and `$<x-1,y-1,xy,u,v,uv-1>$` gives a d=6 case of the multinomial problem. It is easy to arrange for cancelation. Set u=y and v=xy-1 to get a d=3 multinomial `$<x,xy-2,(xy-1)y>$` and `$<x-1,xy,(xy-1)y-1>$`. Here there will be primes present in one side but not the other so even the shifted entropy won't be exactly the same. previous answer - The missing black dot at n=19,d=4 is from $<6,6,6,1>,<9,2,4,4>$ which have equal entropy yet the corresponding multinomial coefficients have ratio 25/28. - The missing black dot at n=20,d=5 is from $<8,3,3,3,3>,<6,6,4,2,2>$ which have equal entropy yet the corresponding multinomial coefficients have ratio 21/20 . - What are some interesting examples of equal entropy and equal multinomial coefficients? - The ratio of similar binomial coeffcients will usually consist of powers of relatively small primes so coincidences are likely. 
- There are an enormous number of equal multinomial coefficients and they do not lead to equal entropy in most cases (not that you say they will). - $(19,4)$ and $(20,5)$ lie on that same diagonal as the red and blue circles in the first diagram. Time to look at $(21,6)$, $(22,7)$, ...? – Gerry Myerson Sep 2 2010 at 6:51 btw, the grid of n,d for which equality always holds seems to get more filled out if we offset entropy by 1/2, ie, look for difference in coefficient values for which the following quantity is equal $(k_1−1/2)\log k_1+⋯+(k_d−1/2)\log k_d$ yaroslavvb.com/upload/multinomial-equality3.png – Yaroslav Bulatov Sep 2 2010 at 7:46 Since Stirlings formula is $ln(k!)=(k+1/2)ln(k)-k+ln(2 \pi)/2$ (approx) I would expect $(k+1/2)log(k)$ to be even better. But do you have examples? – Aaron Meyerowitz Sep 2 2010 at 20:50
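For readers who want to re-run these checks outside Mathematica, the following Python sketch (an added illustration, not part of the original thread) verifies the two counterexamples quoted above. It sidesteps the machine-precision issue by using exact integer arithmetic: for positive integer parts, equal entropy is equivalent to equal values of $\prod_i p_i^{p_i}$.

```
from math import factorial

def multinomial(parts):
    # n!/(p1! ... pd!), computed exactly with integer arithmetic
    result = factorial(sum(parts))
    for p in parts:
        result //= factorial(p)
    return result

def entropy_key(parts):
    # exact stand-in for exp(sum p*log p); two partitions have equal entropy
    # exactly when their keys are equal (0 log 0 is taken as 0)
    key = 1
    for p in parts:
        if p > 0:
            key *= p ** p
    return key

# JBL's counterexample: two nine-part partitions of 24
a = (8,) + (2,) * 8        # <8, 2^8>
b = (4,) * 5 + (1,) * 4    # <4^5, 1^4>
print(entropy_key(a) == entropy_key(b))  # True: equal entropy
print(multinomial(a), multinomial(b))    # different multinomial coefficients

# the n = 19, d = 4 example: <6,6,6,1> and <9,2,4,4>
c = (6, 6, 6, 1)
d = (9, 2, 4, 4)
print(entropy_key(c) == entropy_key(d))  # True: equal entropy
print(multinomial(c), multinomial(d))    # coefficients differ by a factor of 28/25
```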
http://physics.stackexchange.com/questions/22163/differentiating-inside-an-integral-sign
# Differentiating inside an integral sign I'm reading John Taylor's Classical Mechanics book and I'm at the part where he's deriving the Euler-Lagrange equation. Here is the part of the derivation that I didn't follow: I don't get how he goes from 6.9 to 6.10 by partial-differentiating the term inside the integral. If this is allowed, I was probably missed my calculus class the day it was covered. Can someone tell me more about this? Which part of calculus is this from? - Be sure to check out the 'in popular culture' section of the wikipedia entry on differentiating under the integral sign for the anecdote about Feynman. Apparently, it is a long-standing problem that this powerful tool is not typically taught in introductory calculus classes. – kleingordon Mar 10 '12 at 5:23 2 @kleingordon interesting. It was never taught to us either, but our teacher used it to solve a sum in a beautiful manner. Fortunately, he gave a quick intuitive explanation when he realised that none of us knew it. I guess the rule is so obvious to some that they forget to teach it. – Manishearth♦ Mar 10 '12 at 5:32 2 @Manishearth, I agree that it seems like a pretty obvious thing to do (at least in the simple case when the limits aren't functions of your variable), but often enough in math doing something that seems obvious can get you into trouble, especially when you're relatively inexperienced. I think there should be a major effort to put this into the standard calc curriculum. – kleingordon Mar 10 '12 at 5:37 Well, now I know for certain that I'm not going to be a Feynmann. :( – Joebevo Mar 10 '12 at 5:39 Huh, interesting, I never realized that this was a common misunderstanding. Good question Joebevo :-) – David Zaslavsky♦ Mar 11 '12 at 6:20 show 2 more comments ## 1 Answer It's known as the Leibniz integral rule. As long as $\alpha$ is not the variable being integrated over, then $$\frac{\mathrm{d}}{\mathrm{d}\alpha}\int f(x,\alpha) \mathrm{d}x=\int\frac{\partial f(x,\alpha)}{\partial \alpha}\mathrm{d}x$$ $x$ will not be present outside the integral anyways (due to limits of the integral). As it is, while differentiating wrt $\alpha$, $x$ is constant. So it becomes a partial derivative inside. You may want to check out the proof and more complicated forms (involving limits as functions of $\alpha$) on the linked wiki page. - It's not that intuitive to me, but I can see that it works by plugging in simple functions. Thanks. – Joebevo Mar 10 '12 at 6:06 1 Don't worry, I hated it too at first :) – Manishearth♦ Mar 10 '12 at 6:24
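A quick numerical sanity check of the rule (added here as an illustration, not part of the original thread): take $f(x,\alpha)=\sin(\alpha x)$ on $[0,1]$, differentiate the integral with a finite difference, and compare against the integral of $\partial f/\partial\alpha = x\cos(\alpha x)$. A small Python sketch:

```
import math

def integrate(f, a, b, n=20000):
    # simple midpoint rule; plenty accurate for this smooth integrand
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def f(x, alpha):
    return math.sin(alpha * x)

def df_dalpha(x, alpha):
    return x * math.cos(alpha * x)  # partial derivative of f with respect to alpha

alpha, eps = 1.3, 1e-5

F = lambda al: integrate(lambda x: f(x, al), 0.0, 1.0)
lhs = (F(alpha + eps) - F(alpha - eps)) / (2 * eps)       # d/d(alpha) of the integral
rhs = integrate(lambda x: df_dalpha(x, alpha), 0.0, 1.0)  # integral of the partial derivative

print(lhs, rhs)  # the two values agree to many decimal places
```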
http://en.wikipedia.org/wiki/Bingham_plastic
# Bingham plastic Mayonnaise is a Bingham plastic. The surface has ridges and peaks because Bingham plastics mimic solids under low shear stresses. A Bingham plastic is a viscoplastic material that behaves as a rigid body at low stresses but flows as a viscous fluid at high stress. It is named after Eugene C. Bingham who proposed its mathematical form.[1] It is used as a common mathematical model of mud flow in drilling engineering, and in the handling of slurries. A common example is toothpaste,[2] which will not be extruded until a certain pressure is applied to the tube. It then is pushed out as a solid plug. ## Explanation Figure 1. Bingham Plastic flow as described by Bingham Figure 1 shows a graph of the behaviour of an ordinary viscous (or Newtonian) fluid in red, for example in a pipe. If the pressure at one end of a pipe is increased this produces a stress on the fluid tending to make it move (called the shear stress) and the volumetric flow rate increases proportionally. However for a Bingham Plastic fluid (in blue), stress can be applied but it will not flow until a certain value, the yield stress, is reached. Beyond this point the flow rate increases steadily with increasing shear stress. This is roughly the way in which Bingham presented his observation, in an experimental study of paints.[3] These properties allow a Bingham plastic to have a textured surface with peaks and ridges instead of a featureless surface like a Newtonian fluid. Figure 2. Bingham Plastic flow as described currently Figure 2 shows the way in which it is normally presented currently.[2] The graph shows shear stress on the vertical axis and shear rate on the horizontal one. (Volumetric flow rate depends on the size of the pipe, shear rate is a measure of how the velocity changes with distance. It is proportional to flow rate, but does not depend on pipe size.) As before, the Newtonian fluid flows and gives a shear rate for any finite value of shear stress. However, the Bingham Plastic again does not exhibit any shear rate (no flow and thus no velocity) until a certain stress is achieved. For the Newtonian fluid the slope of this line is the viscosity, which is the only parameter needed to describe its flow. By contrast the Bingham Plastic requires two parameters, the yield stress and the slope of the line, known as the plastic viscosity. The physical reason for this behaviour is that the liquid contains particles (e.g. clay) or large molecules (e.g. polymers) which have some kind of interaction, creating a weak solid structure, formerly known as a false body, and a certain amount of stress is required to break this structure. Once the structure has been broken, the particles move with the liquid under viscous forces. If the stress is removed, the particles associate again. ## Definition The material is an elastic solid for shear stress τ, less than a critical value $\tau_0$. 
Once the critical shear stress (or "yield stress") is exceeded, the material flows in such a way that the shear rate, ∂u/∂y (as defined in the article on viscosity), is directly proportional to the amount by which the applied shear stress exceeds the yield stress:

$\frac{\partial u}{\partial y} = \begin{cases} 0, & \tau < \tau_0 \\ (\tau - \tau_0)/\mu_\infty, & \tau \ge \tau_0 \end{cases}$

## Friction Factor Formulae

In fluid flow, it is a common problem to calculate the pressure drop in an established piping network.[4] Once the friction factor, f, is known, it becomes easier to handle different pipe-flow problems, viz. calculating the pressure drop for evaluating pumping costs or finding the flow rate in a piping network for a given pressure drop. It is usually extremely difficult to arrive at an exact analytical solution for the friction factor associated with flow of non-Newtonian fluids, and therefore explicit approximations are used to calculate it. Once the friction factor has been calculated, the pressure drop can easily be determined for a given flow by the Darcy–Weisbach equation:

$f = \frac{2 h_f g D}{L V^2}$

where:

• $h_f$ is the frictional head loss (SI units: m)
• $f$ is the Darcy friction factor (SI units: dimensionless)
• $L$ is the pipe length (SI units: m)
• $g$ is the gravitational acceleration (SI units: m/s²)
• $D$ is the pipe diameter (SI units: m)
• $V$ is the mean fluid velocity (SI units: m/s)

### Laminar flow

An exact description of friction loss for Bingham plastics in fully developed laminar pipe flow was first published by Buckingham.[5] His expression, the Buckingham-Reiner equation, can be written in a dimensionless form as follows:

$f_L = \frac{64}{Re}\left[1 + \frac{He}{6\,Re} - \frac{64}{3}\left(\frac{He^4}{f_L^3\,Re^7}\right)\right]$

where:

• $f_L$ is the laminar flow friction factor (SI units: dimensionless)
• $Re$ is the Reynolds number (SI units: dimensionless)
• $He$ is the Hedstrom number (SI units: dimensionless)

The Reynolds number and the Hedstrom number are respectively defined as:

$\mathrm{Re} = \frac{\rho V D}{\mu}$, and $\mathrm{He} = \frac{\rho D^2 \tau_0}{\mu^2}$

where:

• $\rho$ is the mass density of fluid (SI units: kg/m³)
• $\mu$ is the dynamic viscosity of fluid (SI units: kg/(m·s))

### Turbulent flow

Darby and Melson developed an empirical expression[6] that was then refined, and is given by:[7]

$f_T = 10^{a}\, Re^{-0.193}$

where:

• $f_T$ is the turbulent flow friction factor (SI units: dimensionless)
• $a = -1.47\left[1 + 0.146\, e^{-2.9\times 10^{-5} He}\right]$

## Approximations of the Buckingham-Reiner equation

Although an exact analytical solution of the Buckingham-Reiner equation can be obtained because it is a fourth-order polynomial equation in f, due to the complexity of the solution it is rarely employed. Therefore, researchers have tried to develop explicit approximations for the Buckingham-Reiner equation.

### Swamee-Aggarwal Equation

The Swamee-Aggarwal equation is used to solve directly for the Darcy–Weisbach friction factor f for laminar flow of Bingham plastic fluids.[8] It is an approximation of the implicit Buckingham-Reiner equation, but the discrepancy from experimental data is well within the accuracy of the data.
The Swamee-Aggarwal equation is given by: $\ f_L = \ {64 \over Re} + {10.67 + 0.1414{({He\over Re})^{1.143}}\over {\left[1 + 0.0149{({He\over Re})^{1.16}}\right]Re }}\left({He\over Re}\right)$ ### Danish-Kumar Solution Danish et al. have provided an explicit procedure to calculate the friction factor f by using the Adomian decomposition method.[9] The friction factor containing two terms through this method is given as: $f_L = \frac{K_1 + \dfrac{4 K_2}{\left( K_1 + \frac{K_1 K_2}{K_1^4 + 3 K_2}\right)^3}}{1+ \dfrac{3 K_2}{\left(K_1 + \frac{K_1 K_2}{K_1^4 + 3 K_2}\right)^4}}$ where: $\ K_1 = \ {16 \over Re} + {16 He \over 6{Re^2}}$, and $\ K_2 = \ - {16 {He^4} \over 3{Re^8}}$ ## Combined Equation for friction factor for all flow regimes ### Darby-Melson Equation In 1981, Darby and Melson, using the approach of Churchill[10] and of Churchill and Usagi,[11] developed an expression to get a single friction factor equation valid for all flow regimes:[6] $\ f = \ {\left[{f_L}^m + {f_T}^m\right]}^{1\over m}$ where: $\ m = \ 1.7 + {40000\over Re}$ Both Swamee-Aggarwal equation and the Darby-Melson equation can be combined to give an explicit equation for determining the friction factor of Bingham plastic fluids in any regime. Relative roughness is not a parameter in any of the equations because the friction factor of Bingham plastic fluids is not sensitive to pipe roughness. ## References 1. E.C. Bingham,(1916) U.S. Bureau of Standards Bulletin, 13, 309-353 "An Investigation of the Laws of Plastic Flow" 2. ^ a b J. F. Steffe (1996) Rheological Methods in Food Process Engineering 2nd ed ISBN 0-9632036-1-4 3. E. C. Bingham (1922) Fluidity and Plasticity McGraw-Hill (New York) page 219 4. Darby, Ron (1996). Chemical Engineering Fluid Mechanics. Marcel Dekker. ISBN 0-8247-0444-4 . See Chapter 6. 5. Buckingham, E. (1921). "on Plastic Flow through Capillary Tubes". ASTM Proceedings 21: 1154–1156. 6. ^ a b Darby, R. and Melson J.(1981). "How to predict the friction factor for flow of Bingham plastics". Chemical Engineering 28: 59–61. 7. Darby, R. et al. (1992). "Prediction friction loss in slurry pipes." Chemical Engineering September: . 8. Swamee, P.K. and Aggarwal, N.(2011). "Explicit equations for laminar flow of Bingham plastic fluids". Journal of Petroleum Science and Engineering. doi:10.1016/j.petrol.2011.01.015. 9. Danish, M. et al. (1981). "Approximate explicit analytical expressions of friction factor for flow of Bingham fluids in smooth pipes using Adomian decomposition method". Communications in Nonlinear Science and Numerical Simulation 16: 239–251. 10. Churchill, S.W. (1977). "Friction factor equation spans all fluid-flow regimes". Chemical Engineering Nov. 7: 91–92. 11. Churchill, S.W. and Usagi, R.A. (1972). "A general expression for the correlation of rates of transfer and other phenomena". AIChE Journal 18(6): 1121-1128.
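To show how the correlations above fit together in practice, here is a short Python sketch (an illustrative addition, not from the article; the fluid properties and pipe dimensions are made-up example values). It evaluates Re and He, the Swamee-Aggarwal laminar estimate, the Darby-Melson turbulent estimate, combines them with the all-regime blend, and converts the result to a pressure drop via the Darcy–Weisbach equation.

```
import math

def bingham_friction_factor(rho, mu, tau0, D, V):
    # friction factor for Bingham plastic pipe flow, using the correlations above
    Re = rho * V * D / mu              # Reynolds number
    He = rho * D**2 * tau0 / mu**2     # Hedstrom number

    # Swamee-Aggarwal approximation of the laminar (Buckingham-Reiner) friction factor
    x = He / Re
    fL = 64.0 / Re + (10.67 + 0.1414 * x**1.143) / ((1.0 + 0.0149 * x**1.16) * Re) * x

    # Darby-Melson turbulent correlation
    a = -1.47 * (1.0 + 0.146 * math.exp(-2.9e-5 * He))
    fT = 10.0**a * Re**(-0.193)

    # single expression valid for all flow regimes
    m = 1.7 + 40000.0 / Re
    return (fL**m + fT**m)**(1.0 / m)

# made-up example values: a slurry in a 0.1 m pipe
rho, mu, tau0 = 1200.0, 0.02, 5.0      # kg/m^3, plastic viscosity in Pa*s, yield stress in Pa
D, V, L, g = 0.1, 2.0, 100.0, 9.81     # pipe diameter m, velocity m/s, length m, gravity m/s^2

f = bingham_friction_factor(rho, mu, tau0, D, V)
h_f = f * L * V**2 / (2.0 * g * D)     # frictional head loss from Darcy-Weisbach
delta_p = rho * g * h_f                # pressure drop in Pa
print(f, h_f, delta_p)
```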
http://pediaview.com/openpedia/Function_(mathematics)
# Function (mathematics) A function f takes an input x, and returns an output f(x). One metaphor describes the function as a "machine" or "black box" that for each input returns a corresponding output. The red curve is the graph of a function f in the Cartesian plane, consisting of all points with coordinates of the form (x,f(x)). The property of having one output for each input is represented geometrically by the fact that each vertical line (such as the yellow line through the origin) has exactly one crossing point with the curve. In mathematics, a function[1] is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x2. The output of a function f corresponding to an input x is denoted by f(x) (read "f of x"). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9. The input variable(s) are sometimes referred to as the argument(s) of the function. Functions are "the central objects of investigation"[2] in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function can be described through its relationship with other functions, for example as an inverse function or as a solution of a differential equation. The input and output of a function can be expressed as an ordered pair, ordered so that the first element is the input (or tuple of inputs, if the function takes more than one input), and the second is the output. In the example above, f(x) = x2, we have the ordered pair (−3, 9). If both input and output are real numbers, this ordered pair can be viewed as the Cartesian coordinates of a point on the graph of the function. But no picture can exactly define every point in an infinite set. In modern mathematics, a function is defined by its set of inputs, called the domain, a set containing the outputs, called its codomain, and the set of all paired input and outputs, called the graph. For example, we could define a function using the rule f(x) = x2 by saying that the domain and codomain are the real numbers, and that the ordered pairs are all pairs of real numbers (x, x2). Collections of functions with the same domain and the same codomain are called function spaces, the properties of which are studied in such mathematical disciplines as real analysis and complex analysis. In analogy with arithmetic, it is possible to define addition, subtraction, multiplication, and division of functions, in those cases where the output is a number. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. ## Introduction and examples[] A function that associates to any of the four colored shapes its color. For an example of a function, let X be the set consisting of four shapes: a red triangle, a yellow rectangle, a green hexagon, and a red square; and let Y be the set consisting of five colors: red, blue, green, pink, and yellow. Linking each shape to its color is a function from X to Y: each shape is linked to a color (i.e., an element in Y), and each shape is linked to exactly one color. 
There is no shape that lacks a color and no shape that has two or more colors. This function will be referred to as the "color-of-the-shape function". The input to a function is called the argument and the output is called the value. The set of all permitted inputs to a given function is called the domain of the function, while the set of permissible outputs is called the codomain. Thus, the domain of the "color-of-the-shape function" is the set of the four shapes, and the codomain consists of the five colors. The concept of a function does not require that every possible output is the value of some argument, e.g. the color blue is not the color of any of the four shapes in X. A second example of a function is the following: the domain is chosen to be the set of natural numbers (1, 2, 3, 4, ...), and the codomain is the set of integers (..., −3, −2, −1, 0, 1, 2, 3, ...). The function associates to any natural number n the number 4−n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain. The function associates a polygon with its number of vertices. For example, a triangle is associated with the number 3, a square with the number 4, and so on. The term range is sometimes used either for the codomain or for the set of all the actual values a function has. To avoid ambiguity this article avoids using the term. ## Definition[] The above diagram represents a function with domain $\{ 1, 2, 3 \}$, codomain $\{ A, B, C, D \}$ and set of ordered pairs $\{ (1,D), (2,C), (3,C) \}$. The image is $\{C,D\}$. However, this second diagram does not represent a function, since 2 is the first element in more than one ordered pair. In particular, (2, B) and (2, C) are both elements of the set of ordered pairs. In order to avoid the use of the not rigorously defined words "rule" and "associates", the above intuitive explanation of functions is completed with a formal definition. This definition relies on the notion of the cartesian product. The cartesian product of two sets X and Y is the set of all ordered pairs, written (x, y), where x is an element of X and y is an element of Y. The x and the y are called the components of the ordered pair. The cartesian product of X and Y is denoted by X × Y. A function f from X to Y is a subset of the cartesian product X × Y subject to the following condition: every element of X is the first component of one and only one ordered pair in the subset.[3] In other words, for every x in X there is exactly one element y such that the ordered pair (x, y) is contained in the subset defining the function f. This formal definition is a precise rendition of the idea that to each x is associated an element y of Y, namely the uniquely specified element y with the property just mentioned. Considering the "color-of-the-shape" function above, the set X is the domain consisting of the four shapes, while Y is the codomain consisting of five colors. There are twenty possible ordered pairs (four shapes times five colors), one of which is ("yellow rectangle", "red"). The "color-of-the-shape" function described above consists of the set of those ordered pairs, (shape, color) where the color is the actual color of the given shape. Thus, the pair ("red triangle", "red") is in the function, but the pair ("yellow rectangle", "red") is not. 
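Since the formal definition above amounts to "a set of ordered pairs in which every element of X occurs exactly once as a first component", it can be stated directly in code. The following Python sketch (an illustration added for this version, not part of the original article text) encodes the "color-of-the-shape" function and checks that property:

```
X = {"red triangle", "yellow rectangle", "green hexagon", "red square"}  # domain
Y = {"red", "blue", "green", "pink", "yellow"}                           # codomain

# the function as a set of ordered pairs, stored as a dict (input -> output);
# dict keys are unique, so each first component can occur only once
color_of = {
    "red triangle": "red",
    "yellow rectangle": "yellow",
    "green hexagon": "green",
    "red square": "red",
}

def is_function(mapping, domain, codomain):
    # True iff every element of the domain appears exactly once as a first
    # component and every output lies in the codomain
    return set(mapping) == domain and all(v in codomain for v in mapping.values())

print(is_function(color_of, X, Y))   # True
print(color_of["green hexagon"])     # "green": the image of the green hexagon
```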
## Notation[] A function f with domain X and codomain Y is commonly denoted by $f: X \rightarrow Y$ or $X \stackrel f \rightarrow Y.$ In this context, the elements of X are called arguments of f. For each argument x, the corresponding unique y in the codomain is called the function value at x or the image of x under f. It is written as f(x). One says that f associates y with x or maps x to y. This is abbreviated by $y = f(x).$ A general function is often denoted by f. If a function is often used, it may be given a special name as, for example, the signum function of a real number x is denoted by sgn(x). The argument is often denoted by the symbol x, but in other contexts may be denoted differently, as well. For example, in physics, the velocity of some body, depending on the time, is denoted v(t). It is common to omit the parentheses around the argument when there is little chance of confusion, thus: sin x; this is known as prefix notation. In order to specify a concrete function, the notation $\mapsto$ (an arrow with a bar at its tail) is used. For example, the above function reads $\begin{align} f\colon \mathbb{N} &\to \mathbb{Z} \\ x &\mapsto 4-x. \end{align}$ The first part is read: • "f is a function from $\mathbb{N}$ (the set of natural numbers) to $\mathbb{Z}$ (the set of integers)" or • "f is an $\mathbb{Z}$-valued function of an $\mathbb{N}$-valued variable". The second part is read "x maps to 4−x." In other words, this function has the natural numbers as domain, the integers as codomain. A function is properly defined only when the domain and codomain are specified. For example, the formula f(x) = 4 − x alone (without specifying the codomain and domain) is not a properly defined function. Moreover, the function $\begin{align} g\colon \mathbb{Z} &\to \mathbb{Z} \\ x &\mapsto 4-x. \end{align}$ (with different domain) is not considered the same function, even though the formulas defining f and g agree, and similarly with a different codomain. Despite that, many authors drop the specification of the domain and codomain, especially if these are clear from the context. So in this example many just write f(x) = 4 − x. Sometimes, the maximal possible domain is also understood implicitly: a formula such as $f(x)=\sqrt{x^2-5x+6}$ may mean that the domain of f is the set of real numbers x where the square root is defined (in this case x ≤ 2 or x ≥ 3).[4] To define a function, sometimes a dot notation is used in order to emphasize the functional nature of an expression without assigning a special symbol to the variable. For instance, $\scriptstyle a(\cdot)^2$ stands for the function $\textstyle x\mapsto ax^2$, $\scriptstyle \int_a^{\, \cdot} f(u)du$ stands for the integral function $\scriptstyle x\mapsto \int_a^x f(u)du$, and so on. ## Specifying a function[] A function can be defined by any mathematical condition relating each argument (input value) to the corresponding output value. If the domain is finite, a function f may be defined by simply tabulating all the arguments x and their corresponding function values f(x). More commonly, a function is defined by a formula, or (more generally) an algorithm — a recipe that tells how to compute the value of f(x) given any x in the domain. There are many other ways of defining functions. Examples include piecewise definitions, induction or recursion, algebraic or analytic closure, limits, analytic continuation, infinite series, and as solutions to integral and differential equations. 
The lambda calculus provides a powerful and flexible syntax for defining and combining functions of several variables. In advanced mathematics, some functions exist because of an axiom, such as the Axiom of Choice. ### Graph[] The graph of a function is its set of ordered pairs F. This is an abstraction of the idea of a graph as a picture showing the function plotted on a pair of coordinate axes; for example, (3, 9), the point above 3 on the horizontal axis and to the right of 9 on the vertical axis, lies on the graph of y=x2. ### Formulas and algorithms[] Different formulas or algorithms may describe the same function. For instance f(x) = (x + 1) (x − 1) is exactly the same function as f(x) = x2 − 1.[5] Furthermore, a function need not be described by a formula, expression, or algorithm, nor need it deal with numbers at all: the domain and codomain of a function may be arbitrary sets. One example of a function that acts on non-numeric inputs takes English words as inputs and returns the first letter of the input word as output. The factorial function $!: \mathbb{N} \rightarrow \mathbb{N}$ is defined by the following inductive algorithm: 1! is defined to be 1, and n! is defined to be $n (n-1)!$, using that the factorial of the predecessor was already defined. Unlike other functions, the factorial function is denoted with the exclamation mark (serving as the symbol of the function) after the variable (postfix notation). ### Computability[] Main article: computable function Functions that send integers to integers, or finite strings to finite strings, can sometimes be defined by an algorithm, which gives a precise description of a set of steps for computing the output of the function from its input. Functions definable by an algorithm are called computable functions. For example, the Euclidean algorithm gives a precise process to compute the greatest common divisor of two positive integers. Many of the functions studied in the context of number theory are computable. Fundamental results of computability theory show that there are functions that can be precisely defined but are not computable. Moreover, in the sense of cardinality, almost all functions from the integers to integers are not computable. The number of computable functions from integers to integers is countable, because the number of possible algorithms is. The number of all functions from integers to integers is higher: the same as the cardinality of the real numbers. Thus most functions from integers to integers are not computable. Specific examples of uncomputable functions are known, including the busy beaver function and functions related to the halting problem and other undecidable problems. ## Basic properties[] There are a number of general basic properties and notions. In this section, f is a function with domain X and codomain Y. ### Image and preimage[] Main article: Image (mathematics) The graph of the function f(x) = x3 − 9x2 + 23x − 15. The interval A = [3.5, 4.25] is a subset of the domain, thus it is shown as part of the x-axis (green). The image of A is (approximately) the interval [−3.08, −1.08]. It is obtained by projecting to the y-axis (along the blue arrows) the intersection of the graph with the light green area consisting of all points whose x-coordinate is between 3.5 and 4.25. the part of the (vertical) y-axis shown in blue. The preimage of B = [1, 2.5] consists of three intervals. They are obtained by projecting the intersection of the light red area with the graph to the x-axis. 
If A is any subset of the domain X, then f(A) is the subset of the codomain Y consisting of all images of elements of A. We say the f(A) is the image of A under f. The image of f is given by f(X). On the other hand, the inverse image (or preimage, complete inverse image) of a subset B of the codomain Y under a function f is the subset of the domain X defined by $f^{-1}(B) = \{x \in X : f(x) \in B\}.$ So, for example, the preimage of {4, 9} under the squaring function is the set {−3,−2,2,3}. The term range usually refers to the image,[6] but sometimes it refers to the codomain. By definition of a function, the image of an element x of the domain is always a single element y of the codomain. Conversely, though, the preimage of a singleton set (a set with exactly one element) may in general contain any number of elements. For example, if f(x) = 7 (the constant function taking value 7), then the preimage of {5} is the empty set but the preimage of {7} is the entire domain. It is customary to write f−1(b) instead of f−1({b}), i.e. $f^{-1}(b) = \{x \in X : f(x) = b\}.$ This set is sometimes called the fiber of b under f. Use of f(A) to denote the image of a subset A⊆X is consistent so long as no subset of the domain is also an element of the domain. In some fields (e.g., in set theory, where ordinals are also sets of ordinals) it is convenient or even necessary to distinguish the two concepts; the customary notation is f[A] for the set { f(x): x ∈ A }. Likewise, some authors use square brackets to avoid confusion between the inverse image and the inverse function. Thus they would write f−1[B] and f−1[b] for the preimage of a set and a singleton. ### Injective and surjective functions[] A function is called injective (or one-to-one, or an injection) if f(a) ≠ f(b) for any two different elements a and b of the domain. It is called surjective (or onto) if f(X) = Y. That is, it is surjective if for every element y in the codomain there is an x in the domain such that f(x) = y. Finally f is called bijective if it is both injective and surjective. This nomenclature was introduced by the Bourbaki group. The above "color-of-the-shape" function is not injective, since two distinct shapes (the red triangle and the red rectangle) are assigned the same value. Moreover, it is not surjective, since the image of the function contains only three, but not all five colors in the codomain. ### Function composition[] Main article: Function composition A composite function g(f(x)) can be visualized as the combination of two "machines". The first takes input x and outputs f(x). The second takes f(x) and outputs g(f(x)). The function composition of two functions takes the output of one function as the input of a second one. More specifically, the composition of f with a function g: Y → Z is the function $g \circ f: X \rightarrow Z$ defined by $(g \circ f)(x) = g(f(x)).$ That is, the value of x is obtained by first applying f to x to obtain y = f(x) and then applying g to y to obtain z = g(y). In the notation $g\circ f$, the function on the right, f, acts first and the function on the left, g acts second, reversing English reading order. The notation can be memorized by reading the notation as "g of f" or "g after f". The composition $g\circ f$ is only defined when the codomain of f is the domain of g. Assuming that, the composition in the opposite order $f\circ g$ need not be defined. 
Even if it is, i.e., if the codomain of f is the codomain of g, it is not in general true that $g \circ f = f \circ g.$ That is, the order of the composition is important. For example, suppose f(x) = x2 and g(x) = x+1. Then g(f(x)) = x2+1, while f(g(x)) = (x+1)2, which is x2+2x+1, a different function. ### Identity function[] Main article: Identity function The unique function over a set X that maps each element to itself is called the identity function for X, and typically denoted by idX. Each set has its own identity function, so the subscript cannot be omitted unless the set can be inferred from context. Under composition, an identity function is "neutral": if f is any function from X to Y, then $\begin{align} f \circ \mathrm{id}_X &= f , \\ \mathrm{id}_Y \circ f &= f . \end{align}$ ### Restrictions and extensions[] Main article: Restriction (mathematics) Informally, a restriction of a function f is the result of trimming its domain. More precisely, if S is any subset of X, the restriction of f to S is the function f|S from S to Y such that f|S(s) = f(s) for all s in S. If g is a restriction of f, then it is said that f is an extension of g. The overriding of f: X → Y by g: W → Y (also called overriding union) is an extension of g denoted as (f ⊕ g): (X ∪ W) → Y. Its graph is the set-theoretical union of the graphs of g and f|X \ W. Thus, it relates any element of the domain of g to its image under g, and any other element of the domain of f to its image under f. Overriding is an associative operation; it has the empty function as an identity element. If f|X ∩ W and g|X ∩ W are pointwise equal (e.g., the domains of f and g are disjoint), then the union of f and g is defined and is equal to their overriding union. This definition agrees with the definition of union for binary relations. ### Inverse function[] Main article: Inverse function An inverse function for f, denoted by f−1, is a function in the opposite direction, from Y to X, satisfying $f \circ f^{-1} = id_Y, f^{-1} \circ f = id_X.$ That is, the two possible compositions of f and f−1 need to be the respective identity maps of X and Y. As a simple example, if f converts a temperature in degrees Celsius C to degrees Fahrenheit F, the function converting degrees Fahrenheit to degrees Celsius would be a suitable f−1. $\begin{align} f(C) &= \frac {9}{5} C + 32 \\ f^{-1}(F) &= \frac {5}{9} (F - 32) \end{align}$ Such an inverse function exists if and only if f is bijective. In this case, f is called invertible. The notation $g \circ f$ (or, in some texts, just $gf$) and f−1 are akin to multiplication and reciprocal notation. With this analogy, identity functions are like the multiplicative identity, 1, and inverse functions are like reciprocals (hence the notation). ## Types of functions[] ### Real-valued functions[] A real-valued function f is one whose codomain is the set of real numbers or a subset thereof. If, in addition, the domain is also a subset of the reals, f is a real valued function of a real variable. The study of such functions is called real analysis. Real-valued functions enjoy so-called pointwise operations. That is, given two functions f, g: X → Y where Y is a subset of the reals (and X is an arbitrary set), their (pointwise) sum f+g and product f ⋅ g are functions with the same domain and codomain. They are defined by the formulas: $\begin{align} (f+g)(x) &= f(x)+g(x) , \\ (f\cdot g)(x) &= f(x) \cdot g(x) . 
\end{align}$ In a similar vein, complex analysis studies functions whose domain and codomain are both the set of complex numbers. In most situations, the domain and codomain are understood from context, and only the relationship between the input and output is given, but if $f(x) = \sqrt{x}$, then in real variables the domain is limited to non-negative numbers.

The following table contains a few particularly important types of real-valued functions:

| Affine functions | Quadratic function | Continuous function | Trigonometric function |
|---|---|---|---|
| An affine function | A quadratic function. | The signum function is not continuous, since it "jumps" at 0. | The sine and cosine function. |
| f(x) = ax + b. | f(x) = ax² + bx + c. | Roughly speaking, a continuous function is one whose graph can be drawn without lifting the pen. | e.g., sin(x), cos(x) |

### Further types of functions

Further information: List of mathematical functions

There are many other special classes of functions that are important to particular branches of mathematics, or particular applications.

## Function spaces

Main article: Function space

The set of all functions from a set X to a set Y is denoted by X → Y, by [X → Y], or by $Y^X$. The latter notation is motivated by the fact that, when X and Y are finite and of size |X| and |Y|, then the number of functions X → Y is $|Y^X| = |Y|^{|X|}$. This is an example of the convention from enumerative combinatorics that provides notations for sets based on their cardinalities. If X is infinite and there is more than one element in Y, then there are uncountably many functions from X to Y, though only countably many of them can be expressed with a formula or algorithm.

### Currying

Main article: Currying

An alternative approach to handling functions with multiple arguments is to transform them into a chain of functions that each takes a single argument. For instance, one can interpret Add(3,5) to mean "first produce a function that adds 3 to its argument, and then apply the 'Add 3' function to 5". This transformation is called currying: Add 3 is curry(Add) applied to 3. There is a bijection between the function spaces $C^{A\times B}$ and $(C^B)^A$. When working with curried functions it is customary to use prefix notation with function application considered left-associative, since juxtaposition of multiple arguments—as in (f x y)—naturally maps to evaluation of a curried function. Conversely, the → and ⟼ symbols are considered to be right-associative, so that curried functions may be defined by a notation such as f: Z → Z → Z = x ⟼ y ⟼ x·y.

## Variants and generalizations

### Alternative definition of a function

The above definition of "a function from X to Y" is generally agreed on; however, there are two different ways a "function" is normally defined where the domain X and codomain Y are not explicitly or implicitly specified. Usually this is not a problem, as the domain and codomain normally will be known. With one definition, saying "the function defined by f(x) = x² on the reals" does not completely specify a function, as the codomain is not specified; in the other, it is a valid definition. In the other definition a function is defined as a set of ordered pairs where each first element only occurs once. The domain is the set of all the first elements of a pair and there is no explicit codomain separate from the image.[7][8] Concepts like surjective have to be refined for such functions, more specifically by saying that a (given) function is surjective on a (given) set iff its image equals that set.
If a function is defined as a set of ordered pairs with no specific codomain, then f: X → Y indicates that f is a function whose domain is X and whose image is a subset of Y.[6] Y may be referred to as the codomain but then any set including the image of f is a valid codomain of f. This is also referred to by saying that "f maps X into Y"[6] In some usages X and Y may subset the ordered pairs, e.g. the function f on the real numbers such that y=x2 when used as in f: [0,4] → [0,4] means the function defined only on the interval [0,2].[9] With the definition of a function as an ordered triple this would always be considered a partial function. An alternative definition of the composite function g(f(x)) defines it for the set of all x in the domain of f such that f(x) is in the domain of g.[10] Thus the real square root of −x2 is a function only defined at 0 where it has the value 0. When the definition of a function by its graph only is used, since the codomain is not defined, the "surjection" must be accompanied with a statement about the set the function maps onto. For example, we might say f maps onto the set of all real numbers. Functions are commonly defined as a type of relation. A relation from X to Y is a set of ordered pairs (x, y) with $x \in X$ and $y \in Y$. A function from X to Y can be described as a relation from X to Y that is left-total and right-unique. However when X and Y are not specified there is a disagreement about the definition of a relation that parallels that for functions. Normally a relation is just defined as a set of ordered pairs and a correspondence is defined as a triple (X, Y, F), however the distinction between the two is often blurred or a relation is never referred to without specifying the two sets. The definition of a function as a triple defines a function as a type of correspondence, whereas the definition of a function as an ordered pair defines a function as a type of relation. Many operations in set theory, such as the power set, have the class of all sets as their domain, and therefore, although they are informally described as functions, they do not fit the set-theoretical definition outlined above, because a class is not necessarily a set. However some definitions of relations and functions define them as classes of pairs rather than sets of pairs and therefore do include the power set as a function.[11] ### Partial and multi-valued functions[] $f(x) = \pm \sqrt x$ is not a function in the proper sense, but a multi-valued function: it assigns to each positive real number x two values: the (positive) square root of x, and $-\sqrt x.$ In some parts of mathematics, including recursion theory and functional analysis, it is convenient to study partial functions in which some values of the domain have no association in the graph; i.e., single-valued relations. For example, the function f such that f(x) = 1/x does not define a value for x = 0, since division by zero is not defined. Hence f is only a partial function from the real line to the real line. The term total function can be used to stress the fact that every element of the domain does appear as the first element of an ordered pair in the graph. In other parts of mathematics, non-single-valued relations are similarly conflated with functions: these are called multivalued functions, with the corresponding term single-valued function for ordinary functions. 
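In programming terms (an illustrative aside, not part of the original article), the distinction can be mirrored by letting a partial function signal "no value" for arguments outside its domain of definition, while a multivalued function returns a set of values. A small Python sketch:

```
import math

def reciprocal(x):
    # partial function: 1/x has no value at x = 0
    if x == 0:
        return None          # no associated output; the relation is undefined here
    return 1.0 / x

def square_roots(x):
    # multivalued "function": assigns to each positive x both of its square roots
    if x < 0:
        return set()         # no real square roots
    if x == 0:
        return {0.0}
    r = math.sqrt(x)
    return {r, -r}

print(reciprocal(4))      # 0.25
print(reciprocal(0))      # None: 1/x is only a partial function on the real line
print(square_roots(9))    # {3.0, -3.0}
```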
### Functions with multiple inputs and outputs

The concept of function can be extended to an object that takes a combination of two (or more) argument values to a single result. This intuitive concept is formalized by a function whose domain is the Cartesian product of two or more sets.

For example, consider the function that associates two integers to their product: f(x, y) = x·y. This function can be defined formally as having domain Z×Z, the set of all integer pairs; codomain Z; and, for graph, the set of all pairs ((x,y), x·y). Note that the first component of any such pair is itself a pair (of integers), while the second component is a single integer. The function value of the pair (x,y) is f((x,y)). However, it is customary to drop one set of parentheses and consider f(x,y) a function of two variables, x and y. Functions of two variables may be plotted on the three-dimensional Cartesian coordinate system as ordered triples of the form (x, y, f(x,y)).

The concept can be further extended by considering a function whose output is expressed as several variables. For example, consider the integer divide function, with domain Z×N and codomain Z×N. The resultant (quotient, remainder) pair is a single value in the codomain seen as a Cartesian product.

#### Binary operations

The familiar binary operations of arithmetic, addition and multiplication, can be viewed as functions from R×R to R. This view is generalized in abstract algebra, where n-ary functions are used to model the operations of arbitrary algebraic structures. For example, an abstract group is defined as a set X and a function f from X×X to X that satisfies certain properties. Traditionally, addition and multiplication are written in the infix notation: x+y and x×y instead of +(x, y) and ×(x, y).

### Functors

The idea of structure-preserving functions, or homomorphisms, led to the abstract notion of morphism, the key concept of category theory. In fact, functions f: X → Y are the morphisms in the category of sets, including the empty set: if the domain X is the empty set, then the subset of X × Y describing the function is necessarily empty too. However, this is still a well-defined function. Such a function is called an empty function. In particular, the identity function of the empty set is defined, a requirement for sets to form a category.

The concept of categorification is an attempt to replace set-theoretic notions by category-theoretic ones. In particular, according to this idea, sets are replaced by categories, while functions between sets are replaced by functors.[12]

## History

Main article: History of the function concept

## Notes

1. The words map or mapping, transformation, correspondence, and operator are among the many that are sometimes used as synonyms for function. Halmos 1970, p. 30.
2. Hamilton, A. G. Numbers, sets, and axioms: the apparatus of mathematics. Cambridge University Press. p. 83. ISBN 0-521-24509-5.
3. Hartley Rogers, Jr (1987). Theory of Recursive Functions and Effective Computation. MIT Press. pp. 1–2. ISBN 0-262-68052-1.
4. Quantities and Units - Part 2: Mathematical signs and symbols to be used in the natural sciences and technology, page 15. ISO 80000-2 (ISO/IEC 2009-12-01).
5. Apostol, Tom (1967). Calculus vol 1. John Wiley. p. 53. ISBN 0-471-00005-1.
6. Tarski, Alfred; Givant, Steven (1987). A formalization of set theory without variables. American Mathematical Society. p. 3. ISBN 0-8218-1041-3.
7. John C. Baez; James Dolan (1998). Categorification. arXiv:math/9802029.

## References

• Bartle, Robert (1967). The Elements of Real Analysis. John Wiley & Sons.
• Bloch, Ethan D. (2011). Proofs and Fundamentals: A First Course in Abstract Mathematics. Springer. ISBN 978-1-4419-7126-5.
• Halmos, Paul R. (1970). Naive Set Theory. Springer-Verlag. ISBN 0-387-90092-6.
• Spivak, Michael (2008). Calculus (4th ed.). Publish or Perish. ISBN 978-0-914098-91-1.

## Source

Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article "Function (mathematics)", available in its original form here: http://en.wikipedia.org/w/index.php?title=Function_(mathematics)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 40, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8839802742004395, "perplexity_flag": "head"}
http://www.sciforums.com/showthread.php?113728-String-theory-is-advanced&s=c09954da4b75de3f96466274572633f9&p=2940989
# Thread: String theory is advanced

1. Originally Posted by Farsight
Because my maths isn't that weak

Really? You know linear algebra, real and complex analysis, tensor calculus, Riemannian geometry, and at least a smattering of group theory and a bit about matrix Lie algebras? Because that's what mainstream theories are defined in terms of, and that's the math you'll need to understand in order to recognise that some new proposal is capable of reproducing all the results of mainstream theories.

Originally Posted by Farsight
and because the papers match the scientific evidence of pair production, electron diffraction, Einstein-de Haas, etc, and tie in with Einstein's E=mc² paper where a radiating body loses mass

But all of these are already accounted for by mainstream theories, among many other things that you and the authors you cite have an annoying habit of ignoring completely.

Originally Posted by Farsight
Because they match the scientific evidence and have an explanatory power which other theories lack.

And the reality here is one of the biggest problems I see with your attitude to science, as well as some of the authors you cite. Suppose I draw a table of what some mainstream theory can explain, like this:

Code:
```
             | H atom | Double slit | Mandel dip | Bell violation | Pair production | Electron mass
-------------|--------|-------------|------------|----------------|-----------------|--------------
Mainstream   |   X    |      X      |     X      |       X        |        X        |
```
This is oversimplified, because the Standard Model can explain a much larger range of observed behaviour than I put in the table. But you get the idea. Now let's say for argument's sake someone comes along and claims that they can explain the electron mass. Then in order to be impressed I need to see it stack up against the mainstream it is competing with like this:

Code:
```
             | H atom | Double slit | Mandel dip | Bell violation | Pair production | Electron mass
-------------|--------|-------------|------------|----------------|-----------------|--------------
Mainstream   |   X    |      X      |     X      |       X        |        X        |
New proposal |   X    |      X      |     X      |       X        |        X        |       X
```
But in my experience, what we tend to see from you are a bunch of disparate proposals that stack up like this:

Code:
```
             | H atom | Double slit | Mandel dip | Bell violation | Pair production | Electron mass
-------------|--------|-------------|------------|----------------|-----------------|--------------
Mainstream   |   X    |      X      |     X      |       X        |        X        |
New proposal |        |             |            |                |        x        |
```
I.e. they recover some result we can already explain, only in less detail, and ignoring everything else we use mainstream theories for. Obviously, that's not impressive.

They don't label spin as "intrinsic" with no classical equivalent, or advance the idea that the electron is pointlike and so create mysteries and end up proposing an unfalsifiable unscientific multiverse.

I've already addressed this with you in the past: logically, there can be no such thing as a theory that explains everything. It is silly to complain that spin or anything else is intrinsic in a theory, unless you have actual evidence that it is not.
At least a few things have to be intrinsic or axiomatic in any theory anyway, and if it is not spin, it will be something else. A lot of people inexperienced with science and physics seem to get this one wrong.

People like AlphaNumeric boot it to pseudoscience without addressing it.

AlphaNumeric is actually doing a very thorough job describing things you and some authors you cite are doing wrong.

2. Originally Posted by Farsight
Pretty much, though use the phrase logically consistent rather than ad-hoc.

You can only say it's logically consistent and not ad hoc if you're working with a set of postulates which you are deriving the implications of. If you're just bolting things together with the argument "They don't contradict one another" that isn't enough. Newtonian gravity and non-relativistic quantum mechanics don't contradict one another, but they aren't any more valid when you put them together.

Originally Posted by Farsight
I didn't. I read that paper along with Qiu-Hong Hu's and understood what they were saying. Remember this thread? What I've done is make such work easy to understand for the layman and offer insight to others.

Who don't always want it. Making pseudoscience more accessible to a layperson, which incidentally you are too, isn't contributing to science.

Originally Posted by Farsight
Oh here we go.

Me stating that it's a fact you don't have the mathematical grounding to understand spherical harmonics might not be something you want to hear, but it's a relevant criticism. Simply dismissing it as insults isn't going to negate the point I raise. You're telling people "Oh this is good, it uses spherical harmonics" when you don't actually understand spherical harmonics. You don't know how to use them on a working level, you don't understand their applications, your only understanding comes from qualitative summaries other people have had to provide you. You lack the ability to properly evaluate any piece of work which uses them. Me saying this is just fact. Unless you would like us all to believe you have sufficient mathematical grounding to work with Sturm-Liouville operators?

Originally Posted by Farsight
I don't "lift" other people's work, I publicise it.

You have obviously 'lifted' Hu's work. You haven't critically evaluated it for accuracy because you don't have the ability to, due to a poor grasp of even the quantitative physics expected of school children. You then incorporate it into your work wholesale. Whether or not you give citations to them, since you obviously do not understand how the work works you can do nothing but mindlessly repeat it. The fact you didn't spot that glaring mistake shows you lack such capabilities, so it's not like I'm making baseless accusations here.

Originally Posted by Farsight
I understand spin, and the harmonics enough for the electron. It's really simple, but you aren't listening.

Come now Farsight, do you expect us to believe you can do such mathematics? Or are you instead saying you understand their 'barest essence' or whatever phrase it is you like to use? If the former then you're dishonest. If the latter then you're just arm waving, saying you understand some vague qualitative concept which you have no evidence for. Now if you were an experimental physicist working with electrons etc then I might think you have justification for saying you understand something of their behaviour beyond myself or others here, but you aren't.
You have no practical experience, you have no theoretical knowledge, you have only your opinion of layperson analogies and simplifications.

Originally Posted by Farsight
No, but others will. It's coming.

You've been making such claims for years and you've gotten no closer. Look at people like Sylwester; he's been doing it for decades. Do you really want to be doing this, claiming amazing insight no one else has on forums, in 1, 2, 5, 10 years' time? You're exactly where you were 5 years ago, only with a lighter wallet after you paid for your self-published book and adverts for it in physics magazines. Even if you aren't going to give up your 'dream', clearly you need a different approach.

Originally Posted by Farsight
No I don't. Light moves, and we use that to define the second and the metre. Then we use that second and the metre to say the speed of light is 299,792,458 m/s. If the light moves slower we still use it to define the second and the metre, and we still use the second and the metre to say the speed of light is 299,792,458 m/s. If we defined our second and our metre some other way the Compton wavelength of the electron would still work out at 4π / c^1½. It isn't numerology, it's ratios.

This just shows how completely non-existent your grasp of such concepts is. This has nothing to do with whether we define a metre by light or by a steel rod; clearly the equation $\frac{m_{p}}{m_{e}} = \frac{\sqrt{c}}{3\pi}$ is nonsense. You obviously need to have it explained again. The left hand side, $\frac{m_{p}}{m_{e}}$, doesn't care what units of mass you use. Kilograms, tons, AMUs, jellybeans. It's a dimensionless quantity and will be around 1850 in any units. The right hand side, $\frac{\sqrt{c}}{3\pi}$, has units of $\sqrt{m/s}$, which in itself is dubious. It depends on the units we use. Now it doesn't matter how you define that length in practice, so the whole "We define metres of light" issue is a completely irrelevant point. I define my units of time to be measured in bobs. I define my units of length to be measured in jeffs. It just so happens that in such units the speed of light is $3 \times 10^{8}$ jeffs per bob. Thus the numerical value of $\sqrt{c}$ is as in SI units. But now I define a dave to be 100 jeffs. In these new units c is measured in daves per bob and the numerical value of $\sqrt{c}$ will be one tenth of what it was in jeffs per bob. So $\sqrt{c}$ has had its value changed. But obviously I haven't changed anything to do with masses, and even if I had, $\frac{m_{p}}{m_{e}}$ wouldn't change. So the equation equating these two expressions is meaningless. You've clearly failed to grasp a very basic but essential concept in how quantities are described in physics. The only time you can start saying things like "Wow, these two numbers are really close to one another, maybe that means something?" is when both numbers are dimensionless. There's nothing special about metres, seconds, kilograms etc; they are all convenient scales humans defined to help quantify the universe. Nature doesn't care what we use to quantify things, so anything with units includes this arbitrary choice. Only when you compute dimensionless quantities can you remove this bias. This is why physicists developed natural units. You construct combinations of dimensional quantities to define dimensionless quantities. It's a way of avoiding this type of mistake. Again, this is something you'd be all too familiar with if you'd spent some time in the past 5+ years actually learning working physics models.
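A minimal numerical sketch of that point (nothing here comes from the paper under discussion; the value of c and the proton-electron mass ratio are the usual approximate ones, and jeffs, bobs and daves are the made-up units above):

Code:
```
import math

# Rough numerical check: 1 jeff = 1 m, 1 bob = 1 s, 1 dave = 100 jeffs.
m_p_over_m_e = 1836.15                    # proton/electron mass ratio, dimensionless

c_jeff_per_bob = 2.998e8                  # speed of light in jeffs per bob (i.e. m/s)
c_dave_per_bob = c_jeff_per_bob / 100.0   # the same speed in daves per bob

print(m_p_over_m_e)                                # 1836.15 in any system of units
print(math.sqrt(c_jeff_per_bob) / (3 * math.pi))   # ~1837, but only in these units
print(math.sqrt(c_dave_per_bob) / (3 * math.pi))   # ~184 once the length unit changes
```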
Unfortunately you've been peddling the same mistaken inconsistent nonsense for all that time. I do hope you have something else to try to justify your work, because if you don't then you really do have absolutely nothing to show for the last 5 years. Out of interest, how many hours and how many pounds have you put into this endeavour of yours? All the forums you posted on, all the "[something] explained" threads, all the books you paid to be printed, all the money for the physics magazine ads. If you'd just invested some time and effort at the start you could have saved so much of that. You could have put it towards a pension, you could have learnt a language, you could have learnt actual science, you could have helped with a child's college fund. You could have avoided squandering it.

Originally Posted by Farsight
That's what harmonics is all about.

Clearly you don't know about harmonics, because that isn't what they are about. Do you realise you're just throwing out words you don't know the meaning of, or do you really think you're saying something insightful?

Originally Posted by Farsight
I've got to go. And I do have some work to do. I'll get back to the remaining physics content of your post later. I'll ignore the ad-hominem content. Meanwhile, try to be more like przyk.

As others have pointed out, I'm not throwing a tirade of abuse at you in some fit of rage. I'm going to the trouble of explaining your mistakes at length. The fact you don't like hearing that you've made a mistake a child should know better of doesn't make it an ad hom, it is just an unpleasant statement of fact. I'm demonstrating that your complete avoidance of quantitative models, both in learning mainstream physics and trying to develop your own 'work', has left you with some fundamental and fatal gaps in your understanding. Part of the reason models known to be false are taught in educational establishments is to help people understand the development of ideas, the types of mistakes made in the past and the ways to avoid them. You have missed out on this critical part of science education and unfortunately it's one of those things where not knowing it often makes you not know you don't know.

Originally Posted by Farsight
Huh? You demand something of me, then you dismiss it as arm-waving pseudoscience?

You make it sound like I dismissed it out of hand. I explained at length (as you like complaining about) why it was nonsense. If you think what you provided answers the thing I've been asking you for years, you're mistaken. I've been asking you to provide one, just one, phenomenon in reality which you're able to model accurately using your work and to show its derivation. What you have provided doesn't come close to that.

Originally Posted by Farsight
I'm not the one being dishonest here. Go look to that scientific evidence I referred to. You can diffract electrons. And annihilate them with positrons to yield photons. How do you think that works? Magic?

Utter non sequitur. Electrons diffracting and electron-positron annihilation are things you haven't been able to show your work can accurately model. Instead the example you gave wasn't really your work and wasn't actually valid physics, for reasons I explained.

Originally Posted by Farsight
Because my maths isn't that weak

I simply don't believe you. If you really believe what you're saying, I can only conclude that you're one of those people who doesn't realise just how much they don't know.
It's sometimes said education is the process of learning you know less and less than you thought. Unless you've been working through dozens of textbooks and lecture notes, you simply do not have sufficient mathematical capabilities to grasp the details of things like quantum field theory or relativity. There's a reason GR and QFT are 3rd, 4th, even later year topics at even the top universities: they rely on a huge platform of more fundamental mathematics. You don't do GR unless you know SR, differential geometry, tensor calculus, linear algebra, vector calculus, PDEs, a smattering of electromagnetism, basic group theory. You don't do SR unless you know linear algebra and vector calculus. You can't do differential geometry till you've done vector calculus, non-Euclidean geometry and tensor calculus. You can't do PDEs until you've done some ODEs and linear algebra, and a smattering of analysis doesn't go amiss. You can't do group theory until you've done some abstract algebra. You can't do.... well, you get the point. Someone only needs to look at the prerequisites university courses in GR or QFT have to see the cascading avalanche of courses they build upon. Most of those courses also allow you to work in things like electrodynamics, fluid mechanics, quantum mechanics, numerical analysis etc.

This shows something else you've shown you don't understand. The same sorts of courses required to be able to do string theory, which basically requires a firm grasp of both the relativity and quantum sides of physics, are also needed for more 'practical' areas of physics and engineering. The same types of problems needed to solve the Schrodinger equation numerically arise in fluid mechanics. An expert in one can contribute to the other. Quantum mechanics can be used to model, via some rather technical transformations, population dynamics in animals. Some of the more crazy mathematics used to prove fundamental results in GR also allows some clever video analysis things to be done. The type of problem I spent about a year looking at during my thesis can arise in feedback control system design! The web of interconnectivity between various areas of mathematical physics is extremely dense, and it's partly the reason a string theory PhD can be viewed as very useful even in the eyes of some employers outside of theoretical physics research. But this interconnectivity also means that there's an awful lot of grunt work which needs to be done before you get a strong enough mathematical capability to tackle the sorts of problems people find 'cool'.

Of course this is all a bit of an aside; I felt it was important to highlight just how long a slog it can be to get to what would be considered a 'non-weak' mathematical capability. Even getting a degree doesn't get you very far. And you're not even there. So when you say your maths isn't necessarily weak, I think you don't realise the scale at which actual mathematical physicists work. If someone says to me "I'm not weak at PDEs" then I'd expect them to have a working understanding of basic PDE theory, be able to solve many classic type problems, know about common methods and be able to pass a reasonable exam on them, typically at least 2nd, more often 3rd or 4th year level. Since your mathematical capabilities are not even sufficient to notice the glaring mistake you and Hu are peddling, I'm sorry (well, not really) to say that your evaluation of your maths skills is just as rose-tinted as your evaluation of your physics skills (or lack thereof).
And I'll close with a piece of advice: Don't try to argue about the units thing. Just put up your hands, say "I was wrong" and walk away. If you don't, then you're going to be arguing about something so basic that anyone here who remembers their high school/secondary school physics lessons will know you're wrong. At least when you talk about the stress-energy tensor only a few people have worked with it. In the case of units a much wider group of people will see your mistake and your unwillingness to swallow your pride and accept the calamitous mistake you've made.

3. Will nanotechnology and quantum computing allow us to put String Theory to the test? As I am new to this forum, can anyone let me know about this?

4. No, they won't. Quantum computing is well within the realms of standard quantum mechanics; the problems are mostly technical rather than theoretical. Nanotechnology has many aspects not yet understood even theoretically, due to issues with things like quantum chemistry calculations, but it would serve as an interesting testing ground for several theoretical concepts string theory also includes. For example, graphene is an essential component of many nanotechnologies and to describe the behaviour of electrons in it you use a 2-dimensional field theory. 2-dimensional field theories come up a lot in string theory, so there's a lot of crossover in terms of mathematical machinery. That's an example of how doing some very abstract mathematical research in string theory can prepare someone for doing more 'down to earth' physics without having to retrain. It's an example of how Farsight's regular claim that string theory PhDs are black marks on CVs and useless is patently false.

5. Where was I?

Originally Posted by AlphaNumeric
... c is a speed, it has units of length per unit time. Since it isn't dimensionless its value changes when you change units. For example if you work in units of length 'light seconds' and units of time 'seconds' then c=1. If you work in units of furlongs and weeks then c = $9 \times 10^{11}$ (ask Google, it can change well known constants to be in almost any units you can think of). The whole $c = 3 \times 10^{8}$ is because we use metres and seconds. Neither of them has any fundamental meaning; for example 1 metre was originally defined as one ten-millionth of the distance from the North Pole to the Equator via Paris. Hardly a universal concept, and thus anything which relies on using metres as your units is going to reduce to laughable numerology. This is why real physicists remove all units at the start so as not to fall into such a trap.

Try working out the fine structure constant $\alpha = \frac{e^2}{(4 \pi \varepsilon_0)\hbar c}$ using furlongs per week. If you change c you have to change everything else to get back to the 1/137 ratio. NB: it's a running constant anyhow, tending to 1/128. It isn't constant.

Originally Posted by AlphaNumeric
This invalidates the notion of c^½ as it has gibberish units. Of course if you took some other speed v and considered (c/v)^½ that would be fine, it really is a dimensionless quantity not dependent on your choice of units. So already the thing you have called 'stunning' is invalidated. The paper, and you, equate c^½ to 3pi r. There's no 'n' in that to have any unit cancelling role; you've equated something with 'units' of sqrt(metres) per sqrt(seconds) to something with no units. Obviously a mass ratio doesn't care what units you are in but c^½ does.
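As a quick check of the numbers in this exchange, a short sketch with hand-typed approximate constants (the furlong and week conversions, 201.168 m and 604,800 s, are assumed for the illustration; nothing below comes from the papers being discussed):

Code:
```
import math

c_m_per_s = 2.99792458e8                  # speed of light, SI value
furlong_m, week_s = 201.168, 604800.0     # 1 furlong in metres, 1 week in seconds

# The same speed expressed in furlongs per week is roughly 9 x 10^11.
c_fur_per_week = c_m_per_s * week_s / furlong_m
print(c_fur_per_week)                     # ~9.0e11
print(math.sqrt(c_m_per_s))               # ~17,315: numerical value of sqrt(c) in SI units
print(math.sqrt(c_fur_per_week))          # ~950,000: a completely different number

# A genuinely dimensionless combination gives the same number in any consistent units:
e, eps0, hbar = 1.602176634e-19, 8.8541878128e-12, 1.054571817e-34
alpha = e**2 / (4 * math.pi * eps0 * hbar * c_m_per_s)
print(1 / alpha)                          # ~137.036
```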
If I work in furlongs per week then c^½ = $\sqrt{9 \times 10^{11}} \approx 948683$, which is quite different from the $\sqrt{3 \times 10^{8}} \approx 17{,}320$ your numerology needs.

You could be similarly dismissive about the e² in the fine structure constant, and say "charge squared is nonsense". It isn't, it's there because of the interaction between one charged particle and another. Ditto for $E = m c^2$. The c squared isn't nonsense. The power is there in the wavelength expression because in the electron it's light interacting with itself, and the electron has spin half. Inflate the torus to a quasi-spherical apple and think of it as a 4 pi sphere being swept by two concurrent orthogonal rotations.

Originally Posted by AlphaNumeric
Then there's the other equation, λ = 4π / n c^1½. This at least makes some effort to deal with units by saying n is some dimensional quantity with numerical value 1. Except its value would have to change depending on the units, so if I changed to furlongs and weeks the numerical value of n would have to change, just as the numerical value of the speed of light would. Of course if we weren't working in metres and seconds you could still make λ = 4π / n c^1½ valid but obviously you'd be putting in some unpleasant value for n and then it would be obviously just a twiddle factor. However, you and the author have kidded yourselves it's okay because the value is approximately 1 in metres and seconds. Since n becomes something unpleasant in any other units it is obvious this is not valid.

See what I said yesterday. Light moves as fast as it moves, and we use the motion of light to derive our units of length and time. It doesn't matter how fast it moves, we use the motion of light to derive the second and the metre and then use them to measure the speed of light. So we always get the same value.

Originally Posted by AlphaNumeric
Thus the conclusion is this is nothing but numerology, unless you wish to claim that metres and seconds are somehow fundamental scales? I hope even you aren't that daft.

I've already said metres and seconds are derived from the motion of light. That's the "fundamental scale". Think it through.

Originally Posted by AlphaNumeric
This highlights something you have even had the hypocrisy to say to other people, the quote by Feynman that the easiest person to fool is yourself. You called those results 'stunning' because you believe them to show how the qualitative concepts you find palatable lead to something physically insightful. You have fooled yourself. And the reason you've been able to fool yourself is something I've already commented on, you lack the experience and working understanding of someone who actually does physics. Immediately on seeing something like c^½ = 3pi r I think "Something is wrong there", it's an automatic response...

Yes, it is automatic. If you were thinking about this you'd be looking at the evidence I've referred to and saying "Farsight, isn't there an n in r = c^½ / 3π?". You don't do that, you dismiss everything.

Originally Posted by AlphaNumeric
As just explained, you are looking for anything to give justification to your whims and personal opinion, including to take pot shots at string theory. You are failing to do the necessary fact checking beforehand. Whether you're doing this because you're incapable of doing even the most rudimentary secondary school maths and physics or whether it's because you're just dishonest and intellectually lazy I don't know.
The clear fact is that you lack the capabilities and understanding to properly evaluate even very basic physics. Instead you're going with what superficial things sound good to you, lifting equations you don't understand from sources which you haven't checked the accuracy of.

And you're justifying your dismissal with abuse.

Originally Posted by AlphaNumeric
If you're going to be saying "My work produces this" then you really should be checking whether what you've lifted from someone else's work is accurate. If you're unable to understand it and check it yourself you have no business claiming your work produces such outputs. Since you didn't check it you obviously haven't been able to derive it from some set of base postulates your work is built from. By mindlessly lifting someone else's work without checking it you're showing you're not very bright, both in terms of mathematical physics capabilities to do the calculations yourself and in terms of just lifting other work wholesale.

Like this. It harms your case.

Originally Posted by AlphaNumeric
Checking the calculations for yourself is something any scientific researcher would get into the habit of doing. I can spend days working through a paper I don't understand, trying to get from equation 1 to equation 2, then to 3 etc, even if all I'm ultimately going to do is make use of the final result. If I don't understand the conditions on the derivation of the result, the assumptions it relies on, I have no business using it because it'll be building a house on non-existent foundations. That's precisely what you've done. You were peddling precisely that paper 5 years ago and in all that time you haven't checked the calculations? Any honest competent researcher would consider that pretty disgraceful.

The foundations are the hard scientific evidence. Pair production, electron diffraction, Einstein-de Haas, and so on. How much evidence do you need before you'll even consider the idea that the electron is a standing wave structure? No evidence will suffice, will it? It's all just "hand waving" and "numerology" to you.

Originally Posted by AlphaNumeric
No, it doesn't because it offers something more than numerology. String theory research is done precisely because we can't just wave our arms and say "Gravity + quantum mechanics is solved", the devil is in the details.

Research? It's "research" that has gone on for decades and consumed thousands of man-years of effort for diddly squat. That isn't research, that's intellectual arrogance, and it has cost physics dear.

Originally Posted by AlphaNumeric
And you know damn well it's provided more than nothing. I know you know because I've told you on many occasions some examples. Earlier in this thread you LIED and said people like Prom and I weren't willing to talk about string theory for the reason it provides nothing. Let this be the final time I need to remind you you're wrong on that.

List the testable predictions of string theory.

Originally Posted by AlphaNumeric
String theory gives first order quantum gravity corrections to general relativity. It provides a single framework for cosmology, gravity and quantum field theory. It allows study and accurate modelling of condensed matter physics like quark-gluon plasmas. It's the origin of MHV methods, which allow a huge reduction in gluon-gluon scattering calculations. It gives meson spectra in a non-perturbative domain, something we can't do in QCD.
And all of those are quantitative results, logically derived from the initial starting assumptions of string theory, not just the product of arm waving and laughable numerology. The fact you are so impressed by 2 pieces of numerology but ignore being able to study strongly coupled nucleon plasmas shows that you aren't being honest in your evaluations. You're looking for a reason, any reason, to dismiss things in the mainstream because you need to try to convince people to look at your work. You have lied repeatedly about string theory and string theorists. The number of times you've repeated the same refuted nonsense is ridiculous. The only explanations are either you have some issue with medium-term memory or you're knowingly lying.

I'm not lying, I'm standing up for physics.

Originally Posted by AlphaNumeric
I think it's a crime that someone can spend most of a decade talking about physics and not be able to spot a mistake so basic a child could see it. Clearly the work you linked to is worthy of rejection. Unfortunately you couldn't see it, you even trumpeted something fundamentally flawed as 'stunning'. You weave this narrative about how string theory is dying, it'll be a black mark on people's CVs, it doesn't accomplish anything, but the foundations you use to build this story are a mixture of ignorance and lies. You obviously lack the critical evaluation skills to distinguish valid from vapid. How you haven't picked up some knowledge just by pure osmosis I don't know. Perhaps you don't want to, lest you realise how far short of viable your work falls?

I've learned an awful lot.

Originally Posted by AlphaNumeric
This post isn't 'a rage'. I'm perfectly calm and relaxed, I'm just explaining myself fully. If your attention span isn't long enough or you don't want to hear criticism then so be it.

I'm perfectly willing to hear criticism, I'm here aren't I? But your criticism is based upon dismissal of scientific evidence and is way too personal. You should try to be more objective.

Originally Posted by AlphaNumeric
The only thing I find somewhat annoying is how completely unrealistic and warped your views are. For years you've been trumpeting that paper and not seen such a basic mistake. I simply cannot believe you're that bad at basic maths and physics. Rejecting that paper doesn't require rage, it requires basic understanding of physics.

I'm far more grounded in reality than you are. I look at the evidence and say: What's going on in pair production and annihilation? Why can we diffract an electron? What's the significance of its Compton wavelength and its spin half? You don't give a fig about any of that, and when I try to get you to look at it you start being abusive.

Originally Posted by AlphaNumeric
The reason I type long posts to you is that I have a lot to say to you. You've spent so much of your time and money on stroking your ego and accomplished nothing. Someone needs to explain to you the reality of your situation, to explain just how mistaken and poorly informed you are. If you're reading this and imagining me yelling, don't. Imagine me instead saying it in a slightly soft voice as someone trying to explain a mistake to a 5 year old. Slow and patient, having to reiterate so many things.

That's how I speak to you.

Originally Posted by AlphaNumeric
*sigh* See this is the problem, you have this completely skewed view of the world in your head.

I don't. You do. You elevate mathematical abstraction above empirical evidence.
Originally Posted by AlphaNumeric
What do I have to fear? What do I have to be jealous over?

You don't have to fear anything. But you sound very much as if you're afraid of being shown to be wrong.

Originally Posted by AlphaNumeric
I have a degree, masters and PhD in subjects I love. I have a job doing research for a great employer, I'm even head of the research group. I get to work on things I couldn't have dreamed about even 5 years ago, during my PhD. I have respect from coworkers and employers for the novelty and volume of my research.

I'm genuinely happy for you. Really. I know too many people with physics qualifications who aren't in physics any more.

Originally Posted by AlphaNumeric
The group I work in rejects 99+% of applicants, all of whom have doctorates. Clearly others have evaluated my capabilities and found them more than sufficient. So please, what do I have to be jealous of you?

My intelligence, and my ability to explain things that you can't.

Originally Posted by AlphaNumeric
You can't even do GCSE physics. You have not had any work accepted for publication in a reputable journal. You pay to try to bring your work to the attention of actual researchers, while they pay me.

But I'm not abusive.

Originally Posted by AlphaNumeric
I am not dismissing you out of jealousy or anger, I'm giving the reasons why your claims are laughable. Yes, laughable. You don't insight anger or jealousy, you are literally a joke to myself and a few of my friends. We laugh out loud at some of the things you say.

LOL, it's incite. Bit of a Freudian slip eh? Tell your friends about that will you? And tell them I said this: Farsight is my name, insight is my game. LOL! I like that! Oh, and tell your friends I said this too: I'll be having the last laugh.

Originally Posted by AlphaNumeric
Honestly, there is nothing at all maths and physics related where I have any reason to be jealous or angry about in regards to you. Honestly, I can't think of anything. Please enlighten me as to what you think I have to be jealous or angry about. I think the fact you automatically assume that because I'm able to explain myself in detail that I'm on a 'tirade' says more about you than you realise. Besides, I just explained why what you posted was pseudoscience, nothing but numerology. I didn't make threats beyond the duties I have as a moderator to make people answer questions when they are asked relevant ones and to stop trolling when I see it. I really do think you need to get past this view you have of other people. You aren't perceived as a threat, you're perceived as a joke. Unfortunately that joke has worn thin [url=http://www.physforum.com/index.php?showtopic=11824&st=15]after 5+ years[/url]. How you keep going, despite all the rejections, pointing and laughing and explanations of your mistakes, is a mystery and not in a good way.

What explanations of my mistakes? All you've ever done is find some specious reason to dismiss all the scientific evidence and logic. You're fooling yourself that you haven't.

Originally Posted by AlphaNumeric
A rational person, an intellectually honest and grounded person, wouldn't be acting as you are.

I'm rational, honest, and grounded. And I know that shifting conviction is like shifting a tooth.

Originally Posted by AlphaNumeric
This constant "Oh you are all just jealous/threatened, I'm so farsighted!" thing is, frankly, worrying.

No it isn't. My tenacity is worrying. The evidence is worrying. Now stop being abusive and stick to the physics.
Originally Posted by AlphaNumeric
Originally Posted by Farsight
OK. So, what are strings made of?
I don't know. I don't pretend to have answers I don't have. My inability to answer that question doesn't negate what I've just said or the numerology nature of what you posted. Even if string theory were killed tomorrow your work would still be unscientific nonsense.

Not so. Here's why: What are electrons made of?

Originally Posted by AlphaNumeric
Vern has already demonstrated he has no problems making claims he can't back up. More arm waving with no substance. If you see in him another kindred spirit fine. Perhaps you can band together and start your own journal.

Vern's a good bloke. He's honest, he's sincere, he's selfless, and he doesn't go round being abusive. Whilst he and I don't agree about all the details, I think that in terms of the bigger picture, he's right, and his name will go down in the history books.

6. Originally Posted by AlphaNumeric
...That's an example of how abstract mathematical research in string theory can prepare someone for doing more 'down to earth' physics without having to retrain. It's an example of how Farsight's regular claim string theory PhDs are black marks on CVs and useless is patently false.

I didn't say that. A physics PhD isn't a black mark on a CV. What I said is that somebody looking for a research post in physics would be better off doing something other than a string theory PhD. If it was, say, phenomenology, they'd increase their chances of getting a post. That wasn't the situation five years ago. See Woit's blog re hirings.

7. Originally Posted by przyk
Really? You know linear algebra, real and complex analysis, tensor calculus, Riemannian geometry, and at least a smattering of group theory and a bit about matrix Lie algebras? Because that's what mainstream theories are defined in terms of, and that's the math you'll need to understand in order to recognise that some new proposal is capable of reproducing all the results of mainstream theories.

No, I don't know all that, but I'm no dummy. My formal maths education goes past A level and there's the internet if I need to look something up. My focus however is on the scientific evidence.

Originally Posted by przyk
Originally Posted by Farsight
and because the papers match the scientific evidence of pair production, electron diffraction, Einstein-de Haas, etc, and tie in with Einstein's E=mc² paper where a radiating body loses mass.
But all of these are already accounted for by mainstream theories, among many other things that you and the authors you cite have an annoying habit of ignoring completely.

Not satisfactorily. If you dispute that, try describing what happens in pair production. How does an electromagnetic wave transform into two particles with mass and charge that can be diffracted? In your own time.

Originally Posted by przyk
And the reality here is one of the biggest problems I see with your attitude to science, as well as some of the authors you cite. Suppose I draw a table of what some mainstream theory can explain, like this:

Code:
```
             | H atom | Double slit | Mandel dip | Bell violation | Pair production | Electron mass
-------------|--------|-------------|------------|----------------|-----------------|--------------
Mainstream   |   X    |      X      |     X      |       X        |        X        |
```
This is oversimplified, because the Standard Model can explain a much larger range of observed behaviour than I put in the table. But you get the idea.

Sure.
But can you really explain the double slit experiment? Without resorting to a multiverse or to a godlike act of observation? Magic and mysticism are not allowed.

Originally Posted by przyk
Now let's say for argument's sake someone comes along and claims that they can explain the electron mass. Then in order to be impressed I need to see it stack up against the mainstream it is competing with like this:

Code:
```
             | H atom | Double slit | Mandel dip | Bell violation | Pair production | Electron mass
-------------|--------|-------------|------------|----------------|-----------------|--------------
Mainstream   |   X    |      X      |     X      |       X        |        X        |
New proposal |   X    |      X      |     X      |       X        |        X        |       X
```
But in my experience, what we tend to see from you are a bunch of disparate proposals that stack up like this:

Code:
```
             | H atom | Double slit | Mandel dip | Bell violation | Pair production | Electron mass
-------------|--------|-------------|------------|----------------|-----------------|--------------
Mainstream   |   X    |      X      |     X      |       X        |        X        |
New proposal |        |             |            |                |        x        |
```
I.e. they recover some result we can already explain, only in less detail, and ignoring everything else we use mainstream theories for. Obviously, that's not impressive.

You're applying the wrong logic, przyk. If somebody comes up with something that ticks all the boxes, you'll be impressed, but then you'll be out of a job. Nobody will need you any more. And nobody will trust you any more.

Originally Posted by przyk
I've already addressed this with you in the past: logically, there can be no such thing as a theory that explains everything.

But there can be a theory that combines, say, optics and gravity. Light bends.

Originally Posted by przyk
It is silly to complain that spin or anything else is intrinsic in a theory, unless you have actual evidence that it is not.

I do. It's the Einstein-de Haas effect. It "demonstrates that spin angular momentum is indeed of the same nature as the angular momentum of rotating bodies as conceived in classical mechanics". Imagine a glass clock. It features a clockwise rotation. But go round the back and it looks anticlockwise. Now spin the clock like a coin using your left hand. You can no longer say whether the first rotation is clockwise or anticlockwise. But you can say that this spin is different to what you get if you spin the clock like a coin with your right hand. Replace the clock with a sphere of light, and then replace the sphere with a fat torus like an apple for the spin ½. Let's say you're representative of the mainstream: you don't need to be impressed by that, you need to be interested in it for physics' sake.

Originally Posted by przyk
At least a few things have to be intrinsic or axiomatic in any theory anyway, and if it is not spin, it will be something else.

The ideal is to minimise the axioms. For example one of the axioms of the original SR was the constant speed of light. A better version of SR would do away with this, and provide a clear explanation of why we always measure the local speed of light to be the same. I like The Other Meaning of Special Relativity in this respect, and the way it links to quantum mechanics via the wave nature of matter. We can diffract electrons, and neutrons. It isn't pseudoscience. Nor is pair production, and nor is the orbital. See this bit from the wiki article: "1. The electrons do not orbit the nucleus in the sense of a planet orbiting the sun, but instead exist as standing waves.
The lowest possible energy an electron can take is therefore analogous to the fundamental frequency of a wave on a string. Higher energy states are then similar to harmonics of the fundamental frequency".

Originally Posted by przyk
A lot of people inexperienced with science and physics seem to get this one wrong.

I'd venture to say that a lot of people in physics don't examine their axioms closely enough.

Originally Posted by przyk
AlphaNumeric is actually doing a very thorough job describing things you and some authors you cite are doing wrong.

No he isn't. He's dismissing patent evidence and piling on the abuse. I'm the one who refers to scientific evidence and describes things that aren't axiomatic after all. All he's doing is dissing the competition. I have to go. One last thing: much as I like talking about the things above, we really ought to stay on topic.

8. Originally Posted by Farsight
My focus however is on the scientific evidence.

And like it or not, this is where the average physicist leaves you in the dust. You boast about referring to a handful of experiments. But I know mainstream theories. I can boast knowing a quantitative summary of countless thousands of experiments. We're not even on the same playing field in this regard. I have orders of magnitude more to work with than you do. Any physicist does. This, by the way, is a tipoff that many of the authors you cite are amateurs who don't really understand what they should be doing. They shouldn't be trying to explain one or two experiments, because there are so many of those that I'll always be able to point out the vast majority of experiments and behaviour that they haven't dealt with at all. What they should be doing is trying to recover mainstream theories in their entirety, as approximations. Then they'd automatically have explained most of the history of experimental physics in one fell swoop.

Not satisfactorily. If you dispute that, try describing what happens in pair production. How does an electromagnetic wave transform into two particles with mass and charge that can be diffracted? In your own time.

How is that "not satisfactory"? I could criticise any theory you or anyone else could come up with in exactly the same way. Ultimately, every physical theory is just a description of what's going on. The point is just to come up with as condensed a description (i.e. fewest axioms) as possible, and quantum physics does exceedingly well in this regard.

Sure. But can you really explain the double slit experiment?

To the extent that anything can be explained, quantum physics already does this.

You're applying the wrong logic, przyk. If somebody comes up with something that ticks all the boxes, you'll be impressed, but then you'll be out of a job. Nobody will need you any more. And nobody will trust you any more.

What? If my job depended on not all the boxes being ticked (it doesn't) and someone ticked all the boxes in one go, then I damn well should be out of a job.

It is silly to complain that spin or anything else is intrinsic in a theory, unless you have actual evidence that it is not. I do.

No you don't. The Einstein-de Haas effect shows that spin is a type of angular momentum in the sense that it contributes to the angular momentum conservation law. Nothing more. I went through this with you in the last few pages of a previous thread, culminating in this post.

The ideal is to minimise the axioms.

That's only half the story.
The ideal is to minimise the axioms while still being able to explain at least as many phenomena.

For example one of the axioms of the original SR was the constant speed of light. A better version of SR would do away with this, and provide a clear explanation of why we always measure the local speed of light to be the same.

It's only an improvement if a) the explanation actually works and b) the explanation needs fewer axioms than the one it is replacing. And in this case you're talking about doing better than one axiom.

I like The Other Meaning of Special Relativity in this respect, and the way it links to quantum mechanics via the wave nature of matter.

I've already rebutted that paper a couple of times, specifically about half way through this post and in the discussion following this post. If you were referring to the idea of everything being made of waves, then that doesn't imply invariance of c because not all wave equations are Lorentz invariant.

I'd venture to say that a lot of people in physics don't examine their axioms closely enough.

You can't have your cake and eat it too. I've seen you accuse theoretical physicists of basically being mathematicians before. If there's one thing they're going to grasp well, it's the logical and axiomatic structure of their theories.

No he isn't. He's dismissing patent evidence and piling on the abuse. I'm the one who refers to scientific evidence and describes things that aren't axiomatic after all.

AlphaNumeric is doing an excellent job explaining why your explanations aren't actually as good as you make them out to be.

9. Originally Posted by Farsight
Try working out the fine structure constant $\alpha = \frac{e^2}{(4 \pi \varepsilon_0)\hbar c}$ using furlongs per week. If you change c you have to change everything else to get back to the 1/137 ratio. NB: it's a running constant anyhow, tending to 1/128. It isn't constant.

Firstly the fine structure constant is a dimensionless object, so it doesn't matter whether you use metres or furlongs. That's why physicists use it, it doesn't matter the units. As for running, that's to do with the energy scale and something entirely different. Remember Farsight, you aren't going to be able to just throw out buzzwords and get away with it. Some of us paid attention in school. Besides, none of that negates the fundamental flaw in your 'stunning' result I pointed out. You provided something which is dimensionless, the proton-electron mass ratio, being equal to something which has units, a multiple of the square root of c. This is flat out wrong. The fact you don't grasp this shows just how poor your physics understanding is. For all your talk of understanding fundamental concepts you couldn't grasp something a child is taught!

Originally Posted by Farsight
You could be similarly dismissive about the e² in the fine structure constant, and say "charge squared is nonsense".

No, I couldn't. Well done on not understanding.

Originally Posted by Farsight
See what I said yesterday. Light moves as fast as it moves, and we use the motion of light to derive our units of length and time. It doesn't matter how fast it moves, we use the motion of light to derive the second and the metre and then use them to measure the speed of light. So we always get the same value.

You still don't get it. It doesn't matter how you measure the length scale in practice, the fact remains you cannot equate something without units to something with units. It's meaningless. It's like saying 5 seconds = 12. 12 what?
To make sense it would have to be 12 units of time. You can say 1 minute = 60 seconds because they both involve units of time. You cannot say 60 seconds = 60. If I change units of time from seconds to minutes such an equation would become 1 minute = 60. Demonstrably nonsensical. I told you, you should just walk away but you couldn't help yourself. Originally Posted by Farsight I've already said metres and seconds are derived from the motion of light. That's the "fundamental scale". Think it through. Wow, you really think metres and seconds are fundamental scales? Wow.... Originally Posted by Farsight Yes, it is automatic. If you were thinking about this you'd be looking at the evidence I've referred to and saying "Farsight, isn't there an n in r = c^½ / 3π ?". You don't do that, you dismiss everything. Except there isn't an n. And if there is it is just a twiddle factor designed to hide numerology, as Rpenner has explained. Originally Posted by Farsight And you're justifying your dismissal with abuse. No, I justified it by explaining why you were wrong. The fact you want an excuse to avoid facing up to your mistake doesn't negate the mistake. Originally Posted by Farsight Like this. It harms your case. No, it doesn't hurt my case. If all I gave was abuse then it would but I explain why you are wrong and doing nothing but numerology. I note that nowhere in your lengthy reply do you actually address that. Originally Posted by Farsight The foundations are the hard scientific evidence. Pair production, electron diffraction, Einstein-de Haas, and so on. How much evidence do you need before you'll even consider the idea that the electron is a standing wave structure? No evidence will suffice, will it? It's all just "hand waving" and "numerology" to you. It's demonstrably numerology. This isn't opinion, I've shown it! Originally Posted by Farsight Research? It's "research" that has gone on for decades and consumed thousands of man-years of effort for diddly squat. That isn't research, that's intellectual arrogance, and it has cost physics dear. What you mean to say is physics isn't going in the direction you deem appropriate. But then you've been shown to be a terrible judge of what is valid physics. Originally Posted by Farsight List the testable predictions of string theory. The entirety of gravity. It's also provided unparalleled tools into the modelling and understanding of meson spectral structures and multi-gluon interactions. I'd compile a list of things your work can do but currently it's utterly empty. In fact if you put forth those equations then you have shown you're doing nothing but numerology, pretty much falsifying your work. Thanks. Originally Posted by Farsight I've learned an awful lot. And yet you appear to be unable to understand stuff children know. Originally Posted by Farsight I'm perfectly willing to hear criticism, I'm here aren't I? But your criticism is based upon dismissal of scientific evidence and is way too personal. You should try to be more objective. I'm able to be both objective and give my opinion. I ask hacks to provide something formal because the evaluation of something formal can be done without needing to have personal opinion input. An equation is consistent or not. A quantitative prediction is either accurate or it is not. A model is either provided or it is not. Unfortunately you provide nothing of any quantitative value, seeing as numerology is not valid science. As such we have nothing to discuss but your opinion on things.
Your opinion about the electron. Your opinion about photons. Your opinion about quantum phenomena. How can I be entirely objective when you can't offer anything but opinion? Originally Posted by Farsight I'm far more grounded in reality than you are. Which of us produces viable solutions to real world physics problems and gets paid to do it by large multinational corporations and industries? I forget..... Originally Posted by Farsight I look at the evidence and say What's going on in pair production and annihilation? Why can we diffract an electron? What's the significance of its Compton wavelength and its spin half?. You don't give a fig about any of that, and when I try to get you to look at it you start being abusive. I do give a 'fig' about it. I care whether someone can offer more than just opinion. Consider pair production and annihilation. You have no hands on experience with that experimentally and you have no understanding of any quantitative model which works. As such your understanding can only come through the filter of layperson explanations and simplified analogies others have provided you. You have no access to or understanding of quantitative details. Yet you try to tell those of us who do that you understand it more than anyone else in the world. You claim to understand electromagnetism more than Dirac, the man who got a Nobel Prize for his development of quantum mechanics and was instrumental in developing quantum electrodynamics, the most accurate and tested model of physical phenomena ever devised by human thought. And you claim you are more grounded in reality? You can't even get units right! I make comments about your attitude and your mindset because you offer nothing else to talk about. You offer no working models for me to evaluate. You offer no quantitative predictions derived clearly and logically from postulates. You just have your opinion. Thus when I dismiss your opinion, with explanation, you take it as a personal attack. I'm sorry that you haven't provided anything but opinion and thus a rejection of your 'work' is a rejection of your views but that's your fault, not mine. Originally Posted by Farsight That's how I speak to you. Except I'm able to demonstrate a working understanding and others, independently, verify my capabilities. You simply assume your views are worth listening to. Originally Posted by Farsight I don't. You do. You elevate mathematical abstraction above empirical evidence. Remember how I said you invent this rose tinted view of the world and you seem to live within it? This is an example. The fact I value mathematical abstraction as a way to make precise predictions and models doesn't mean I am blind to all else. You keep complaining string theory makes no testable predictions but it's the formal mathematics which leads to quantitative testable predictions. And that is why you don't have any, you have an extremely poor grasp of mathematics and its application to physics. Seriously, I don't think you could pass an A Level exam in it. Originally Posted by Farsight You don't have to fear anything. But you sound very much as if you're afraid of being shown to be wrong. I like to be shown to be wrong. It's when I learn the most. Intellectual sparring with people is quite enjoyable, it's part of the reason I like my job and respect people I work with, they can stand up to having their ideas challenged and aren't afraid to return the favour. Much as you may wish to think you intimidate people, you don't.
I suspect you realise this, deep down, given comments like how you could beat people at arm wrestling you once made. It's a "If I can't intimidate you intellectually I'll do it physically" mentality. I don't find you intellectually intimidating, I find you comical. I really do wonder if you're going through some mid life existential crisis and this "I've done stuff worth 4 Nobel Prizes!" thing is a way of convincing yourself you matter in the grand scheme of things. Originally Posted by Farsight My intelligence, and my ability to explain things that you can't. Sorry, I don't see you as particularly intelligent. You might be slightly more intelligent than the average person on the street but nothing you've ever done makes me think you're particularly smart. Originally Posted by Farsight But I'm not abusive. Do you think I treat you like I treat everyone? You and I have been around the block enough times that I deal with you in a particular way, knowing how you respond (or rather don't) to patient explanations alone. If I need to push a button or two with you to get a response so be it. I'm hardly calling for you to be killed or calling you dirty words, I just don't think you're particularly bright and I don't think you have a rational view of yourself and others. Originally Posted by Farsight LOL, it's incite. Bit of a Freudian slip eh? Tell your friends about that will you? And tell them I said this: Farsight is my name, insight is my game. LOL! I like that! Oh, and tell your friends I said this too: I'll be having the last laugh. Now the question is, are you trying to put on a brave face, realising that people laugh at you, or do you really believe what you just said? Like I said, people laugh at you. Originally Posted by Farsight No it isn't. My tenacity is worrying. The evidence is worrying. Now stop being abusive and stick to the physics. I think when you come out with "and stick with the physics" it's code for "I can't come up with any retorts, let's change the subject". Originally Posted by Farsight Not so. Here's why: What are electrons made of? This line you're taking reminds me of the theist "Where did everything come from? Can't answer? God did it!" line of argument. The fact you can offer a completely unjustified vapid answer to something someone else is honest enough to say "I don't know" to doesn't make your answer valid. It just makes you dishonest when you pretend someone else's inability to answer somehow elevates your random guess. Originally Posted by Farsight Vern's a good bloke. He's honest, he's sincere, he's selfless, and he doesn't go round being abusive. Whilst he and I don't agree about all the details, I think that in terms of the bigger picture, he's right, and his name will go down in the history books. Is that how you see yourself? Going down in the history books? That's very telling. It's an historical fact that those who go looking for fame, particularly in science, rarely find it. Often the biggest breakthroughs are made by those who come upon them unexpectedly, working quietly and consistently. This shows how you don't really have an agenda of truth, you have an agenda of personal glory. How sad. 10. Wow, dimensional analysis. I can't remember what it was like not to know how to do that, anymore. 11. Originally Posted by Farsight OK. So, what are strings made of? They are made of something extra special. And it too is a reality. 12. Strings are made out of classical objects. 13. Originally Posted by dummy_ Strings are made out of classical objects.
Braided fishing line? Or perhaps turtles? 14. Stringium.

sciforums.com
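The one concrete quantitative point in the exchange above is that the fine-structure constant is a pure number, so its value near 1/137 does not depend on the units chosen for length or time, unlike the numerical value of c itself. A minimal check of that claim using the CODATA values shipped with `scipy.constants` (a sketch only, not part of the thread):

```python
# alpha = e^2 / (4*pi*eps0*hbar*c) is dimensionless, so it comes out the same
# (~1/137) no matter what units e, eps0, hbar and c are expressed in.
from scipy.constants import e, epsilon_0, hbar, c, pi, fine_structure

alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
print(alpha, 1 / alpha)   # ~0.0072973525693 and ~137.036
print(fine_structure)     # the tabulated value, for comparison
```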
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 16, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9671056866645813, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/19011/estimate-gaussian-mixture-density-from-a-set-of-weighted-samples
## Estimate Gaussian (mixture) density from a set of weighted samples

Assume I have a set of weighted samples, where each sample has a corresponding weight between 0 and 1. I'd like to estimate the parameters of a Gaussian mixture distribution that is biased towards the samples with higher weight. In the usual non-weighted case, Gaussian mixture estimation is done via the EM algorithm. Does anyone know how to modify the algorithm to account for the weights? If not, can someone give me a hint on how to incorporate the weights in the initial formula of the maximum-log-likelihood formulation of the problem? Thanks!

## 1 Answer

The usual EM algorithm can be modified for weighted inputs. Following along the Wikipedia presentation, you would use these formulas instead:

$a_i = \frac{\sum_{j=1}^N w_j y_{i,j}}{\sum_{j=1}^{N}w_j}$ and $\mu_{i} = \frac{\sum_{j} w_jy_{i,j}x_{j}}{\sum_{j} w_jy_{i,j}}$

where $w_j \ge 0$ are the weights of the data points.
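A minimal one-dimensional sketch of the weighted EM iteration described in the answer, assuming NumPy. Here `resp[i, j]` plays the role of $y_{i,j}$ (the E-step responsibility of component $i$ for sample $j$); the variance update follows the same weighting pattern even though the answer only spells out the updates for $a_i$ and $\mu_i$, and all names are illustrative:

```python
import numpy as np

def weighted_gmm_em(x, w, k, n_iter=100, seed=0):
    """EM for a 1-D Gaussian mixture where sample j carries weight w[j] >= 0."""
    rng = np.random.default_rng(seed)
    a = np.full(k, 1.0 / k)                    # mixing proportions a_i
    mu = rng.choice(x, size=k, replace=False)  # initial means
    var = np.full(k, np.var(x) + 1e-6)         # initial variances

    for _ in range(n_iter):
        # E-step: responsibilities y_{i,j}, one row per component
        dens = (np.exp(-0.5 * (x - mu[:, None]) ** 2 / var[:, None])
                / np.sqrt(2 * np.pi * var[:, None]))
        resp = a[:, None] * dens
        resp /= resp.sum(axis=0, keepdims=True)

        # M-step: every sum over j picks up the extra factor w_j
        wr = resp * w                          # w_j * y_{i,j}
        a = wr.sum(axis=1) / w.sum()
        mu = (wr * x).sum(axis=1) / wr.sum(axis=1)
        var = (wr * (x - mu[:, None]) ** 2).sum(axis=1) / wr.sum(axis=1)
    return a, mu, var

# Toy usage: two clusters, with the right-hand cluster down-weighted.
x = np.concatenate([np.random.normal(0, 1, 500), np.random.normal(5, 1, 500)])
w = np.concatenate([np.ones(500), 0.2 * np.ones(500)])
print(weighted_gmm_em(x, w, k=2))
```

The only change from the unweighted algorithm is the factor of $w_j$ in every M-step sum; with all weights equal to 1 this reduces to standard EM.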
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9042485356330872, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Solving_Triangles&diff=30992&oldid=23689
# Solving Triangles
## Current revision

**The Shadow Problem**

Field: Geometry
Image Created By: Orion Pictures

In the 1991 film *Shadows and Fog*, the eerie shadow of a larger-than-life figure appears against the wall as the shady figure lurks around the corner. How tall is the ominous character really? Filmmakers use the geometry of shadows and triangles to make this special effect.

The shadow problem is a standard type of problem for teaching trigonometry and the geometry of triangles. In the standard shadow problem, several elements of a triangle will be given. The process by which the rest of the elements are found is referred to as solving a triangle.

# Basic Description

A triangle has six total elements: three sides and three angles. Sides are valued by length, and angles are valued by degree or radian measure. According to postulates for congruent triangles, given three elements, other elements can always be determined as long as at least one side length is given.
Math problems that involve solving triangles, like shadow problems, typically provide certain information about just a few of the elements of a triangle, so that a variety of methods can be used to solve the triangle.

Shadow problems normally have a particular format. Some light source, often the sun, shines down at a given angle of elevation. The angle of elevation is the smallest—always acute—numerical angle measure that can be measured by swinging from the horizon from which the light source shines. Assuming that the horizon is parallel to the surface on which the light is shining, the angle of elevation is always equal to the angle of depression. The angle of depression is the angle at which the light shines down, compared to the angle of elevation which is the angle at which someone or something must look up to see the light source. Knowing the angle of elevation or depression can be helpful because trigonometry can be used to relate angle and side lengths.

In the typical shadow problem, the light shines down on an object or person of a given height. It casts a shadow on the ground below, so that the farthest tip of the shadow makes a direct line with the tallest point of the person or object and the light source. The line that directly connects the tip of the shadow and the tallest point of the object that casts the shadow can be viewed as the hypotenuse of a triangle. The length from the tip of the shadow to the point on the surface where the object stands can be viewed as the first leg, or base, of the triangle, and the height of the object can be viewed as the second leg of the triangle. In the most simple shadow problems, the triangle is a right triangle because the object stands perpendicular to the ground.

In the picture below, the sun casts a shadow on the man. The length of the shadow is the base of the triangle, the height of the man is the height of the triangle, and the length from the tip of the shadow to the top of the man's head is the hypotenuse. The resulting triangle is a right triangle.

In another version of the shadow problem, the light source shines from the same surface on which the object or person stands. In this case the shadow is projected onto some wall or vertical surface, which is typically perpendicular to the first surface. In this situation, the line that connects the light source, the top of the object and the tip of the shadow on the wall is the hypotenuse. The height of the triangle is the length of the shadow on the wall, and the distance from the light source to the base of the wall can be viewed as the other leg of the triangle. The picture below diagrams this type of shadow problem, and this page's main picture is an example of one of these types of shadows.

More difficult shadow problems will often involve a surface that is not level, like a hill. The person standing on the hill does not stand perpendicular to the surface of the ground, so the resulting triangle is not a right triangle. Other shadow problems may fix the light source, like a street lamp, at a given height. This scenario creates a set of two similar triangles.

Ultimately, a shadow problem asks you to solve a triangle given only a few elements of the possible six total. In the case of some shadow problems, like the one that involves two similar triangles, information about one triangle may be given and the question may ask to find elements of another.
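In the basic right-triangle setup just described, the height of the object follows from the shadow length and the angle of elevation through the tangent ratio. A small sketch of that calculation (the numbers here are made up for illustration, not taken from the page):

```python
import math

def height_from_shadow(shadow_length, elevation_deg):
    """Height of an object standing perpendicular to level ground, given the
    length of its shadow and the sun's angle of elevation:
    tan(elevation) = height / shadow length."""
    return shadow_length * math.tan(math.radians(elevation_deg))

# For instance, a 20 ft shadow with the sun 40 degrees above the horizon:
print(round(height_from_shadow(20, 40), 1))  # about 16.8 ft
```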
# A More Mathematical Explanation

Note: understanding of this explanation requires: Trigonometry, Geometry

## Why Shadows?

Shadows are useful in the set-up of triangle problems because of the way light works. A shadow is cast when light cannot shine through a solid surface. Light shines in a linear fashion, that is to say it does not bend. Light waves travel forward in the same direction in which the light was shined.

In addition to the linear fashion in which light shines, light has certain angular properties. When light shines on an object that reflects light, it reflects back at the same angle at which it shined. Say a light shines onto a mirror. The angle between the beam of light and the wall on which the mirror hangs is the angle of approach. The angle from the wall at which the light reflects off of the mirror is the angle of departure. The angle of approach is equal to the angle of departure.

Light behaves the same way a cue ball does when it is bounced off of the wall of a pool table at a certain angle. Just like the way that the cue ball bounces off the wall, light reflects off of the mirror at exactly the same angle at which it shines. The beam of light has the same properties as the cue ball in this case: the angle of departure is the same as the angle of approach. This property will help with certain types of triangle problems, particularly those that involve mirrors.

## More Than Just Shadows

Shadow problems are just one type of problem that involves solving triangles. There are numerous other formats and set ups for unsolved triangle problems. Most of these problems are formatted as word problems; they set up a triangle problem in terms of some real life scenario. There are, however, many problems that simply provide numbers that represent angles and side lengths. In this type of problem, angles are denoted with capital letters, $A, B, C, \ldots$, and the sides are denoted by lower-case letters, $a, b, c, \ldots$, where $a$ is the side opposite the angle $A$.

**Ladder Problems**

One other common problem in solving triangles is the ladder problem. A ladder of a given length is leaned up against a wall that stands perpendicular to the ground. The ladder can be adjusted so that the top of the ladder sits higher or lower on the wall and the angle that the ladder makes with the ground increases or decreases accordingly. Because the ground and the wall are perpendicular to one another, the triangles that need to be solved in ladder problems always have right angles. Since the right angle is always fixed, many ladder problems require the angle between the ground and the ladder, or the angle of elevation, to be somehow associated with a fixed length of a ladder and the height of the ladder on the wall. In other words, ladder problems normally ask for the height of the ladder on the wall or the ground distance between the ladder and the wall, and typically require some trigonometric calculation.

**Mirror Problems**

Mirror problems are a specific type of triangle problem which involves two people or objects that stand looking into the same mirror. Because of the way a mirror works, light reflects back at the same angle at which it shines in, as explained above in Why Shadows?.
In a mirror problem, the angle at which one person looks into the mirror, or the angle of vision, is the same exact angle at which the second person must look into the mirror to make eye contact. Typically, the angle at which one person looks into the mirror is given along with some other piece of information. Once that angle is known, then one angle of the triangle is automatically known since the light reflects back off of the mirror at the same angle, making the angle of the triangle next to the mirror the supplement to twice the angle of vision.

**Sight Problems**

Like shadow problems, sight problems include many different scenarios and several forms of triangles. Most sight problems are set up as word problems. They involve a person standing below or above some other person or object. In most of these problems, a person measures an angle with a tool called an astrolabe or a protractor. In the most standard type of problem, a person uses the astrolabe to measure the angle at which he looks up or down at something. In the example at the right, the bear stands in a tower of a given height and uses the astrolabe to measure the angle at which he looks down at the forest fire. The problem asks to find how far away the forest fire is from the base of the tower given the previous information.

## Ways to Solve Triangles

In all cases, a triangle problem will only give a few elements of a triangle and will ask to find one or more of the lengths or angle measures that is not given. There are numerous formulas, methods, and operations that can help to solve a triangle depending on the information given in the problem. The first step in any triangle problem is drawing a diagram. A picture can help to show which elements of the triangle are given and which elements are adjacent or opposite one another. By knowing where the elements are in relation to one another, we can use the trigonometric functions to relate angle and side lengths. There are many techniques which can be implemented in solving triangles:

• Trigonometry: The basic trigonometric functions relate side lengths to angles. By substituting the appropriate values into the formulas for sine, cosine, or tangent, trigonometry can help to solve for a particular side length or angle measure of a right triangle. This is useful when given a side length and an angle measure.

• Inverse Trigonometry: Provided two side lengths, the inverse trig functions use the ratio of the two lengths and output an angle measure in right triangle trigonometry. Inverse trig is particularly useful in finding an angle measure when two side lengths are given in a right triangle.

• Special Right Triangles: Special right triangles are right triangles whose side lengths produce a particular ratio in trigonometry. A 30°-60°-90° triangle has a hypotenuse that is twice as long as one of its legs. A 45°-45°-90° triangle is called an isosceles right triangle since both of its legs are the same length. These special cases can help to quicken the process of solving triangles.

• Pythagorean Theorem: The Pythagorean Theorem relates the squares of all three side lengths to one another in right triangles. This is useful when a triangle problem provides two side lengths and a third is needed.

$a^{2}+b^{2} = c^{2}$

• Pythagorean Triples: A Pythagorean triple is a set of three positive integers that satisfy the Pythagorean Theorem. The set {3,4,5} is one of the most commonly seen triples.
Given a right triangle with legs of length 3 and 4, for example, the hypotenuse is known to be 5 by Pythagorean triples.

• Law of Cosines: The law of cosines is a generalization of the Pythagorean Theorem which can be used for solving non-right triangles. The law of cosines relates the squares of the side lengths to the cosine of one of the angle measures. This is particularly useful given a SAS configuration, or when three side lengths are known and no angles, for non-right triangles.

$c^{2} = a^{2} + b^{2} - 2ab \cos C$

• Law of Sines: The law of sines is a formula that relates the sine of a given angle to the length of its opposite side. The law of sines is useful in any configuration when an angle measure and the length of its opposite side are given. It is also useful given an ASA configuration, and often the ASS configuration. The ASS configuration is known as the ambiguous case since it does not always provide one definite solution to the triangle.

$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$

When solving a triangle, one side length must always be given in the problem. Given an AAA configuration, there is no way to prove congruency. According to postulates for congruent triangles, the AAA configuration proves similarity in triangles, but there is no way to find the side lengths of a triangle. Knowing just angle measures is not helpful in solving triangles.

## Example Triangle Problems

**Example 1: Using Trigonometry**

A damsel in distress awaits her rescue from the tallest tower of the castle. A brave knight is on the way. He can see the castle in the distance and starts to plan his rescue, but he needs to know the height of the tower so he can plan properly. The knight sits on his horse 500 feet away from the castle. He uses his handy protractor to find the measure of the angle at which he looks up to see the princess in the tower, which is 15°. Sitting on the horse, the knight's eye level is 8 feet above the ground. What is the height of the tower?

We can use tangent to solve this problem. For a more in-depth look at tangent, see Basic Trigonometric Functions.

Use the definition of tangent.

$\tan \theta =\frac{\text{opposite}}{\text{adjacent}}$

Plug in the angle and the known side length.

$\tan 15^\circ =\frac{x \text{ ft}}{500 \text{ ft}}$

Clearing the fraction gives us

$\tan 15^\circ (500) = x$

Simplify for

$(0.26795)(500) = x$

Round to get

$134 \text{ ft} \approx x$

But this is only the height of the triangle and not the height of the tower. We need to add 8 ft to account for the height between the ground and the knight's eye-level which served as the base of the triangle.

$134 \text{ ft} + 8 \text{ ft} = h$

Simplifying gives us

$142 \text{ ft} = h$

The tower is approximately 142 feet tall.

**Example 2: Using Law of Sines**

A man stands 100 feet above the sea on top of a cliff. The captain of a white-sailed ship looks up at a 45° angle to see the man, and the captain of a black-sailed ship looks up at a 30° angle to see him. How far apart are the two ships?

To solve this problem, we can use the law of sines to solve for the bases of the two triangles since we have an AAS configuration with a known right angle. To find the distance between the two ships, we can take the difference in length between the bases of the two triangles. First, we need to find the third angle for both of the triangles. Then we can use the law of sines.

For the black-sailed ship,

$180^\circ - 90^\circ - 30^\circ = 60^\circ$

Let the distance between this ship and the cliff be denoted by $b$.
By the law of sines,

$\frac{100}{\sin 30^\circ} = \frac{b}{\sin 60^\circ}$

Clear the fractions to get

$100(\sin 60^\circ) = b(\sin 30^\circ)$

Compute the sines of the angles to give us

$100\frac{\sqrt{3}}{2} = b\frac{1}{2}$

Simplify for

$100\sqrt{3} = b$

Multiply and round for

$b = 173 \text{ ft}$

For the white-sailed ship,

$180^\circ - 90^\circ - 45^\circ = 45^\circ$

Let the distance between this ship and the cliff be denoted by $a$. By the law of sines,

$\frac{100}{\sin 45^\circ} = \frac{a}{\sin 45^\circ}$

Clear the fractions to get

$100(\sin 45^\circ) = a(\sin 45^\circ)$

Compute the sines of the angles to give us

$100\frac{\sqrt{2}}{2} = a\frac{\sqrt{2}}{2}$

Simplify for

$a = 100 \text{ ft}$

The distance between the two ships, $x$, is the positive difference between the lengths of the bases of the triangle.

$b - a = x$

$173 - 100 = 73 \text{ ft}$

The ships are about 73 feet apart from one another.

**Example 3: Using Multiple Methods**

At the park one afternoon, a tree casts a shadow on the lawn. A man stands at the edge of the shadow and wants to know the angle at which the sun shines down on the tree. If the tree is 51 feet tall and if he stands 68 feet away from the tree, what is the angle of elevation?

There are several ways to solve this problem. The following solution uses a combination of the methods described above. First, we can use the Pythagorean Theorem to find the length of the hypotenuse of the triangle, from the tip of the shadow to the top of the tree.

$a^{2}+b^{2} = c^{2}$

Substitute the lengths of the legs of the triangle for $a, b$:

$51^{2}+68^{2} = c^{2}$

Simplifying gives us

$2601+4624 = c^{2}$

$7225 = c^{2}$

Take the square root of both sides for

$\sqrt{7225} = c$

$85 = c$

Next, we can use the law of cosines to find the measure of the angle of elevation.

$a^{2}=b^{2}+c^{2} - 2bc \cos A$

Plugging in the appropriate values gives us

$51^{2}=68^{2}+85^{2} - 2(85)(68) \cos A$

Computing the squares gives us

$2601= 4624+7225 - 11560 \cos A$

Simplify for

$2601= 11849 - 11560 \cos A$

Subtract $11849$ from both sides for

$-9248= -11560 \cos A$

Simplify to get

$0.8 = \cos A$

Use inverse trigonometry to find the angle of elevation.

$A = 37^\circ$
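The three worked examples above are easy to check numerically. A short sketch (it uses right-triangle trigonometry directly for Example 2 rather than the law of sines, so it checks the answers rather than transcribing the method):

```python
import math

# Example 1: knight 500 ft from the tower, 15-degree angle of elevation, 8 ft eye level
print(round(500 * math.tan(math.radians(15)) + 8))   # 142 (ft)

# Example 2: 100 ft cliff; captains look up at 45 and 30 degrees
a = 100 / math.tan(math.radians(45))   # white-sailed ship, 100 ft from the cliff
b = 100 / math.tan(math.radians(30))   # black-sailed ship, about 173 ft
print(round(b - a))                    # 73 (ft)

# Example 3: 51 ft tree with the man standing 68 ft away
print(round(math.degrees(math.atan(51 / 68))))        # 37 (degrees)
```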
Step 3) Measure the angle of the sun at that time of day. Use a yardstick to make a smaller, more manageable triangle. Because the sun shines down at the same angle as it does on the bell tower, the small triangle and the bell tower's triangle are similar and therefore have the same trigonometric ratios. • Stand the yardstick so it's perpendicular to the ground so that it forms a right angle. The sun will cast a shadow. Mark the end of the shadow with a piece of chalk. • Measure the length of the shadow. This will be considered the length of the base of the triangle. Draw a diagram of the triangle made by connecting the top of the yardstick to the marked tip of the shadow. Use inverse trigonometry to determine the angle of elevation. $\tan X = \frac{36 \text{in}}{27 \text{in}}$ $\arctan \frac{36}{27} = X$ $\arctan \frac{4}{3} = X$ $X = 53^\circ$ Step 4) Now, we can use trigonometry to solve the triangle for the height of the bell tower. $\tan 53^\circ = \frac{h}{111 \text{ft}}$ Clearing the fractions, $111 (\tan 53^\circ) = h$ Plugging in the value of $\tan 53^\circ$ gives us $111 \frac{4}{3} = h$ Simplify for $148 \text{ft} = h$ According to our calculations, the height of the Clothier Bell Tower is 148 feet. ## History: Eratosthenes and the Earth In ancient Greece, mathematician Eratosthenes made a name for himself in the history books by calculating the circumference of the Earth by using shadows. Many other mathematicians had attempted the problem before, but Eratosthenes was the first one to actually have any success. His rate of error was less than 2%. Eratosthenes used shadows to calculate the distance around the Earth. As an astronomer, he determined the time of the summer solstice when the sun would be directly over the town of Syene in Egypt (now Aswan). On this day, with the sun directly above, there were no shadows, but in Alexandria, which is about 500 miles north of Syene, Eratosthenes saw shadows. He calculated based on the length of the shadow that the angle at which the sun hit the Earth was 7 °. He used this calculation, along with his knowledge of geometry, to determine the circumference of the Earth. # Teaching Materials There are currently no teaching materials for this page. Add teaching materials. # References All of the images on this page, unless otherwise stated on their own image page, were made or photographed by the author Richard Scott, Swarthmore College. The information on Eratosthenes can be cited to http://www.math.twsu.edu/history/men/eratosthenes.html. The main image and details about it were found at http://www.imdb.com/title/tt0105378/. Some of the ideas for problems/pictures on this page are based from ideas or concepts in the Interactive Mathematics Program Textbooks by Fendel, Resek, Alper and Fraser. Leave a message on the discussion page by clicking the 'discussion' tab at the top of this image page.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 54, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9168004393577576, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/56770/how-equivalent-are-the-theories-of-reduced-and-groupal-infty-groupoids/56775
## How equivalent are the theories of reduced and groupal $\infty$-groupoids?

I hope that my question is sufficiently trivial that someone will be able to give me a pedantic answer, and not so trivial that no one takes the time to give an answer. My motivation for asking this question is that my category number has been hovering somewhere around $2$, and I'd like to increase it, but $\infty$ is often easier than $3$.

Suppose that I have some familiarity with the following words (meaning, feel free to "remind" me what the correct definitions are):

• Some version (Stasheff associahedra?) of $A_\infty$ monoids.

• Kan simplicial sets as $\infty$-groupoids.

One can then define the following: A groupal $\infty$-groupoid is an $A_\infty$ monoid $G$ in $\infty$-groupoids, such that the map $(g,h) \mapsto (g,gh)$, $G \times G \to G\times G$ is an equivalence of $\infty$-groupoids. (If this isn't quite the right definition, please let me know.)

One could instead talk about $\infty$-groupoids for which the set of $0$-morphisms is a point. I think this is what are called reduced. I'm under the impression that these should be "the same". If I were working not with $\infty$-groupoids but rather at a low categorical level, I would understand how they are the same: a groupoid with one object is "the same" as a group or groupal set (an associative monoid such that the map $(g,h) \mapsto (g,gh)$ is an isomorphism). More precisely, there should be functors $\Omega$ and $\rm B$ between the $(\infty,1)$-categories of reduced $\infty$-groupoids and groupal $\infty$-groupoids, and I would assume that these are an equivalence, in the appropriate sense. I almost understand these functors:

• Given a reduced $\infty$-groupoid $G$, I would try to define $\Omega G = \hom(S,G)$, where $S$ is some $\infty$-groupoid version of the circle, say a particular $S = \mathrm B\mathbb Z$ that I might construct by hand. The "$\hom$" is just the hom of $\infty$-groupoids (reduced $\infty$-groupoids are full in $\infty$-groupoids), and in particular it takes values in $\infty$-groupoids; on the other hand, letting $\vee$ denote the coproduct in reduced groupoids, there is a distinguished map $S \to S\vee S$ which winds around the outside of the figure-eight, and pulling back along this map gives the groupal structure on $\Omega G$. Left to check is that this really is a groupal structure, but that should be easy.

• Given a groupal $\infty$-groupoid $G$, I should try to define $\mathrm B G$ in the same way that I would if I were just starting with a group. But a priori I only see how to define $\mathrm B G$ as a simplicial object in $\infty$-groupoids. So my biggest difficulty here is that I don't know how to collapse what I'm modeling as a "double simplicial set" into a "single simplicial set". Writing $\Delta$ for the category whose objects are finite totally-ordered sets and whose morphisms are non-decreasing maps (so that a simplicial set is a functor $\Delta^{\mathrm{op}} \to \mathrm{Set}$), maybe there is a nice map $\Delta \to \Delta^{\times 2}$ along which I can pull back? If so, then there only remains to check the Kan condition.

• Oh, and I'd need to check that $\Omega,\mathrm B$ are inverse (up to ...) to each other.

After a rambly introduction, my questions are: Is this all correct? What is the $\mathrm B$ construction? What's the precise statement of the equivalence between groupal and reduced $\infty$-groupoids?
I assume that this type of thing is carefully spelled out somewhere in the literature. So maybe my real question is: What is a good reference that will take my hand and walk me through this part of category theory? - There is also the nLab entry "looping and delooping" ncatlab.org/nlab/show/looping meant to survey some of the relevant facts. – Urs Schreiber Aug 9 2011 at 0:20 ## 1 Answer To answer your first set of questions in order (I'm going to use the word "space" for "$\infty$-groupoid"): Yes, this is all correct. You seem to be familiar with how to construct $BG$ as a simplicial space via a bar construction. To turn this into an actual space, you just have to form the geometric realization of this simplicial space. This is just like the geometric realization of a simplicial set: if $X_*$ is a simplicial space, its realization is the quotient of $\bigsqcup X_n \times \Delta^n$ given by gluing together simplices according to the face (and degeneracy) maps. More succinctly, it is the coend of the functor $X:\Delta^{op}\to Spaces$ along the functor $\Delta \to Spaces$ sending $[n]$ to the standard $n$-simplex. More abstractly, it is the (homotopy) colimit of the diagram $X:\Delta^{op}\to Spaces$ in the $(\infty,1)$-category of spaces. If you model spaces as simplicial sets, it can also be described as the pullback of your bisimplicial set along the diagonal functor $\Delta\to\Delta^2$ (to show this, you need to only verify it for single "bisimplices" (ie, representable functors on $\Delta^2$), which amounts to the fact that the product of two representable functors on a category $C$ is the pullback of the associated representable functor on $C^2$ along the diagonal $C\to C^2$). I don't know the best way of precisely formulating and proving this, but one way is to construct an explicit Quillen equivalence between the categories of simplicial groups and the category of reduced simplicial sets (simplicial groups can be used instead of $A_\infty$-groups because $A_\infty$-spaces can be rigidified). For this, the delooping functor is exactly just taking the geometric realization of the bar construction. The looping functor is subtler--one has to be careful to get a functor which actually lands in simplicial groups. Even if you were happy to land in $A_\infty$-groups, just taking the simplicial mapping space $Maps(S^1,X)$ would not work because for non-fibrant objects $X$ this does not have a multiplication. The classical solution to this is called the "Kan loop group", the exact details of which I don't remember but are described on nlab in the generalized setting of a not-necessarily-reduced simplicial set (in which case you get a simplicial groupoid, rather than a simplicial group). In any case, this is just a specific simplicial model for the loopspace of a reduced simplicial set which happens to actually be a group. As for a reference, I learned the Kan loop group and the Quillen equivalence it gives from Goerss-Jardine's book Simplicial Homotopy Theory, but they say essentially nothing about thinking about this from an $\infty$-categorical perspective. - "it is the coend of the functor $X: \Delta^{\rm op} \to \text{Spaces}$ along the functor $\Delta \to \text{Spaces}$ sending $[n]$ to the standard $n$-simplex" is precisely the kind of sentence that I love, because another way to say this is that $X$ is a right $\Delta$-module in $\text{Spaces}$, and there's a canonical left $\Delta$-module in $\text{Spaces}$, and I just need to tensor these modules over $\Delta$. 
In other news, I should have been smart enough to say "I'm looking for a map $\Delta \to \Delta^{\times 2}$", and the diagonal map must be the right one. Thanks! – Theo Johnson-Freyd Feb 28 2011 at 1:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9500296711921692, "perplexity_flag": "head"}
http://physics.stackexchange.com/tags/waves/hot
# Tag Info ## Hot answers tagged waves 5 ### In terms of the Doppler effect, what happens when the source is moving faster than the wave? The first image shows an object traveling at Mach 1 ($v=c$). The second one shows the object traveling at some supersonic velocity ($v>c$). For both the cases, the longitudinal pressure waves pile up. Say the observer is standing in the ground and the object is traveling at $c$. The observer can't hear the pitch of sound because, the waves reach him ... 5 ### De Broglie wavelength, frequency and velocity - interpretation Yes the product $\nu \lambda$ makes sense as a velocity. Defining $E = \hslash \omega$ and $p=\hslash k$ (the Planck constant $h=2\pi \hslash$, where the $2\pi$ is injected into the $\hslash$, since physicists usually prefer to discuss the angular frequency $\omega=2\pi\nu$ and the wave vector $k=2\pi p$ rather than the frequency $\nu$ and the momentum $p$ ... 5 ### Is there a way to create a flickering frequency to be dependent on speed of the person looking at it? You can use a reflector with gaps. Then the light from a car will alternate between reflecting and not reflecting at a rate dependent on their velocity towards the reflector. Please excuse my crude diagram: As the car moves right to left, gaps in the reflector will cause it to appear to flash on an off. 4 ### Standing Waves Energy transfer Waves on strings combine linearly. This means that you can split up a string's motion into two (or more) superimposed waves. The two superimposed waves behave independently, as if the other one was not there. So if you have a standing wave set up on a string, and then you also introduce a travelling pulse, you get something like the following. (The arrows ... 4 ### Boundary conditions on wave equation The second condition is saying that there is no discontinuity in the slope of the rope at the junction. In other words, there is no "kink" in the rope. Imagine if this assumption were to fail in the following way: $$\frac{\partial D_1}{\partial x}(0,t) = -1, \qquad \frac{\partial D_2}{\partial x}(0,t) = 1$$ Then near the origin, the rope would look ... 3 ### The second resonance of string? The first resonant vibrational mode for a string clamped at both ends looks like: You should be able to deduce the wavelength from that diagram. The second mode looks like: Both of the images above are from http://www.clickandlearn.org/Physics/sph3u/Music/Music.htm and that site will spell it out in more detail for you. If your string length is ... 3 ### Why do sound waves travel at the same speed moleculewise? (Same medium) This is a very good question. I'm going to give you a more conceptual answer rather than the quick answer because I find this explanation helps my own students understand this better. First, consider yourself standing in a gymnasium with a thousand people in it. Not a lot of room is there? Naturally, you'd want some personal space, so you push at the people ... 2 ### Lethality of sounds and extreme “loudness” Your question in poorly defined because the concept of sound doesn't extend very nicely to non-atmospheric settings. Are gravity waves sounds? Are the pressure / shock waves in nebula? I don't think there is a unambiguously correct interpretation of sound for your question. Regarding lethal sound here on Earth, the answer depends on what you consider ... 2 ### Lethality of sounds and extreme “loudness” Well, we've classified a whole range of scales for the human hearing (which includes pure tone too). 
For lethal, we don't use how loud it should be, but instead - we say "how intense it should be" so that it can affect our ears. A quote from Wiki... Loudness, a subjective measure, is often confused with objective measures of sound strength such as sound ... 2 ### Will changing amplitude change the frequency? It's completely possible to change the amplitude (the difference between the maximum value of the wave and the minimum) without changing the frequency. Think of this in AC, where you can have signals with different voltage but the same frequency. To illustrate it I'll show you this for a harmonic wave: $$x(t)=A\cos(\omega t+\phi)$$ You can vary the amplitude ... 2 ### How to determine the direction of medium's displacement vectors of a standing wave? Any material between two nodes is displaced in the same direction. So the direction of B and C has to be the same, as well as the direction of A and D, due to symmetry. In addition, the direction of A must be the opposite of B since they are across from a node. Similarly the direction of C and D must be opposite. So the two possible configurations are ... 1 ### How to determine the direction of medium's displacement vectors of a standing wave? A standing wave is a wave that has nodes. The points of the wave go up and down in some places, and remain at zero at others (the nodes). The general form of a standing wave is a sine curve that remains at a fixed position, but its amplitude changes in time between $+A_0$ and $-A_0$. Specifically, there is a time where the wave form is completely flat. ... 1 ### Lethality of sounds and extreme “loudness” Sound as we know it is a disturbance of our atmosphere, transmitted as a wave to our ears - and yes, it can absolutely be lethal - shockwaves can hurt people very badly, as anyone who's been to the scene of a large explosion can attest. We typically measure "loudness" on a log scale of the pressure of the sound wave - I admit I'm unsure of how much pressure ... 1 ### EM Waves Energy Loss Each photon leaves its energy in the molecules of the screen. Destructive interference observed at the line x = 1 mm, for example, means that the probability of finding a photon at x = 1 is close to zero. Instead, the photon has a very high probability of depositing its energy at the constructive interference fringe. 1 ### De Broglie wavelength, frequency and velocity - interpretation Yes, the formulae $p=h/\lambda$ and $E=h\nu$ (the same equations as yours, reverted a bit) are universal – they hold not only for photons but for any particle. Also, these two equations aren't quite independent. Assuming special relativity, they both follow from the de Broglie form of the wave function which is pure phase: \psi(x,t) = C\cdot \exp(2\pi i ... 1 ### Why frequency and tension doesn't change in the two medium? The tension in the two cords is the same because they are tied together. For example, if the tension in the thick cord was higher than in the thin cord, the thick cord would shrink and the thin cord stretch until the tensions were equal again. The frequency has to be the same in both cords because the phase of the wave has to match at the junction between the ... 1 ### Waveguides (in the ocean?) You don't need a sharp discontinuity in the speed of sound to guide the waves. Remember that reflection does not occur right at the interface; rather, the wave always penetrates outside the waveguide to "see" what's going on there. A gradual increase in the speed of sound forces the wave to reflect as well.
Reflection occurs from above due to the ... 1 ### Phase shift of 180 degrees on reflection from optically denser medium The phase change happens because it is how waves behave. An additional link provides lecture notes. I know that you are not satisfied with this answer, but you can compare this with mechanical waves in a string, which gives better intuition by use of Newton's laws.
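The amplitude-versus-frequency answer above is easy to confirm numerically. Here is a minimal sketch of that check (the sampling rate, test frequency, and phase below are my own arbitrary choices, not from any of the answers): scaling the amplitude of $x(t)=A\cos(\omega t+\phi)$ rescales the spectrum but leaves the location of the spectral peak unchanged.

```
# Quick check that rescaling the amplitude of A*cos(2*pi*f0*t + phi)
# does not move the spectral peak; only its height changes.
import numpy as np

fs = 1000.0                        # sampling rate in Hz (arbitrary choice)
t = np.arange(0, 1.0, 1.0 / fs)    # one second of samples
f0, phi = 50.0, 0.3                # test frequency and phase (arbitrary choices)

freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
for A in (1.0, 5.0):               # two different amplitudes
    x = A * np.cos(2 * np.pi * f0 * t + phi)
    spectrum = np.abs(np.fft.rfft(x))
    peak = freqs[np.argmax(spectrum)]
    print(f"A = {A}: peak at {peak:.1f} Hz, peak height {spectrum.max():.1f}")
# Both runs put the peak at 50.0 Hz; only its height scales with A.
```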
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9444589018821716, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/28665/find-a-matrixs-nullspace-from-submatrix-nullspace/28676
## Find a matrix’s nullspace from submatrix nullspace This is probably a basic question, but my linear algebra is weak. Suppose I want to compute the nullspace of a matrix A using some iterative method (e.g. Lanczos). Suppose further that I know a priori the nullspace of the first n columns of the matrix, i.e., Av = [0 0 0 ... 0 b_n ... b_N], where the b_i are nonzero with high probability. Does starting the iterative method with vector v (instead of a random vector) speed the iterative method (e.g. Lanczos) up at all? - ## 1 Answer Yes. You will need to apply $A$ fewer times to get the linear dependence of the rightmost columns. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8433822393417358, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/161542-bivariate-normal.html
# Thread: 1. ## Linear Function of Random Variables I've got this problem here, but I'm a little concerned about my answer, not sure where I'm going wrong. Making handcrafted pottery generally takes two major steps: wheel throwing and firing. The time of wheel throwing and the time of firing are normally distributed random variables with means of 40 min and 60 min and standard deviations of 2 min and 3 min, respectively. Let $X_1$ be the wheel throwing time. $\mu_{X_1} = 40 \text{ min}$, $\sigma_{X_1} = 2 \text{ min}$. Let $X_2$ be the firing time. $\mu_{X_2} = 60 \text{ min}$, $\sigma_{X_2} = 3 \text{ min}$. $Y=X_1 + X_2$ $E(Y)=E(X_1)+E(X_2)=40 \text{ min} + 60 \text{ min} = 100 \text{ min}$ $V(Y)=\sigma_{X_1}^2 + \sigma_{X_2}^2 = 2^2 + 3^2 = 13 \text{ min}^2$ $\therefore P(Y\le 85)=P\left(z< \frac{85-100}{\sqrt{13}}\right)=0.0000159$ This answer seems extremely low, considering 85 is just outside the first standard deviation. Not sure where I messed up here. 2. Originally Posted by Kasper I've got this problem here, but I'm a little concerned about my answer, not sure where I'm going wrong. Let $X_1$ be the wheel throwing time. $\mu_{X_1} = 40 \text{ min}$, $\sigma_{X_1} = 2 \text{ min}$. Let $X_2$ be the firing time. $\mu_{X_2} = 60 \text{ min}$, $\sigma_{X_2} = 3 \text{ min}$. $Y=X_1 + X_2$ $E(Y)=E(X_1)+E(X_2)=40 \text{ min} + 60 \text{ min} = 100 \text{ min}$ $V(Y)=\sigma_{X_1}^2 + \sigma_{X_2}^2 = 2^2 + 3^2 = 13 \text{ min}^2$ $\therefore P(Y\le 85)=P\left(z< \frac{85-100}{\sqrt{13}}\right)=0.0000159$ This answer seems extremely low, considering 85 is just outside the first standard deviation. Not sure where I messed up here. You haven't posted the whole question but, reading between the lines, your answer is correct. 3. Originally Posted by mr fantastic You haven't posted the whole question but, reading between the lines, your answer is correct. Whoops, had it posted, must have accidentally deleted it when I edited it. Thanks!
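The arithmetic in this thread is easy to check by machine. Below is a minimal sketch assuming NumPy and SciPy are available (neither is mentioned in the thread): it reproduces the quoted value of $0.0000159$ from the normal CDF and cross-checks it by simulation, which also makes it clear that 85 lies more than four standard deviations (not one) below the mean of $Y$.

```
# Check: Y = X1 + X2 with X1 ~ N(40, 2^2) and X2 ~ N(60, 3^2) independent,
# so Y ~ N(100, 13) and P(Y <= 85) = Phi((85 - 100)/sqrt(13)).
import numpy as np
from scipy.stats import norm

mu, var = 40 + 60, 2**2 + 3**2          # mean 100, variance 13
z = (85 - mu) / np.sqrt(var)            # about -4.16 standard deviations
print(norm.cdf(z))                      # about 1.59e-05, matching 0.0000159

# Monte Carlo cross-check (the sample size is an arbitrary choice)
rng = np.random.default_rng(0)
y = rng.normal(40, 2, 10_000_000) + rng.normal(60, 3, 10_000_000)
print((y <= 85).mean())                 # should also come out around 1.6e-05
```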
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9517873525619507, "perplexity_flag": "middle"}
http://nrich.maths.org/2910/note
### Tangrams Can you make five differently sized squares from the tangram pieces? ### Three Squares What is the greatest number of squares you can make by overlapping three squares? ### Chain of Changes Arrange the shapes in a line so that you change either colour or shape in the next piece along. Can you find several ways to start with a blue triangle and end with a red circle? # Complete the Square ## Complete the Square Can you complete the squares? Use the pencil on the screen to finish the squares. If you are working away from the computer, you might like to print off this sheet of the squares to complete. This activity comes from BEAM's Maths of the Month. ### Why do this problem? This problem is excellent for helping to reinforce the properties of squares and in particular for highlighting the fact that a square is a square no matter what orientation it is in. ### Possible approach You could introduce this activity by showing the children a square piece of paper. Put the square on the board so that its sides are parallel to the sides of the board and ask the class what shape it is. How do they know? Then, invite one pupil to come up and pin the square on the board in a different way. Is the shape still a square? You might find that an interesting discussion ensues! It is common for children to call a tilted square a "diamond", but the earlier we can encourage them to avoid this, the better. Once pupils have tried the problem (whether on paper or using the interactivity), they could show each other their completed squares and discuss the drawings before sharing them with you and/or the whole class. Playing the game Square It would be a good way to end this lesson. ### Key questions What do you know about squares? What do you need to add to this to make it a square? ### Possible extension You could give some learners a grid (for example $3$ by $3$ small squares) and challenge them to draw all possible squares on it, if all corners have to be on the grid. ### Possible support Children may need rulers to convince themselves that the sides of the shape they have drawn are (or are not!) the same. Turning the page also helps! The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391412138938904, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/170692/the-minimal-face-of-a-polytope-containing-a-set
# The minimal face of a polytope containing a set Encountered the following statement while reading a paper where it was stated without proof - am wondering why its true. Suppose $P$ is a polytope, $M$ is a convex subset of $P$. Define $f(M)$ to be the minimal face of $P$ which contains $M$. Then there is a point in $M$ which is in the relative interior of $f(M)$. - You should consider first the case where $M =\{x\}$ is a single point. Then $x$ must lie in the relative interior of $f(M)$ since otherwise it would lie on a proper face of $f(M)$, contradicting the definition of a minimal face (a definition which you might want to state explicitly). Now in considering more general $M$, you should realize that the convexity assumption is essential (consider $P$ a tetrahedron, and $M$ consisting of two incident edges). – yasmar Jul 14 '12 at 12:28 Thanks - yes, I see, $M$ needs to be convex indeed. – atricks Jul 14 '12 at 14:53 Could you add details? I don't see why $x$ has to lie in the relative interior of $f(M)$ even if $M=\{x\}$ is a single point. – atricks Jul 14 '12 at 14:53 @atricks: Because the exterior of a face is formed by smaller faces, if $x$ were to lie in the exterior of $f(M)$, it would also lie in a smaller face, so $f(M)$ wouldn't be minimal. – joriki Jul 14 '12 at 20:31
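For completeness, here is one way to assemble the comments above into a full argument; this is a standard convexity write-up of my own, not taken from the paper. Assume $M$ is nonempty, so that its relative interior contains some point $x$ (the relative interior of a nonempty convex set is nonempty). Suppose $x$ were not in the relative interior of $F := f(M)$. Then $x$ lies on some proper face $G$ of $F$, and since every proper face of a polytope is exposed, there is a linear functional $\ell$ with $\ell \le c$ on $F$ and $G = F \cap \{\ell = c\}$. In particular $\ell \le c$ on $M$ with equality at $x$; a linear functional that attains its maximum over a convex set at a relative interior point is constant on that set, so $\ell \equiv c$ on $M$ and hence $M \subseteq G$. But a face of a face of $P$ is again a face of $P$, so $G$ would be a face of $P$ containing $M$ that is strictly smaller than $f(M)$, contradicting minimality. Therefore $x \in M$ lies in the relative interior of $f(M)$.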
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9585622549057007, "perplexity_flag": "head"}
http://jeremykun.com/2011/07/20/serial-killers/
# Hunting Serial Killers Posted on July 20, 2011 by ## “Tonight’s the Night” A large volume of research goes into the psychological and behavioral analysis of criminals. In particular, serial criminals hold a special place in the imagination and nightmares of the general public (at least, American public). Those criminals with the opportunity to become serial criminals are logical, cool-tempered, methodical, and, of course, dangerous. They walk among us in crowded city streets, or drive slowly down an avenue looking for their next victim. They are sometimes neurotic sociopaths, and other times amicable, charming models of society and business. But most of all, they know their craft well. They work slowly enough to not make mistakes, but fast enough to get the job done and feel good about it. Their actions literally change lives. In other words, they would be good programmers. If only they all hadn’t given up trying to learn C++! In all seriousness, a serial killer’s rigid methodology sometimes admits itself nicely to mathematical analysis. For an ideal serial criminal (ideal in being analyzable), we have the following two axioms of criminal behavior: 1. A serial criminal will not commit crimes too close to his base of operation. 2. A serial criminal will not travel farther than necessary to find victims. The first axiom is reasonable because a good serial criminal does not want to arouse suspicion from his neighbors. The second axiom roughly describes an effort/reward ratio that keeps serial offenders from travelling too far away from their homes. These axioms have a large amount of criminological research behind them. While there is little unifying evidence (the real world is far too messy), there are many bits and pieces supporting these claims. For example, the frequency of burglaries peak about a block from the offender’s residence, while almost none occur closer than a block (Turner, 1969, “Delinquency and distance”). Further, many serial rapists commit subsequent rapes (or rather, abductions preceding rape) within a half mile from the previous (LeBeau, 1987, “Patterns of stranger and serial rape offending”). There are tons of examples of these axioms in action in criminology literature. On the other hand, there are many types of methodical criminals who do not agree with these axioms. Some killers murder while traveling the country, while others pick victims with such specific characteristics that they must hunt in a single location. So we take the following models with a grain of salt, in that they only apply to a certain class of criminal behavior. With these ideas in mind, if we knew a criminal’s base of operation, we could construct a mathematical model of his “buffer zone,” inside of which he does not commit crime. With high probability, most of his crimes will lie just outside the buffer zone. This in itself is not useful in the grand scheme of crime-fighting. If we know a criminal’s residence, we need not look any further. The key to this model’s usefulness is working in reverse: we want to extrapolate a criminal’s residence from the locations of his crimes. Then, after witnessing a number of crimes we believe to be committed by the same person, we may optimize a search for the offender’s residence. We will use the geographic locations of a criminal’s activity to accurately profile him, hence the name, geoprofiling. ## Murder, She Coded Historically, the first geoprofiling model was crafted by a criminologist named Dr. Kim Rossmo. 
Initially, he overlaid the crime locations on a sufficiently fine $n \times m$ grid of cells. Then he used his model to calculate the probability of the criminal’s residence lying within each cell. Rossmo’s formula is displayed below, and explained subsequently. $\displaystyle P(x) = \sum \limits_{\textup{crime locations } c} \frac{\varphi}{d(x,c)^f} + \frac{(1-\varphi)B^{g-f}}{(2B-d(x,c))^g}$, where $\varphi = 1$ if $d(x,c) > B$, and $\varphi = 0$ otherwise. Here, $x$ is an arbitrary cell in the grid, $d(x,c)$ is the distance from a cell to a crime location, with some fixed metric $d$. The variable $\varphi$ determines which of the two summands to nullify based on whether the cell in question is in the buffer zone. $B$ is the radius of the buffer zone, and $f,g$ are formal empirically tuned parameters. Variations in $f$ and $g$ change the steepness of the decay curve before and after the buffer radius. We admit to having no idea why they need to be related, and cannot find a good explanation in Rossmo’s novel of a dissertation. Instead, Rossmo claims both parameters should be equal. For the purposes of this blog we find their exact values irrelevant, and put them somewhere between a half and two thirds. This model reflects the inherent symmetry in the problem. If we may say that an offender commits a crime outside a buffer of some radius $B$ surrounding his residence, then we may also say that the residence is likely outside a buffer of the same radius surrounding each crime! For a fixed location, we may compute the probability of the offender’s residence being there with respect to each individual crime, and just sum them up. This equation, while complete, has a better description for programmers, which is decidedly easier to chew in small bites:

```
Let d = d(x,c)
if d > B:
    P(x) += 1/d^f
else:
    P(x) += B^(g-f)/(2B-d)^g
```

Then we may simply loop this routine over all such $c$ for a fixed $x$, and get our probability. Here we see the ideas clearly, that outside the buffer zone of the crime the probability of residence decreases like a power law, and within the buffer zone it increases approaching the buffer. Now, note that these “probabilities” are not, strictly speaking, probabilities, because they are not normalized in the unit interval $[0,1]$. We may normalize them if we wish, but all we really care about are the relative cell values to guide our search for the perpetrator. So we abuse the term “high probability” to mean “relatively high value.” Finally, the distance metric we actually use in the model is the so-called taxicab metric. Since this model is supposed to be relevant to urban serial criminals (indeed, where the majority of cases occur), the taxicab metric more accurately describes a person’s mental model of distance within a city, because it accounts for roadways. Note that in order for this to work as desired, the map used must be rotated so that its streets lie parallel to the $x,y$ axes. We will assume for the rest of this post that the maps are rotated appropriately, as this is a problem with implementation and not the model itself or our prototype. Rossmo’s model is very easy to implement in any language, but probably easiest to view and animate in Mathematica. As usual, the entire code for the examples presented here is available on this blog’s Google Code page.
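Since the post stresses that the model is easy to implement in any language, here is a minimal Python sketch of the same update rule; the function and variable names are my own, and the default parameters simply mirror the buffer of 14 and exponents 1/3 and 2/3 used in the Mathematica test that follows.

```
# A minimal sketch of the decay rule above (names are my own choices).
# Outside the buffer of radius B the score decays like a power law; inside
# the buffer it rises toward the buffer edge, exactly as in the pseudocode.
def taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def rossmo_decay(cell, crime, B, f, g):
    d = taxicab(cell, crime)
    if d > B:
        return 1.0 / d**f
    return B**(g - f) / (2.0 * B - d)**g

def rossmo_score(cell, crime_sites, B=14, f=1/3, g=2/3):
    # Sum the contribution of every crime site, as in the formula for P(x).
    return sum(rossmo_decay(cell, site, B, f, g) for site in crime_sites)
```

Mapping rossmo_score over every cell of a grid gives the same kind of array that the Mathematica code below builds with Array.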
The decay function is just a direct translation of the pseudocode: ```rossmoDecay[p1_, p2_, bufferLength_, f_, g_, distance_] := With[{d = distance[p1, p2]}, If[d > bufferLength, 1/(d^f), (bufferLength^(g - f))/(2 bufferLength - d)^g]];``` We then construct a function which computes the decay from a fixed cell for each crime site: ```makeRossmoFunction[sites_, buffer_, f_, g_] := Function[{x, y}, Apply[Plus, Map[rossmoDecay[#,{x,y},buffer,f,g,ManhattanDistance] &, sites]]];``` Now we may construct a “Rossmo function,” (initializing the parameters of the model), and map the resulting function over each cell in our grid: `Array[makeRossmoFunction[sites, 14, 1/3, 2/3], {60, 50}];` Here the Array function accepts a function $f$, and a specification of the dimensions of the array. Then each array index tuple is fed to $f$, and the resulting number is stored in the $i,j$ entry of the array. Here $f: \mathbb{Z}_{60} \times \mathbb{Z}_{50} \to \mathbb{R}^+$. We use as a test the following three fake crime sites: `sites = {{20, 25}, {47, 10}, {55, 40}};` Upon plotting the resulting array, we have the following pretty picture: A test of Rossmo's geographic profiling model on three points. Here, the crime locations are at the centers of each of the diamonds, and cells with more reddish colors have higher values. Specifically, the “hot spot” for the criminal’s residence is in the darkest red spot in the bottom center of the image. As usual, in order to better visualize the varying parameters, we have the following two animations: A variation of the "f" parameter from 0.1 and 1.25 in steps of 0.05. "g" is fixed at 2/3. Variation in the "g" parameter from 0.1 to 1.25 in steps of 0.05. "f" is fixed at 1/2. Variation in the $B$ parameter simply increases or decreases the size of the buffer zone. In both animations above we have it fixed at 14 units. Despite the pretty pictures, a mathematical model is nothing without empirical evidence to support it. Now, we turn to an analysis of this model on real cases. ## “Excellent!” I cried. “Elementary mathematics,” said he. Richard Chase The first serial killer we investigate is Richard Chase, also known as the Vampire of Sacramento. One of the creepiest murderers in recent history, Richard Chase believed he had to drink the blood of his victims in order to live. In the month of January 1978, Chase killed five people, dumping their mutilated bodies in locations near his home. Before we continue with the geographic locations of this particular case, we need to determine which locations are admissible. For instance, we could analyze abduction sites, body drop sites, locations of weapons caches or even where the perpetrator’s car was kept. Unfortunately, many of these locations are not known during an investigation. At best only approximate abduction sites can be used, and stash locations are usually uncovered after an offender is caught. For the sake of the Chase case, and subsequent cases, we will stick to the most objective data points: the body drop sites. We found this particular data in Rossmo’s dissertation, page 272 of the pdf document. Overlaid on a 30 by 30 grid, they are: ```richardChaseSites = {{3, 17}, {15, 3}, {19, 27}, {21, 22}, {25, 18}}; richardChaseResidence = {19,17};``` Then, computing the respective maps, we have the following probability map: The Rossmo probability map for the Richard Chase body drop sites. 
Here B = 5, f = 1/2, g = 1 If we overlay the location of Chase’s residence in purple, we see that it is very close to the hottest cell, and well-within the hot zone. In addition, we compare this with another kind of geoprofile: the center of gravity of the five sites. We color the center of gravity in black, and see that it is farther from Chase’s residence than the hot zone. In addition, we make the crime sites easy to see by coloring them green. Additional data points: center of gravity in black, Chase's residence in purple, and crime sites in green. Albert DeSalvo This is a great result for the model! Let us see how it fares on another case: Albert DeSalvo, the Boston strangler. With a total of 13 murders and being suspected of over 300 sexual assault charges, DeSalvo is a prime specimen for analysis. DeSalvo entered his victim’s homes with a repertoire of lies, including being a maintenance worker, the building plumber, or a motorist with a broken-down car. He then proceeded to tie his victims to a bed, sexually assault them, and then strangle them with articles of clothing. Sometimes he tied a bow to the cords he strangled his victims with. We again use the body drop sites, which in this case are equivalent to encounter sites. They are: ```deSalvoSites = {{10, 48}, {13, 8}, {15, 11}, {17, 8}, {18, 7}, {18, 9}, {19, 4}, {19, 8}, {20, 9}, {20, 10}, {20, 11}, {29, 23}, {33, 28}}; deSalvoResidence = {19,18};``` Running Rossmo’s model again, including the same extra coloring as for the Chase murders, we get the following picture: The Rossmo probability map for Albert DeSalvo's murders. Here B=10, f= 1/2, g = 1. Again, we win. DeSalvo’s residence falls right in the darker of our two main hot zones. With this information, the authorities would certainly apprehend him in a jiffy. On the other hand, the large frequency of murders in the left-hand side pulls the center of gravity too close. In this way we see that the center of gravity is not a good “measure of center” for murder cases. Indeed, it violates the buffer principle, which holds strong in these two cases. Peter Sutcliffe We list his crime locations below. Note that these include body drop sites and the attack sites for non-murders, which were later reported to the police. ```sutcliffeSites = {{5, 1}, {8, 7}, {50, 99}, {53, 68}, {56, 72}, {59, 59}, {62, 57}, {63, 85}, {63, 87}, {64, 83}, {69, 82}, {73, 88}, {80, 88}, {81, 89}, {83, 88}, {83, 87}, {85, 85}, {85, 83}, {90, 90}}; sutcliffeResidences = {{60, 88}, {58, 81}};``` Notice that over the course of his five-year spree, he lived in two residences. One of these he moved to with his wife of three years (he started murdering after marrying his wife). It is unclear whether this changed his choice of body drop locations. Unfortunately, our attempts to pinpoint Sutcliffe’s residence with Rossmo’s model fail miserably. With one static image, guessing at the buffer radius, we have the following probability map: Failure in the form of a probability map. As we see, both the center of gravity and the hot zones are far from either of Sutcliffe’s residences. Indeed, even with a varying buffer radius, we are still led to search in unfruitful locations: An animation of the buffer radius parameter B varying between 1 and 50. Clearly no buffer will give us the desired probability map. Poop. Even with all of the axioms, all of the parameters, all of the gosh-darn work we went through! Our model is useless here. This raises the obvious question, exactly how applicable is Rossmo’s model? 
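To make the case studies concrete, here is a usage sketch built on the Python functions above (my own code, not the post's): it scores a 30 by 30 grid for the Chase data with the parameters quoted for that figure (B = 5, f = 1/2, g = 1) and lists the hottest cells, which the post reports land very close to the residence at (19, 17).

```
# Score every cell of the 30 x 30 grid used for the Chase case and list the
# highest-scoring cells next to the known residence.
richard_chase_sites = [(3, 17), (15, 3), (19, 27), (21, 22), (25, 18)]
residence = (19, 17)

scores = {(x, y): rossmo_score((x, y), richard_chase_sites, B=5, f=1/2, g=1)
          for x in range(30) for y in range(30)}

hottest = sorted(scores, key=scores.get, reverse=True)[:10]
print("ten hottest cells:", hottest)
print("score at the known residence:", scores[residence])
```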
## The Crippling Issues The real world is admittedly more complex than we make it out to be. Whether the criminal is misclassified, bad data is attributed, or the killer has some special, perhaps deranged motivation, there are far too many opportunities for confounding variables to tamper with our results. Rossmo’s model even requires that the killer live in a more or less central urban location, for if he must travel in a specific direction to find victims, he may necessarily produce a skewed distribution of crime locations. Indeed, we have to have some metric by which to judge the accuracy of Rossmo’s model. While one might propose the distance between the offender’s residence and the highest-probability area produced on the map, there are many others. In particular, since the point of geographic profiling is to prioritize the search for a criminal’s residence, the best metric is likely the area searched before finding the residence. We call this metric search area. In other words, search area is the amount of area on the map which has probability greater or equal to the cell containing the actual residence. Indeed Rossmo touts this metric as the only useful metric. However, according to his own tests, the amount of area searched on the Sutcliffe case would be over a hundred square miles! In addition, Rossmo neither provides an idea of what amount of area is feasibly searchable, nor any global statistics on what percentage of cases in his study resulted in an area that was feasibly searchable. We postulate our own analysis here. In a count of Rossmo’s data tables, out of the fifteen individual cases he studied, the average search area was 395 square kilometers, or 152.5 square miles, while the median was about 87 square kilometers, or 33.6 square miles. The maximum is 1829 square kilometers, while the min is 0.2 square kilometers. The complete table is contained in the Mathematica notebook on this blog’s Google Code page. From the 1991 census data for Vancouver, we see that a low density neighborhood has an average population of 2,380 individuals per square kilometer, or about 6,000 per square mile. Applying this to our numbers from the previous paragraph, we have a mean of 940,000 people investigated before the criminal is found, a median of 200,000, a max of four million (!), and a min of 309. Even basing our measurements on the median values, this method appears to be unfeasible as a sole means of search prioritization. Of course, real investigations go on a lot more data, including hunches, to focus search. At best this could be a useful tool for police, but on the median, we believe it would be marginally helpful to authorities prioritize their search efforts. For now, at least, good ol’ experience will likely prevail in hunting serial killers. In addition, other researchers have tested human intuition at doing the same geographic profiling analysis, and they found that with a small bit of training (certainly no more than reading this blog post), humans showed no significant difference from computers at computing this model. (English, 2008) Of course, for the average human the “computing” process (via pencil and paper) was speedy and more variable, but for experienced professionals the margin of error would likely disappear. As interesting as this model may be, it seems the average case is more like Sutcliffe than Chase; Rossmo’s model is effectively a mathematical curiosity. It appears, for now, that our friend Dexter Morgan is safe from the threat of discovery by computer search. 
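The search area metric described above is a one-liner once a score grid is available. Here is a minimal sketch reusing the scores dictionary and residence from the Chase usage example; the cell_area factor is an assumption about the map scale, since the post's grids are in abstract cell units.

```
# "Search area": the number of cells scoring at least as high as the cell that
# actually contains the residence, optionally scaled by the area of one cell.
def search_area(scores, residence, cell_area=1.0):
    threshold = scores[residence]
    cells_to_search = sum(1 for value in scores.values() if value >= threshold)
    return cells_to_search * cell_area

print(search_area(scores, residence))   # cells examined before reaching the residence
```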
## Alternative Models The idea of a decay function is not limited to Rossmo’s particular equation. Indeed, one might naturally first expect the decay function to be logarithmic, normal, or even exponential. Indeed, such models do exist, and they are all deemed to be roughly equivalent in accuracy for appropriately tuned parameters. (English, 2008) Furthermore, we include an implementation of a normal growth/decay function in the Mathematica notebook on this blog’s Google Code page. After reading that all of these models are roughly equivalent, we did not conduct an explicit analysis of the normal model. We leave that as an exercise to the reader, in order to become familiar with the code provided. In addition, one could augment this model with other kinds of data. If the serial offender targets a specific demographic, then this model could be combined with demographic data to predict the sites of future attacks. It could be (and in some cases has been) weighted according to major roadways and freeways, which reduce a criminals mental model of distance to a hunting ground. In other words, we could use the Google Maps “shortest trip” metric between any two points as its distance metric. To our knowledge, this has not been implemented with any established mapping software. We imagine that such an implementation would be slow; but then again, a distributed network of computers computing the values for each cell in parallel would be quick. ## Other Uses for the Model In addition to profiling serial murders, we have read of other uses for this sort of geographic profiling model. First, there is an established paper on the use of geographic profiling to describe the hunting patterns of great white sharks. Briefly, we recognize that such a model would switch from a taxicab metric to a standard Euclidean metric, since the movement space of the ocean is locally homeomorphic to three-dimensional Euclidean space. Indeed, we might also require a three-dimensional probability map for shark predation, since sharks may swim up or down to find prey. Furthermore, shark swimming patterns are likely not uniformly random in any direction, so this model is weighted to consider that. Finally, we haphazardly propose additional uses for this model: pinpointing the location of stationary artillery, locating terrorist base camps, finding the source of disease outbreaks, and profiling other minor serial-type criminals, like graffiti vandalists. ## Data! Data! My Kingdom for Some Data! As recent as 2000, one researcher noted that the best source of geographic criminal data was newspaper archives. In the age of information, and given the recent popularity of geographic profiling research, this is a sad state of being. As far as we know, there are no publicly available indexes of geographic crime location data. As of the writing of this post, an inquiry to a group of machine learning specialists has produced no results. There doesn’t seem to be such a forum for criminology experts. If any readers have information to crime series data that is publicly available on the internet (likely used in some professor’s research and posted on their website), please don’t hesitate to leave a comment with a link. It would be greatly appreciated. ### Like this: This entry was posted in Models, Probability Theory and tagged crime, geographic profiling, history, mathematica, mathematics, programming, serial killers by j2kun. Bookmark the permalink. ## 6 thoughts on “Hunting Serial Killers” 1. 
on July 27, 2011 at 1:08 am said: Fascinating post! Thank you for explaining it so well and for giving me more to read up on! • on July 28, 2011 at 1:46 pm said: Any time! I’m proud to have attracted someone from a non-mathematical community to my humble blog 2. on January 27, 2012 at 6:25 am said: What a great post! I had fun reading it, thanks for that. 3. fish on April 4, 2012 at 1:25 pm said: Very interesting, and I was looking for this! Except I don’t know how to use it, is there any way I can find out how to apply this? • on April 4, 2012 at 3:31 pm said: So say that you have a serial case you want to solve, and it’s in some region like a city or county. Take a map and mark the locations of interest on it (murder locations, body drop sites, weapons cache locations, etc.), and then overlay a grid of squares on it. Label the bottom left square (0,0), the square above (0,1), the square to the right (1,0), etc., as with the usual Cartesian plane. Then whichever squares the locations of interest fall into are the coordinates of that location. My program takes as input a list of such locations, and outputs the pictures you see in the article. Is that what you were asking? 4. Pingback: Matemáticas y programación | CyberHades Cancel
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9373749494552612, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/98070-eigenvectors.html
# Thread: 1. ## Eigenvectors Currently in the DE class I'm taking we're learning how to solve homogeneous linear systems. One of the steps requires you to come up with the associated eigenvector of a system. Unfortunately nowhere in our book does it discuss anything about eigenvectors, and all our professor told us was that we'd find it easy if we had already taken linear algebra, which I have not. I tried to find a good explanation of how to find eigenvectors of 2x2 and 3x3 matrices but didn't find much. For example, how would I find the eigenvector of: $\begin{bmatrix} 0 & -7 & 0\\ 5 & 8 & 4\\ 0 & 5 & 0 \end{bmatrix} \begin{bmatrix} k_{1}\\ k_{2}\\ k_{3} \end{bmatrix}\Rightarrow \begin{matrix} -7k_{2}=0\\ 5k_{1}+8k_{2}+4k_{3}=0\\ 5k_{2}=0 \end{matrix}$ 2. Very briefly: an eigenvector is a vector which you multiply the matrix by, and you get a vector that's parallel to the original one. Hence: $\mathbf{Ak} = \lambda\mathbf{k}$ That is: $(\mathbf{A} - \lambda\mathbf{I}) \mathbf{k} = 0$ So what you want to do is subtract $\lambda\mathbf{I}$ from $\mathbf{A}$ and find $\lambda$ by solving the determinant. It will be a cubic. Then use that to find what $\mathbf{k}$ has to be to make the equation zero. 3. Thanks Matt for the quick response. I've already done most of what you've told me to do... I guess I'm just a little confused how exactly to figure out what K has to be for the equation to equal zero. I've attached a PDF of what I have so far. Is there some systematic way of finding K? Thanks. Attached Files • Problem.pdf (93.6 KB, 16 views) 4. Let $k_{3} = \alpha$, then $k_{1} = - \frac{4}{5} \alpha$, so $\begin{bmatrix}k_{1}\\k_{2}\\k_{3}\end{bmatrix} = \begin{bmatrix}-\frac{4}{5}\alpha\\0\\\alpha\end{bmatrix} = \alpha \begin{bmatrix}-\frac{4}{5}\\0\\1\end{bmatrix}$ If you let $\alpha = 5$, for example, you see that $\begin{bmatrix}-4\\0\\5\end{bmatrix}$ is an eigenvector associated with the eigenvalue $\lambda =2$. There are infinitely many to choose from. 5. You have simultaneous equations in 3 variables, one of which is redundant. The others will give you one value of k in terms of the other. So make one of them an "arbitrary variable" which can be used as a scaling factor. Any vector which is a multiple of this k will be an eigenvector with the given eigenvalue. Hope this helps. Your working in the pdf file is exactly what I would have done. Then Random Variable finishes it off. 6. Thanks Matt and Random Variable for your help... this now makes sense. 7. Just a quick reply, seeing as you're new: feel free to press the "thanks" button against any reply that was particularly helpful.
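For anyone who wants to check the arithmetic by machine, here is a small sketch using SymPy (my choice of tool; the thread works entirely by hand). It computes the nullspace of the coefficient matrix of the system in the first post and confirms the direction $\left(-\tfrac{4}{5}, 0, 1\right)$ found above.

```
# Machine check of the hand computation: the nullspace of the coefficient
# matrix of  -7*k2 = 0,  5*k1 + 8*k2 + 4*k3 = 0,  5*k2 = 0.
from sympy import Matrix

M = Matrix([[0, -7, 0],
            [5,  8, 4],
            [0,  5, 0]])

print(M.nullspace())           # [Matrix([[-4/5], [0], [1]])], the direction found above
print(M * Matrix([-4, 0, 5]))  # the zero vector, so (-4, 0, 5) is a valid choice
```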
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9637613296508789, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Ellipse
# Ellipse "Elliptical" redirects here. For the exercise machine, see Elliptical trainer. Not to be confused with ellipsis. An ellipse obtained as the intersection of a cone with an inclined plane. The rings of Saturn are circular, but when seen partially edge on, as in this image, they appear to be ellipses. In addition, the planet itself is an ellipsoid, flatter at the poles than the equator. Picture by ESO In mathematics, an ellipse (from Greek ἔλλειψις elleipsis, a "falling short") is a plane curve that results from the intersection of a cone by a plane in a way that produces a closed curve. Circles are special cases of ellipses, obtained when the cutting plane is orthogonal to the cone's axis. An ellipse is also the locus of all points of the plane whose distances to two fixed points add to the same constant. The name ἔλλειψις was given by Apollonius of Perga in his Conics, emphasizing the connection of the curve with "application of areas". Ellipses are closed curves and are the bounded case of the conic sections, the curves that result from the intersection of a circular cone and a plane that does not pass through its apex; the other two (open and unbounded) cases are parabolas and hyperbolas. Ellipses arise from the intersection of a right circular cylinder with a plane that is not parallel to the cylinder's main axis of symmetry. Ellipses also arise as images of a circle under parallel projection and the bounded cases of perspective projection, which are simply intersections of the projective cone with the plane of projection. It is also the simplest Lissajous figure, formed when the horizontal and vertical motions are sinusoids with the same frequency. ## Elements of an ellipse The ellipse and some of its mathematical properties. An ellipse is a smooth closed curve which is symmetric about its horizontal and vertical axes. The distance between antipodal points on the ellipse, or pairs of points whose midpoint is at the center of the ellipse, is maximum along the major axis or transverse diameter, and a minimum along the perpendicular minor axis or conjugate diameter.[1] The semi-major axis (denoted by a in the figure) and the semi-minor axis (denoted by b in the figure) are one half of the major and minor axes, respectively. These are sometimes called (especially in technical fields) the major and minor semi-axes,[2][3] the major and minor semiaxes,[4][5] or major radius and minor radius.[6][7][8][9] The four points where these axes cross the ellipse are the vertices, points where its curvature is minimized or maximized.[10] The foci of an ellipse are two special points F1 and F2 on the ellipse's major axis and are equidistant from the center point. The sum of the distances from any point P on the ellipse to those two foci is constant and equal to the major axis ( PF1 + PF2 = 2a ). Each of these two points is called a focus of the ellipse. Refer to the lower Directrix section of this article for a second equivalent construction of an ellipse. The eccentricity of an ellipse, usually denoted by ε or e, is the ratio of the distance between the two foci, to the length of the major axis or e = 2f/2a = f/a. For an ellipse the eccentricity is between 0 and 1 (0<e<1). When the eccentricity is 0 the foci coincide with the center point and the figure is a circle. As the eccentricity tends toward 1, the ellipse gets a more elongated shape. 
It tends towards a line segment (see below) if the two foci remain a finite distance apart and a parabola if one focus is kept fixed as the other is allowed to move arbitrarily far away. The distance ae from a focal point to the centre is called the linear eccentricity of the ellipse (f = ae). ## Drawing ellipses ### Pins-and-string method Drawing an ellipse with two pins, a loop, and a pen. The characterization of an ellipse as the locus of points so that sum of the distances to the foci is constant leads to a method of drawing one using two drawing pins, a length of string, and a pencil.[11] In this method, pins are pushed into the paper at two points which will become the ellipse's foci. A string tied at each end to the two pins and the tip of a pen is used to pull the loop taut so as to form a triangle. The tip of the pen will then trace an ellipse if it is moved while keeping the string taut. Using two pegs and a rope, this procedure is traditionally used by gardeners to outline an elliptical flower bed; thus it is called the gardener's ellipse.[12] ### Other methods #### Trammel method Trammel of Archimedes (ellipsograph) animation An ellipse can also be drawn using a ruler, a set square, and a pencil: Draw two perpendicular lines M,N on the paper; these will be the major (M) and minor (N) axes of the ellipse. Mark three points A, B, C on the ruler. A->C being the length of the semi-major axis and B->C the length of the semi-minor axis. With one hand, move the ruler on the paper, turning and sliding it so as to keep point A always on line N, and B on line M. With the other hand, keep the pencil's tip on the paper, following point C of the ruler. The tip will trace out an ellipse. The trammel of Archimedes or ellipsograph is a mechanical device that implements this principle. The ruler is replaced by a rod with a pencil holder (point C) at one end, and two adjustable side pins (points A and B) that slide into two perpendicular slots cut into a metal plate.[13] The mechanism can be used with a router to cut ellipses from board material. The mechanism is also used in a toy called the "nothing grinder". #### Parallelogram method Ellipse construction applying the parallelogram method In the parallelogram method, an ellipse is constructed point by point using equally spaced points on two horizontal lines and equally spaced points on two vertical lines. Similar methods exist for the parabola and hyperbola. ### Approximations to ellipses An ellipse of low eccentricity can be represented reasonably accurately by a circle with its centre offset. To draw the orbit with a pair of compasses the centre of the circle should be offset from the focus by an amount equal to the eccentricity multiplied by the radius. ## Mathematical definitions and properties ### In Euclidean geometry #### Definition In Euclidean geometry, the ellipse is usually defined as the bounded case of a conic section, or as the set of points such that the sum of the distances to two fixed points (the foci) is constant. The ellipse can also be defined as the set of points such that the distance from any point in that set to a given point in the plane (a focus) is a constant positive fraction less than 1 (the eccentricity) of the perpendicular distance of the point in the set to a given line (called the directrix). 
Yet another equivalent definition of the ellipse is that it is the set of points that are equidistant from one point in the plane (a focus) and a particular circle, the directrix circle (whose center is the other focus). The equivalence of these definitions can be proved using the Dandelin spheres. #### Equations The equation of an ellipse whose major and minor axes coincide with the Cartesian axes is $\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1.$ This means any noncircular ellipse is a squashed circle. If we draw an ellipse twice as long as it is wide, and draw the circle centered at the ellipse's center with diameter equal to the ellipse's longer axis, then on any line parallel to the shorter axis the length within the circle is twice the length within the ellipse. So the area enclosed by an ellipse is easy to calculate—it's the lengths of elliptic arcs that are hard. #### Focus The distance from the center C to either focus is f = ae, which can be expressed in terms of the major and minor radii: $f = \sqrt{a^2-b^2}.$ #### Eccentricity The eccentricity of the ellipse (commonly denoted as either e or $\varepsilon$) is $e=\varepsilon=\sqrt{\frac{a^2-b^2}{a^2}} =\sqrt{1-\left(\frac{b}{a}\right)^2} =f/a$ (where again a and b are one-half of the ellipse's major and minor axes respectively, and f is the focal distance) or, as expressed in terms using the flattening factor $g=1-\frac {b}{a}=1-\sqrt{1-e^2},$ $e=\sqrt{g(2-g)}.$ Other formulas for the eccentricity of an ellipse are listed in the article on eccentricity of conic sections. Formulas for the eccentricity of an ellipse that is expressed in the more general quadratic form are described in the article dedicated to conic sections. #### Directrix Each focus F of the ellipse is associated with a line parallel to the minor axis called a directrix. Refer to the illustration on the right. The distance from any point P on the ellipse to the focus F is a constant fraction of that point's perpendicular distance to the directrix resulting in the equality, e=PF/PD. The ratio of these two distances is the eccentricity of the ellipse. This property (which can be proved using the Dandelin spheres) can be taken as another definition of the ellipse. Besides the well–known ratio e=f/a, it is also true that e=a/d. #### Circular directrix The ellipse can also be defined as the set of points that are equidistant from one focus and a circle, the directrix circle, that is centered on the other focus. The radius of the directrix circle equals the ellipse's major axis, so the focus and the entire ellipse are inside the directrix circle. #### Ellipse as hypotrochoid An ellipse (in red) as a special case of the hypotrochoid with R = 2r. The ellipse is a special case of the hypotrochoid when R = 2r. #### Area The area enclosed by an ellipse is πab, where a and b are one-half of the ellipse's major and minor axes respectively. If the ellipse is given by the implicit equation $A x^2+ B x y + C y^2 = 1$, then the area is $\frac{2\pi}{\sqrt{ 4 A C - B^2 }}$. #### Circumference The circumference $C$ of an ellipse is: $C = 4 a E(e)$ where again a is the length of the semi-major axis and e is the eccentricity and where the function $E$ is the complete elliptic integral of the second kind. This may be evaluated directly using the Carlson symmetric form[14] as illustrated by the following python code (this converges quadratically): ```def EllipseCircumference(a, b): """ Compute the circumference of an ellipse with semi-axes a and b. 
Require a >= 0 and b >= 0. Relative accuracy is about 0.5^53. """ import math x, y = max(a, b), min(a, b) digits = 53; tol = math.sqrt(math.pow(0.5, digits)) if digits * y < tol * x: return 4 * x s = 0; m = 1 while x - y > tol * y: x, y = 0.5 * (x + y), math.sqrt(x * y) m *= 2; s += m * math.pow(x - y, 2) return math.pi * (math.pow(a + b, 2) - s) / (x + y) ``` The exact infinite series is: $C = 2\pi a \left[{1 - \left({1\over 2}\right)^2e^2 - \left({1\cdot 3\over 2\cdot 4}\right)^2{e^4\over 3} - \left({1\cdot 3\cdot 5\over 2\cdot 4\cdot 6}\right)^2{e^6\over5} - \cdots}\right]$ or $C = 2\pi a \left[1 - \sum_{n=1}^\infty \left(\frac{(2n - 1)!!}{2^n n!}\right)^2 \frac{e^{2n}}{2n - 1}\right],$ where $n!!$ is the double factorial. Unfortunately, this series converges rather slowly; however, by expanding in terms of $h = (a-b)^2/(a+b)^2$, Bessel[15] derived an expression which converges much more rapidly, $C = \pi (a + b) \left[1 + \sum_{n=1}^\infty \left(\frac{(2n - 1)!!}{2^n n!}\right)^2 \frac{h^n}{(2n - 1)^2}\right].$ A good approximation is Ramanujan's: $C \approx \pi \left[3(a+b) - \sqrt{(3a+b)(a+3b)}\right]= \pi \left[3(a+b)-\sqrt{10ab+3(a^2+b^2)}\right]$ and a better approximation is $C\approx\pi\left(a+b\right)\left(1+\frac{3h}{10+\sqrt{4-3h}}\right).\!\,$ For the special case where the minor axis is half the major axis, these become: $C \approx \frac{\pi a (9 - \sqrt{35})}{2}$ or, as an estimate of the better approximation, $C \approx \frac{a}{2} \sqrt{93 + \frac{1}{2} \sqrt{3}}$ More generally, the arc length of a portion of the circumference, as a function of the angle subtended, is given by an incomplete elliptic integral. The inverse function, the angle subtended as a function of the arc length, is given by the elliptic functions.[citation needed] #### Chords The midpoints of a set of parallel chords of an ellipse are collinear.[16]:p.147 ### In projective geometry In projective geometry, an ellipse can be defined as the set of all points of intersection between corresponding lines of two pencils of lines which are related by a projective map. By projective duality, an ellipse can be defined also as the envelope of all lines that connect corresponding points of two lines which are related by a projective map. This definition also generates hyperbolae and parabolae. However, in projective geometry every conic section is equivalent to an ellipse. A parabola is an ellipse that is tangent to the line at infinity Ω, and the hyperbola is an ellipse that crosses Ω. An ellipse is also the result of projecting a circle, sphere, or ellipse in three dimensions onto a plane, by parallel lines. It is also the result of conical (perspective) projection of any of those geometric objects from a point O onto a plane P, provided that the plane Q that goes through O and is parallel to P does not cut the object. The image of an ellipse by any affine map is an ellipse, and so is the image of an ellipse by any projective map M such that the line M−1(Ω) does not touch or cross the ellipse. ### In analytic geometry #### General ellipse In analytic geometry, the ellipse is defined as the set of points $(X,Y)$ of the Cartesian plane that, in non-degenerate cases, satisfy the implicit equation[17][18] $~A X^2 + B X Y + C Y^2 + D X + E Y + F = 0$ provided $B^2 - 4AC < 0.$ To distinguish the degenerate cases from the non-degenerate case, let ∆ be the determinant of the 3×3 matrix [A, B/2, D/2 ; B/2, C, E/2 ; D/2, E/2, F ]: that is, ∆ = (AC - B2/4)F + BED/4 - CD2/4 - AE2/4. 
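As a small numerical illustration (a sketch assuming NumPy, not part of the reference formulas), the two invariants just introduced can be evaluated for a sample conic; the non-degeneracy criterion stated next is then read off from the signs.

```python
import numpy as np

# Sample conic: x^2/4 + y^2 = 1, i.e. A = 1/4, B = 0, C = 1, D = E = 0, F = -1.
A, B, C, D, E, F = 0.25, 0.0, 1.0, 0.0, 0.0, -1.0

disc = B**2 - 4*A*C                           # must be negative for an ellipse
M = np.array([[A,   B/2, D/2],
              [B/2, C,   E/2],
              [D/2, E/2, F  ]])
delta = np.linalg.det(M)                      # the determinant called Delta in the text
delta_expanded = (A*C - B**2/4)*F + B*E*D/4 - C*D**2/4 - A*E**2/4

print(disc, delta, delta_expanded)            # -1.0  -0.25  -0.25, and C*Delta < 0
```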
Then the ellipse is a non-degenerate real ellipse if and only if C∆<0. If C∆>0 we have an imaginary ellipse, and if ∆=0 we have a point ellipse.[19]:p.63 #### Canonical form Let $a>b$. By a proper choice of coordinate system, the ellipse can be described by the canonical implicit equation $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ Here $(x,y)$ are the point coordinates in the canonical system, whose origin is the center $(X_c,Y_c)$ of the ellipse, whose $x$-axis is the unit vector $(X_a,Y_a)$ coinciding with the major axis, and whose $y$-axis is the perpendicular vector $(-Y_a,X_a)$ coinciding with the minor axis. That is, $x = X_a(X - X_c) + Y_a(Y - Y_c)$ and $y = -Y_a(X - X_c) + X_a(Y - Y_c)$. In this system, the center is the origin $(0,0)$ and the foci are $(-e a, 0)$ and $(+e a, 0)$. Any ellipse can be obtained by rotation and translation of a canonical ellipse with the proper semi-diameters. Translation of an ellipse centered at $(X_c,Y_c)$ is expressed as $\frac{(x - X_c)^2}{a^2}+\frac{(y - Y_c)^2}{b^2}=1$ Moreover, any canonical ellipse can be obtained by scaling the unit circle of $\reals^2$, defined by the equation $X^2+Y^2=1\,$ by factors a and b along the two axes. For an ellipse in canonical form, we have $Y = \pm b\sqrt{1 - (X/a)^2} = \pm \sqrt{(a^2-X^2)(1 - e^2)}$ The distances from a point $(X,Y)$ on the ellipse to the left and right foci are $a + e X$ and $a - e X$, respectively. ### In trigonometry #### General parametric form An ellipse in general position can be expressed parametrically as the path of a point $(X(t),Y(t))$, where $X(t)=X_c + a\,\cos t\,\cos \varphi - b\,\sin t\,\sin\varphi$ $Y(t)=Y_c + a\,\cos t\,\sin \varphi + b\,\sin t\,\cos\varphi$ as the parameter t varies from 0 to 2π. Here $(X_c,Y_c)$ is the center of the ellipse, and $\varphi$ is the angle between the $X$-axis and the major axis of the ellipse. #### Parametric form in canonical position Parametric equation for the ellipse (red) in canonical position. The eccentric anomaly t is the angle of the blue line with the X-axis. Click on image to see animation. For an ellipse in canonical position (center at origin, major axis along the X-axis), the equation simplifies to $X(t)=a\,\cos t$ $Y(t)=b\,\sin t$ Note that the parameter t (called the eccentric anomaly in astronomy) is not the angle of $(X(t),Y(t))$ with the X-axis. Formulæ connecting a tangential angle $\phi$, the angle anchored at the ellipse's center $\phi^\prime$ (called also the polar angle from the ellipse center), and the parametric angle t[20] are:[21][22][23] $\tan \phi=\frac {a}{b} \tan t=\frac {\tan \phi'}{(1-g)^2}=\frac {\tan \phi'}{1-e^2}$ $\tan t=\frac {b}{a} \tan \phi=\sqrt{(1-e^2)} \tan \phi=(1-g) \tan \phi=\frac {\tan \phi'}{\sqrt{(1-e^2)}}=\frac {a}{b} \tan \phi'$ #### Polar form relative to center Polar coordinates centered at the center. In polar coordinates, with the origin at the center of the ellipse and with the angular coordinate $\theta$ measured from the major axis, the ellipse's equation is $r(\theta)=\frac{ab}{\sqrt{(b \cos \theta)^2 + (a\sin \theta)^2}}$ #### Polar form relative to focus Polar coordinates centered at focus. 
If instead we use polar coordinates with the origin at one focus, with the angular coordinate $\theta = 0$ still measured from the major axis, the ellipse's equation is $r(\theta)=\frac{a (1-e^{2})}{1 \pm e\cos\theta}$ where the sign in the denominator is negative if the reference direction $\theta = 0$ points towards the center (as illustrated on the right), and positive if that direction points away from the center. In the slightly more general case of an ellipse with one focus at the origin and the other focus at angular coordinate $\phi$, the polar form is $r=\frac{a (1-e^{2})}{1 - e\cos(\theta - \phi)}.$ The angle $\theta$ in these formulas is called the true anomaly of the point. The numerator $a (1-e^{2})$ of these formulas is the semi-latus rectum of the ellipse, usually denoted $l$. It is the distance from a focus of the ellipse to the ellipse itself, measured along a line perpendicular to the major axis. Semi-latus rectum. #### General polar form The following equation on the polar coordinates (r, θ) describes a general ellipse with semidiameters a and b, centered at a point (r0, θ0), with the a axis rotated by φ relative to the polar axis:[citation needed] $r(\theta )=\frac{P(\theta )+Q(\theta )}{R(\theta )}$ where $P(\theta )=r_0 \left[\left(b^2-a^2\right) \cos \left(\theta +\theta _0-2 \varphi \right)+\left(a^2+b^2\right) \cos \left(\theta -\theta_0\right)\right]$ $Q(\theta )=\sqrt{2} a b \sqrt{R(\theta )-2 r_0^2 \sin ^2\left(\theta -\theta_0\right)}$ $R(\theta )=\left(b^2-a^2\right) \cos (2 \theta -2 \varphi )+a^2+b^2$ #### Angular eccentricity The angular eccentricity $\alpha$ is the angle whose sine is the eccentricity e; that is, $\alpha=\sin^{-1}(e)=\cos^{-1}\left(\frac{b}{a}\right)=2\tan^{-1}\left(\sqrt{\frac{a-b}{a+b}}\right);\,\!$ ### Degrees of freedom An ellipse in the plane has five degrees of freedom (the same as a general conic section), defining its position, orientation, shape, and scale. In comparison, circles have only three degrees of freedom (position and scale), while parabolae have four. Said another way, the set of all ellipses in the plane, with any natural metric (such as the Hausdorff distance) is a five-dimensional manifold. These degrees can be identified with, for example, the coefficients A,B,C,D,E of the implicit equation, or with the coefficients Xc, Yc, φ, a, b of the general parametric form. ## Ellipses in physics ### Elliptical reflectors and acoustics If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves created by that disturbance, after being reflected by the walls, will converge simultaneously to a single point — the second focus. This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci. Similarly, if a light source is placed at one focus of an elliptic mirror, all light rays on the plane of the ellipse are reflected to the second focus. Since no other smooth curve has such a property, it can be used as an alternative definition of an ellipse. (In the special case of a circle with a source at its center all light would be reflected back to the center.) If the ellipse is rotated along its major axis to produce an ellipsoidal mirror (specifically, a prolate spheroid), this property will hold for all rays out of the source. Alternatively, a cylindrical mirror with elliptical cross-section can be used to focus light from a linear fluorescent lamp along a line of the paper; such mirrors are used in some document scanners. 
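A short numerical check of this reflection property (a sketch with arbitrarily chosen semi-axes): a ray leaving one focus and reflected at any point of the ellipse, using the usual mirror formula $v = u - 2(u\cdot n)n$, heads straight for the other focus.

```python
import math

a, b = 5.0, 3.0                                   # arbitrary semi-axes
c = math.sqrt(a * a - b * b)
F1, F2 = (-c, 0.0), (c, 0.0)                      # the two foci

def unit(v):
    m = math.hypot(v[0], v[1])
    return (v[0] / m, v[1] / m)

for k in range(24):
    t = 2 * math.pi * (k + 0.5) / 24
    P = (a * math.cos(t), b * math.sin(t))        # point on the ellipse
    u = unit((P[0] - F1[0], P[1] - F1[1]))        # incoming ray direction F1 -> P
    n = unit((P[0] / a**2, P[1] / b**2))          # normal to the ellipse at P
    d = u[0] * n[0] + u[1] * n[1]
    v = (u[0] - 2 * d * n[0], u[1] - 2 * d * n[1])   # reflected direction
    w = unit((F2[0] - P[0], F2[1] - P[1]))        # direction from P to the second focus
    assert abs(v[0] - w[0]) < 1e-9 and abs(v[1] - w[1]) < 1e-9
print("every reflected ray passes through the second focus")
```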
Sound waves are reflected in a similar way, so in a large elliptical room a person standing at one focus can hear a person standing at the other focus remarkably well. The effect is even more evident under a vaulted roof shaped as a section of a prolate spheroid. Such a room is called a whisper chamber. The same effect can be demonstrated with two reflectors shaped like the end caps of such a spheroid, placed facing each other at the proper distance. Examples are the National Statuary Hall at the United States Capitol (where John Quincy Adams is said to have used this property for eavesdropping on political matters); the Mormon Tabernacle at Temple Square in Salt Lake City, Utah; at an exhibit on sound at the Museum of Science and Industry in Chicago; in front of the University of Illinois at Urbana-Champaign Foellinger Auditorium; and also at a side chamber of the Palace of Charles V, in the Alhambra. ### Planetary orbits Main article: Elliptic orbit In the 17th century, Johannes Kepler discovered that the orbits along which the planets travel around the Sun are ellipses with the Sun at one focus, in his first law of planetary motion. Later, Isaac Newton explained this as a corollary of his law of universal gravitation. More generally, in the gravitational two-body problem, if the two bodies are bound to each other (i.e., the total energy is negative), their orbits are similar ellipses with the common barycenter being one of the foci of each ellipse. The other focus of either ellipse has no known physical significance. Interestingly, the orbit of either body in the reference frame of the other is also an ellipse, with the other body at one focus. Keplerian elliptical orbits are the result of any radially directed attraction force whose strength is inversely proportional to the square of the distance. Thus, in principle, the motion of two oppositely charged particles in empty space would also be an ellipse. (However, this conclusion ignores losses due to electromagnetic radiation and quantum effects which become significant when the particles are moving at high speed.) For elliptical orbits, useful relations involving the eccentricity $e$ are: $e=\frac{r_{a}-r_{p}}{r_{a}+r_{p}}=\frac{r_{a}-r_{p}}{2a}$ $r_{a}=(1+e)a$ $r_{p}=(1-e)a$ where • $r_{a}$ is the radius at apoapsis (the farthest distance) • $r_{p}$ is the radius at periapsis (the closest distance) • $a$ is the length of the semi-major axis Also, in terms of $r_{a}$ and $r_{p}$, the semi-major axis $a$ is their arithmetic mean, the semi-minor axis $b$ is their geometric mean, and the semi-latus rectum $l$ is their harmonic mean. In other words, $a=\frac{r_{a}+r_{p}}{2}$ $b=\sqrt[2]{r_{a}\cdot r_{p}}$ $l=\frac{2}{\frac{1}{r_{a}}+\frac{1}{r_{p}}}=\frac{2r_{a}r_{p}}{r_{a}+r_{p}}$. ### Harmonic oscillators The general solution for a harmonic oscillator in two or more dimensions is also an ellipse. Such is the case, for instance, of a long pendulum that is free to move in two dimensions; of a mass attached to a fixed point by a perfectly elastic spring; or of any object that moves under influence of an attractive force that is directly proportional to its distance from a fixed attractor. Unlike Keplerian orbits, however, these "harmonic orbits" have the center of attraction at the geometric center of the ellipse, and have fairly simple equations of motion. ### Phase visualization In electronics, the relative phase of two sinusoidal signals can be compared by feeding them to the vertical and horizontal inputs of an oscilloscope. 
If the display is an ellipse, rather than a straight line, the two signals are out of phase. ### Elliptical gears Two non-circular gears with the same elliptical outline, each pivoting around one focus and positioned at the proper angle, will turn smoothly while maintaining contact at all times. Alternatively, they can be connected by a link chain or timing belt, or in the case of a bicycle the main chainring may be elliptical, or an ovoid similar to an ellipse in form. Such elliptical gears may be used in mechanical equipment to produce variable angular speed or torque from a constant rotation of the driving axle, or in the case of a bicycle to allow a varying crank rotation speed with inversely varying mechanical advantage. Elliptical bicycle gears make it easier for the chain to slide off the cog when changing gears.[24] An example gear application would be a device that winds thread onto a conical bobbin on a spinning machine. The bobbin would need to wind faster when the thread is near the apex than when it is near the base.[25] ### Optics In a material that is optically anisotropic (birefringent), the refractive index depends on the direction of the light. The dependency can be described by an index ellipsoid. (If the material is optically isotropic, this ellipsoid is a sphere.) ## Ellipses in statistics and finance In statistics, a bivariate random vector (X, Y) is jointly elliptically distributed if its iso-density contours — loci of equal values of the density function — are ellipses. The concept extends to an arbitrary number of elements of the random vector, in which case in general the iso-density contours are ellipsoids. A special case is the multivariate normal distribution. The elliptical distributions are important in finance because if rates of return on assets are jointly elliptically distributed then all portfolios can be characterized completely by their mean and variance — that is, any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return.[26][27] ## Ellipses in computer graphics Drawing an ellipse as a graphics primitive is common in standard display libraries, such as the MacIntosh QuickDraw API, and Direct2D on Windows. Jack Bresenham at IBM is most famous for the invention of 2D drawing primitives, including line and circle drawing, using only fast integer operations such as addition and branch on carry bit. M. L. V. Pitteway extended Bresenham's algorithm for lines to conics in 1967.[28] Another efficient generalization to draw ellipses was invented in 1984 by Jerry Van Aken.[29] In 1970 Danny Cohen presented at the "Computer Graphics 1970" conference in England a linear algorithm for drawing ellipses and circles. In 1971, L. B. Smith published similar algorithms for all conic sections and proved them to have good properties.[30] These algorithms need only a few multiplications and additions to calculate each vector. It is beneficial to use a parametric formulation in computer graphics because the density of points is greatest where there is the most curvature. Thus, the change in slope between each successive point is small, reducing the apparent "jaggedness" of the approximation. ### Drawing with Bezier spline paths Multiple Bezier splines may also be used to draw an ellipse to sufficient accuracy, since any ellipse may be construed as an affine transformation of a circle. 
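The following sketch (assuming NumPy) illustrates the point just made: an ellipse in general position is obtained by applying a scaling, a rotation and a translation to points of the unit circle, matching the general parametric form given earlier.

```python
import numpy as np

a, b = 4.0, 2.0                                      # semi-axes (arbitrary)
phi = np.deg2rad(30.0)                               # rotation of the major axis
center = np.array([1.0, -2.0])

t = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)])            # unit circle, shape (2, 64)
aff = np.array([[np.cos(phi), -np.sin(phi)],
                [np.sin(phi),  np.cos(phi)]]) @ np.diag([a, b])
ellipse = (aff @ circle).T + center                  # affine image of the circle

# Undoing the affine map recovers points with x^2 + y^2 = 1.
back = np.linalg.inv(aff) @ (ellipse - center).T
print(np.allclose((back**2).sum(axis=0), 1.0))       # True
```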
The spline methods used to draw a circle may be used to draw an ellipse, since the constituent Bezier curves will behave appropriately under such transformations. ## Line segment as a type of degenerate ellipse A line segment is a degenerate ellipse with semi-minor axis = 0 and eccentricity = 1, and with the focal points at the ends.[31] Although the eccentricity is 1 this is not a parabola. A radial elliptic trajectory is a non-trivial special case of an elliptic orbit, where the ellipse is a line segment. ## Ellipses in optimization theory It is sometimes useful to find the minimum bounding ellipse on a set of points. The ellipsoid method is quite useful for attacking this problem. ## See also • Apollonius of Perga, the classical authority • Cartesian oval, a generalization of the ellipse • Circumconic and inconic • Conic section • Ellipsoid, a higher dimensional analog of an ellipse • Elliptic coordinates, an orthogonal coordinate system based on families of ellipses and hyperbolae • Elliptical distribution, in statistics • Elliptic partial differential equation • Great ellipse • Hyperbola • Kepler's laws of planetary motion • Matrix representation of conic sections • n-ellipse, a generalization of the ellipse for n foci • Oval • Parabola • Proofs involving the ellipse • Spheroid, the ellipsoid obtained by rotating an ellipse about its major or minor axis • Steiner circumellipse, the unique ellipse circumscribing a triangle and sharing its centroid • Steiner inellipse, the unique ellipse inscribed in a triangle with tangencies at the sides' midpoints • Superellipse, a generalization of an ellipse that can look more rectangular or more "pointy" • True, eccentric, and mean anomaly ## References • Besant, W.H. (1907). "Chapter III. The Ellipse". Conic Sections. London: George Bell and Sons. p. 50. • Miller, Charles D.; Lial, Margaret L.; Schneider, David I. (1990). Fundamentals of College Algebra (3rd ed.). Scott Foresman/Little. p. 381. ISBN 0-673-38638-4. • Coxeter, H.S.M. (1969). Introduction to Geometry (2nd ed.). New York: Wiley. pp. 115–9. • Ellipse at Planetmath ## Notes 1. Haswell, Charles Haynes (1920). Mechanics' and Engineers' Pocket-book of Tables, Rules, and Formulas. Harper & Brothers. 2. Herschel, Sir John Frederick William (1842). A treatise on astronomy. Lea & Blanchard. p. 256. 3. Lankford, John (1997). History of Astronomy: An Encyclopedia. Taylor & Francis. p. 194. ISBN 978-0-8153-0322-0. 4. Prasolov, Viktor Vasilʹevich; Tikhomirov, Vladimir Mikhaĭlovich (2001). Geometry. American Mathematical Society. p. 80. ISBN 978-0-8218-2038-4. 5. Fenna, Donald (2007). Cartographic Science: A Compendium of Map Projections, With Derivations. CRC Press. p. 24. ISBN 978-0-8493-8169-0. 6. 7. Salomon, David (2006). Curves And Surfaces for Computer Graphics. Birkhäuser. p. 365. ISBN 978-0-387-24196-8. 8. Kreith, Frank; Goswami, D. Yogi (2005). The CRC Handbook Of Mechanical Engineering. CRC Press. pp. 11–8. ISBN 978-0-8493-0866-6. "Circles and Ellipses (11.3.2)" 9. Gibson, C. G. (2001), Elementary Geometry of Differentiable Curves: An Undergraduate Introduction, Cambridge University Press, p. 127, ISBN 9780521011075 . 10. Armengaud, Aîné (1853). "Ovals, Ellipses, Parabolas, Volutes, etc. §53". The Practical Draughtsman's Book of Industrial Design. Longman, Brown, Green, and Longmans. p. 16. 11. 12. Carlson, B. C. (1995). "Computation of real or complex elliptic integrals". Numerical Algorithms 10: 13–26. arXiv:math/9409227. doi:10.1007/BF02198293. 13. Bessel, F. W. (2010). 
"The calculation of longitude and latitude from geodesic measurements (1825)". Astron. Nachr. 331 (8): 852–861. arXiv:0908.1824. doi:10.1002/asna.201011352. English translation of Astron. Nachr. 4, 241–254 (1825). 14. Chakerian, G. D. "A Distorted View of Geometry." Ch. 7 in Mathematical Plums (R. Honsberger, editor). Washington, DC: Mathematical Association of America, 1979. 15. Larson, Ron; Hostetler, Robert P.; Falvo, David C. (2006). "Chapter 10". Precalculus with Limits. Cengage Learning. p. 767. ISBN 0-618-66089-5. 16. Young, Cynthia Y. (2010). "Chapter 9". Precalculus. John Wiley and Sons. p. 831. ISBN 0-471-75684-9. 17. Lawrence, J. Dennis, A Catalog of Special Plane Curves, Dover Publ., 1972. 18. If the ellipse is illustrated as a meridional one for the earth, the tangential angle is equal to geodetic latitude, the angle $\phi'$ is the geocentric latitude, and parametric angle t is a parametric (or reduced) latitude of auxiliary circle 19. Meeus, J. (1991). "Ch. 10: The Earth's Globe". Astronomical Algorithms. Willmann-Bell. p. 78. ISBN 0-943396-35-2. 20. 21. Chamberlain, G. (February 1983). "A characterization of the distributions that imply mean—Variance utility functions". 29 (1): 185–201. doi:10.1016/0022-0531(83)90129-1. 22. Owen, J.; Rabinovitch, R. (June 1983). "On the class of elliptical distributions and their applications to the theory of portfolio choice". 38: 745–752. JSTOR 2328079. 23. Pitteway, M.L.V. (1967). "Algorithm for drawing ellipses or hyperbolae with a digital plotter". The Computer Journal 10 (3): 282–9. doi:10.1093/comjnl/10.3.282. 24. Van Aken, J.R. (September 1984). "An Efficient Ellipse-Drawing Algorithm". IEEE Computer Graphics and Applications 4 (9): 24–35. doi:10.1109/MCG.1984.275994. 25. Smith, L.B. (1971). "Drawing ellipses, hyperbolae or parabolae with a fixed number of points". The Computer Journal 14 (1): 81–86. doi:10.1093/comjnl/14.1.81. 26. Seligman, Courtney (1993–2010). "Orbital Motions Ellipses and Other Conic Sections". Online Astronomy eText.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 89, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8804581761360168, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/155561-convex-quadrangle-abcd-midpoints-its-sides-diagonals.html
# Thread: 1. ## Convex quadrangle ABCD, midpoints of its sides, diagonals In a convex quadrangle ABCD, points M and N are respectively the midpoints of sides AB and CD. E is the point of intersection of the quadrangle's diagonals. Prove that the line containing the bisector of $\angle BEC$ is perpendicular to the line MN only if $AC=BD.$ 2. I find it funny how this is the exact same question except with M, N and X, Y switched. http://www.mathhelpforum.com/math-he...al-155234.html (My reply to this question remains the same as the other one.)
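Not a proof, of course, but a quick numerical sanity check of the statement (a hedged sketch: one concrete convex quadrangle is built so that the diagonals have equal length, with otherwise arbitrary coordinates):

```python
import math

theta = 0.7                          # angle between the diagonals (arbitrary)
A, C = (0.0, 0.0), (4.0, 0.0)        # diagonal AC of length 4
E = (1.0, 0.0)                       # intersection point of the diagonals, chosen on AC
B = (E[0] + 1.5 * math.cos(theta), E[1] + 1.5 * math.sin(theta))
D = (E[0] - 2.5 * math.cos(theta), E[1] - 2.5 * math.sin(theta))   # so |BD| = 4 = |AC|

def unit(v):
    m = math.hypot(v[0], v[1])
    return (v[0] / m, v[1] / m)

eb, ec = unit((B[0] - E[0], B[1] - E[1])), unit((C[0] - E[0], C[1] - E[1]))
bisector = (eb[0] + ec[0], eb[1] + ec[1])          # direction of the bisector of angle BEC
M = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)         # midpoint of AB
N = ((C[0] + D[0]) / 2, (C[1] + D[1]) / 2)         # midpoint of CD
MN = (N[0] - M[0], N[1] - M[1])
print(bisector[0] * MN[0] + bisector[1] * MN[1])   # ~0: the two lines are perpendicular
```

Changing the lengths so that $AC \ne BD$ makes the printed dot product move away from zero, which is consistent with the claim.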
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8959559798240662, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/69346-integration-problem-calculus-homework.html
# Thread: 1. ## Integration Problem for Calculus Homework I'm having trouble with the following problem from my homework for a Calc. II course: The velocity v of the flow of blood at a distance r from the central axis of an artery of radius R is, $v = k(R^2 - r^2)$ where k is the constant of proportionality. Find the average rate of flow of blood along a radius of the artery. (Use 0 and R as the limits of integration.) Any help would be really appreciated, thanks. EDIT: I should probably clarify: I know how to use integration in order to find the average value of a function over a range. It's integrating the function that I'm stuck on. 2. Originally Posted by justaguy I'm having trouble with the following problem from my homework for a Calc. II course: Any help would be really appreciated, thanks. EDIT: I should probably clarify: I know how to use integration in order to find the average value of a function over a range. It's integrating the function that I'm stuck on. what problems are you having? the integral is: $\frac 1R \int_0^R k(R^2 - r^2)~dr$ it is a matter of using the power rule for integrals, nothing too fancy. remember, R and k are constants, r is the variable 3. Originally Posted by Jhevon what problems are you having? the integral is: $\frac 1R \int_0^R k(R^2 - r^2)~dr$ it is a matter of using the power rule for integrals, nothing too fancy. remember, R and k are constants, r is the variable . . . I was assuming R was a variable. This makes more sense now. Thanks.
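For completeness, a short symbolic check of the average-value integral above (a sketch assuming SymPy is available; $k$ and $R$ are treated as positive constants):

```python
import sympy as sp

r, R, k = sp.symbols('r R k', positive=True)
average = sp.integrate(k * (R**2 - r**2), (r, 0, R)) / R   # (1/R) * integral from 0 to R
print(sp.simplify(average))                                # 2*k*R**2/3
```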
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951659083366394, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/54372-locus-points.html
# Thread: 1. ## Locus of points Q: Describe the locus of points that satisfy the equation: $r \wedge a = b$ where $a = (1,1,0)$ and $b = (1, - 1,0)$. My thinking was that the locus is either a line or a plane, where b is perpendicular to a, but I'm not entirely sure which one it is. Help would be loved. 2. Originally Posted by free_to_fly Q: Describe the locus of points that satisfy the equation: $r \wedge a = b$ where $a = (1,1,0)$ and $b = (1, - 1,0)$. My thinking was that the locus is either a line or a plane, where b is perpendicular to a, but I'm not entirely sure which one it is. Help would be loved. There are probably better ways than this: Let $r = <\alpha, \, \beta, \, \gamma>$. Then $r \wedge a = <-\gamma, \, \gamma, \, (\alpha - \beta)>$. Equating this with $b = (1,-1,0)$ gives $\gamma = -1$ and $\alpha = \beta$. Therefore $r = <\alpha, \, \alpha, \, -1> = <0, \, 0, \, -1> + \alpha <1, 1, 0>$ for all real values of $\alpha$. This is the vector equation of a line.
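A quick numerical check of this answer, reading the wedge as the ordinary cross product in $\mathbb{R}^3$ (a sketch assuming NumPy is available):

```python
import numpy as np

a = np.array([1.0, 1.0, 0.0])
b = np.array([1.0, -1.0, 0.0])
for alpha in (-2.0, 0.0, 0.5, 3.0):
    r = np.array([0.0, 0.0, -1.0]) + alpha * np.array([1.0, 1.0, 0.0])
    assert np.allclose(np.cross(r, a), b)        # every point of the line satisfies r x a = b
print("r = (0,0,-1) + alpha*(1,1,0) solves r x a = b for all tested alpha")
```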
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9352591037750244, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/119232-derangement-question.html
# Thread: 1. ## derangement question A brand new standard deck of 52 playing cards comes with the cards arranged in a particular order. Before using them, the cards are shuffled. What is the probability that exactly 10 of the cards will remain in their original position? 2. Clearly it's equal to the number of possible derangements of the remaining 42 cards, divided by $52!$. The number of derangements of $n$ objects is the integer closest to $n!/e$, in this case $516872865441387143899655590953107384791458747688561$. So the probability is $\frac{516872865441387143899655590953107384791458747688561}{80658175170943878571660636856403766975289505440883277824000000000000}$. 3. Originally Posted by Bruno J. Clearly it's equal to the number of possible derangements of the remaining 42 cards, divided by $52!$. The number of derangements of $n$ objects is the integer closest to $n!/e$, in this case $516872865441387143899655590953107384791458747688561$. So the probability is $\frac{516872865441387143899655590953107384791458747688561}{80658175170943878571660636856403766975289505440883277824000000000000}$. Don't forget to take into account the possible choices of the 10 cards that remain in their original position. 4. Oh yeah, where is my head! That'll teach me to try helping people at 2:00AM.
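Putting the replies together (first choose which 10 cards stay fixed, then derange the remaining 42), a short script for the corrected probability might look like this (a sketch, not from the original thread):

```python
from math import comb, factorial
from fractions import Fraction

def derangements(n):
    # subfactorial via the recurrence D(n) = (n - 1) * (D(n-1) + D(n-2))
    d_prev, d_curr = 1, 0            # D(0), D(1)
    for i in range(2, n + 1):
        d_prev, d_curr = d_curr, (i - 1) * (d_prev + d_curr)
    return d_curr if n >= 1 else 1

p = Fraction(comb(52, 10) * derangements(42), factorial(52))
print(float(p))                      # about 1.014e-07, i.e. roughly 1/(10! * e)
```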
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9412326216697693, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/4157-help-newton.html
# Thread: 1. ## help with Newton Is there some one who would get a kick out of explaining Newton’s method to me? I've read some different things about it but I didn’t understand the calculus Is it possible to explain it without using calculus? dan 2. What seems to be the trouble?. Newton's Method is one of the easiest things in calculus. It's just a matter of iterations. Let's give an example: Use Newton to find the 'real' solutions of $x^{3}-x-1=0$ Let $f(x)=x^{3}-x-1\;\ and\;\ f'(x)=3x^{2}-1$ Graph the function, $x^{3}-x-1$ You can see that y=0 when x is between 1 and 2. Make something in that region your initial guess. Try 1.5 $x_{2}=1.5{-}\frac{(1.5)^{3}-1.5-1}{3(1.5)^{2}-1}=1.34782609$ Now use the result you get from this and sub back into the equation: What is wrong with this website that I keep getting these errors saying the image is too big?. I've never seen this on another site. The code is fine as far as I can tell.. $x_{3}=1.34782609-$ $\frac{(1.34782609)^{3}-(1.34782609)-1}{3(1.34782609)^{2}-1}=1.32520040$ Continue in this fashion until you arrive at approximations which are so close they are virtually unchanged. We end up with: $x_{1}=1.5\;\ x^{2}=1.34782609\;\ x^{3}$ $=1.32520040\;\ x^{4}=$ $1.32471817\;\ x^{5}=1.32471796\;\ x^{6}=1.32471796$ See, the last two are the same. No need to continue. . The solution is $\approx{1.32471796}$ 3. Originally Posted by galactus What is wrong with this website that I keep getting these errors saying the image is too big?. I've never seen this on another site. The code is fine as far as I can tell.. The TeX system seems to not like long (not that long) strings of TeX. Placing pairs of [ /math] and [ math] (without extra spaces) s seems to fix the problems usually (as I have done in your post). RonL 4. Thank you Cap'N. I will keep that in mind. I have not run into that problem before, regardless of the length of the string. Thanks for the fix up. 5. ## help with newton Ok, What seems to be the problem is...I know practically no calculus and my dad wont let me take a course until I finish my college algebra book so... I don't even know what f'(x) means and how you get there from f(x) Dan 6. Originally Posted by dan Ok, What seems to be the problem is...I know practically no calculus and my dad wont let me take a course until I finish my college algebra book so... I don't even know what f'(x) means and how you get there from f(x) Dan I would be more strict and tell you to study "Pre-calculus" however they call the course. 7. Originally Posted by dan Ok, What seems to be the problem is...I know practically no calculus and my dad wont let me take a course until I finish my college algebra book so... I don't even know what f'(x) means and how you get there from f(x) Dan Poor Dan. I didn't realize that. Math is done in steps; You need to finish your algebra or pre-caculus and then go to Calc I. BTW, f'(x) is the derivative of a function f(x). What brought up Newton anyway?. You need to learn differentiation before you can use Newton's Method. Do a Google search. Good luck. How will you explain the working of Newton's Method?I can't figure out why it works!! Keep Smiling Malay 9. Sorry, dan! I don't even know what $f'(x)$ means . . . You're saying, "Can someone explain a $B\flat\text{ minor}$ chord without using Music Theory? . . You see, I don't read music." Answer: $No.$ 10. Most any calc book will explain how it works. It's not that complicated. Off the top of my head. 
The solutions of f(x)=0 are the values of x where the graph crosses the x-axis. Suppose that x=c is some solution we are looking for. Even if we can't find c exactly, it is usually possible to approximate it by graphing and using the Intermediate Value Theorem. If we let, say, $x_{1}$ be our initial approximation, then we can improve it by moving along the tangent line to y=f(x) at $x_{1}$ until it meets the x-axis at a point $x_{2}$. Repeat. One thing we have to do is derive some sort of formula so we can use ol' Newton. The tangent line at $x_{1}$ is $y-f(x_{1})=f'(x_{1})(x-x_{1})$. If $f'(x_{1})\neq{0}$, then this line is not parallel to the x-axis and crosses it at some point $(x_{2},0)$. Sub this into our point-slope form: $-f(x_{1})=f'(x_{1})(x_{2}-x_{1})$. Solve for $x_{2}$: $x_{2}=x_{1}-\frac{f(x_{1})}{f'(x_{1})}$. Continuing in this way, the general iteration is $x_{n+1}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}$, n=1,2,3,...
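A few lines of Python reproduce the iteration worked out above for $x^{3}-x-1$ starting from $x_{1}=1.5$:

```python
def f(x):
    return x**3 - x - 1

def fprime(x):
    return 3 * x**2 - 1

x = 1.5
for _ in range(6):
    x = x - f(x) / fprime(x)         # Newton step
    print(f"{x:.8f}")                # 1.34782609, 1.32520040, ..., 1.32471796
```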
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9543519616127014, "perplexity_flag": "middle"}
http://cms.math.ca/10.4153/CJM-2011-058-x
# Pointed Torsors Read article [PDF: 203KB] http://dx.doi.org/10.4153/CJM-2011-058-x Canad. J. Math. 63 (2011), 1345-1363. Published: 2011-09-15. Printed: Dec 2011. • J. F. Jardine, Mathematics Department, University of Western Ontario, London, ON N6A 5B7 ## Abstract This paper gives a characterization of homotopy fibres of inverse image maps on groupoids of torsors that are induced by geometric morphisms, in terms of both pointed torsors and pointed cocycles, suitably defined. Cocycle techniques are used to give a complete description of such fibres, when the underlying geometric morphism is the canonical stalk on the classifying topos of a profinite group $G$. If the torsors in question are defined with respect to a constant group $H$, then the path components of the fibre can be identified with the set of continuous maps from the profinite group $G$ to the group $H$. More generally, when $H$ is not constant, this set of path components is the set of continuous maps from a pro-object in sheaves of groupoids to $H$, which pro-object can be viewed as a "Grothendieck fundamental groupoid". Keywords: pointed torsors, pointed cocycles, homotopy fibres MSC Classifications: 18G50 - Nonabelian homological algebra; 14F35 - Homotopy theory; fundamental groups [See also 14H30]; 55B30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8185267448425293, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/87827?sort=oldest
## Sampling uniformly from a sphere ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $B^{n} _p=${$(x_1, \dots, x_n) : |x_1|^p + \dots |x_n|^p = 1$} be the unit ball in $\mathbb{R}^n$ in the $\ell^p$ norm. If $X_1,\dots,X_n$ are iid $\exp(1)$ -distributed random variables, then $(X_1/D,\dots,X_n/D)$, where $D =X_1+ \dots + X_n$ is uniformly distributed in $B^{n}_1$. If $X_1,\dots,X_n$ are iid normally distributed with mean 0, then $(X_1/D,\dots,X_n/D)$, where $D = (X_1^2+\dots+X_n^2)^{1/2}$, is uniformly distributed in $B^{n}_2$. Is there a choice of $X_1,\dots , X_n$ iid such that $( X_1 / D, \dots, X_n/D)$, where $D = (|X_1|^p + \dots + |X_n|^p)^{1/p}$ is uniformly distributed in $B^{n} _p$ for arbitrary $p$? I would be happy with any sensible common generalization of the two statements above. I have no particular reason to believe there is such a generalization - I'm just hoping that two so similar and neat examples have similarly nice generalizations. - you are probably looking for: mathoverflow.net/questions/9185/… – S. Sra Feb 7 2012 at 21:16 1 so it seems that you are looking for uniform distribution on the surface of an $\ell_p$ ball (not in the ball). – S. Sra Feb 7 2012 at 22:10 I think you intend to normalize by $D^{1/p}$ instead of $D$, if I'm not mistaken. Also, $B_1^n$ is not the standard simplex. To generate uniformly on the $\ell_1$ ball you need to do something like multiply each coordinate $X_i$ by iid random variables $\epsilon_i$ uniform on {−1,+1}. – cardinal Feb 8 2012 at 3:22 ## 2 Answers The result you want, I think, is in Stationarity, Isotropy and Sphericity in $l_p^*$. It is behind a pay-wall, but the form of the distribution is stated in the abstract. - This paper fails to do the job since the OP wants independent $X_i$. – Mark Meckes Feb 8 2012 at 17:28 Sure, I misread the question as asking what form of iid distribution leads to particular forms of exchangeability via de Finetti representation theorems. This is the context I'm most used to seeing the normal distribution referred to a "spherically symmetric". The paper I linked to is the natural extension of that idea to the $l_p$ setting. – R Hahn Feb 8 2012 at 17:58 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. If by uniform measure you mean $(n-1)$-dimensional Hausdorff measure on the sphere, the answer is no. As a consequence of the results of this paper by Barthe, Csörnyei, and Naor, under mild regularity assumptions the only measure on the boundary of any convex body which can be generated in this way is the "cone measure" on the $\ell_p$ sphere for $1 \le p < \infty$, which coincides with uniform measure only for $p=1,2$. -
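As a concrete illustration of the $p=2$ statement in the question (and only that case; for general $p$ the analogous normalization produces the cone measure mentioned in the second answer, not surface measure), here is a short NumPy sketch generating points uniformly on the Euclidean unit sphere:

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_on_sphere(n, size=1):
    """Rows are points uniformly distributed on the unit sphere in R^n."""
    x = rng.standard_normal((size, n))               # iid N(0,1) coordinates
    return x / np.linalg.norm(x, axis=1, keepdims=True)

pts = uniform_on_sphere(3, size=5)
print(np.linalg.norm(pts, axis=1))                   # all 1.0: the points lie on the sphere
```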
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9318598508834839, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/174426/an-infinite-series-involving-odd-zeta/174463
an infinite series involving odd zeta I ran across a cool series I have been trying to chip away at. $$\sum_{k=1}^{\infty}\frac{\zeta(2k+1)-1}{k+2}=\frac{-\gamma}{2}-6\ln(A)+\ln(2)+\frac{7}{6}\approx 0.0786\ldots$$ where A = the Glaisher-Kinkelin constant. Numerically, it is approx. $1.282427\ldots$ I began by writing zeta as a sum and switching the summation order $$\sum_{n=2}^\infty \sum_{k=1}^\infty \frac{1}{(k+2)n^{2k+1}}$$ The first sum is the series for $-n^3\ln(1-\frac{1}{n^2})-n-\frac{1}{2n}$ So, we have $-\sum_{n=2}^\infty \left[\ln(1-\frac{1}{n^2})+n+\frac{1}{2n}\right]$ This series numerically checks out, so I am onto something. At first glance the series looks like it should diverge, but it does converge. Another idea I had was to write out the series of the series: $$1/3(1/2)^{3}+1/4(1/2)^{5}+1/5(1/2)^{7}+\cdots +1/3(1/3)^{3}+1/4(1/3)^{5}+1/5(1/3)^{7}+\cdots +1/3(1/4)^{3}+1/4(1/4)^{5}+1/5(1/4)^{7}+\cdots$$ and so on. This can be written as $$1/3x^{3}+1/4x^{5}+1/5x^{7}+\cdots +1/3x^{3}+1/4x^{5}+1/5x^{7}+\cdots + 1/3x^{3}+1/4x^{5}+1/5x^{7}+\cdots$$ where $x=1/2,1/3,1/4,\ldots$ This leads to the series representation for: $$\frac{-\ln(1-x^2)}{x^3}-\frac{1}{x}-\frac{x}{2}$$ Since $x$ is of the form $1/n$, we end up with the same series as before. Now, my quandary. How to finish?. Where in the world does that Glaisher-Kinkelin constant come in, and how can that nice closed from be obtained?. Whether from the series I have above or some other means. As usual, it is probably something I should be seeing but don't at the moment. The GK constant has a closed form of $$e^{\frac{1}{12}-\zeta^{'}(-1)}$$. Which means an equivalent closed form would be $\frac{-\gamma}{2}+\ln(2)+6\zeta^{'}(-1)+\frac{2}{3}$ Thanks all. - 1 Answer We have $$\begin{eqnarray*} &&\sum_{n=2}^\infty\left( -n^3\log(1-\frac{1}{n^2})-n-\frac{1}{2n}\right)\\ &=&\lim_{N\to\infty}\sum_{n=2}^N\left( -n^3\log(1-\frac{1}{n^2})-n-\frac{1}{2n}\right)\\ &=&\lim_{N\to\infty}\left(\sum_{n=2}^N -n^3\log(1-\frac{1}{n^2})-\left(\frac{N^2+N-2}{2}\right)-\left(\frac{\log(N)}{2}-\frac{1}{2}+\frac{\gamma}{2}+O\left(\frac{1}{N}\right)\right)\right)\\ &=&\lim_{N\to\infty}[\sum_{n=2}^N (2n^3\log(n)-n^3\log(n+1)-n^3\log(n-1))-\left(\frac{N^2+N-2}{2}\right)-\left(\frac{\log(N)}{2}-\frac{1}{2}+\frac{\gamma}{2}\right)] \end{eqnarray*}$$ In the sum on the last line, we may gather together the coefficients of each logarithm (terms at the boundary of the sum are a little funny), giving $$\begin{eqnarray*} &&\lim_{N\to\infty}[\sum_{n=2}^N(-6n\log(n))+\log(2)+(N^3+3N^2+3N+1)\log(N)-N^3\log(N+1)-\left(\frac{N^2+N-2}{2}\right)-\left(\frac{\log(N)}{2}-\frac{1}{2}+\frac{\gamma}{2}\right)]\\ &=&\lim_{N\to\infty}[\sum_{n=2}^N(-6n\log(n))+\log(2)-N^3\log\left(1+\frac{1}{N}\right)+(3N^2+3N+1)\log(N)-\left(\frac{N^2+N-2}{2}\right)-\left(\frac{\log(N)}{2}-\frac{1}{2}+\frac{\gamma}{2}\right)]\\ &=&\lim_{N\to\infty}[\sum_{n=2}^N(-6n\log(n))+\log(2)-N^3\left(\frac{1}{N}-\frac{1}{2N^2}+\frac{1}{3N^3}+O\left(\frac{1}{N^4}\right)\right)+(3N^2+3N+1)\log(N)-\left(\frac{N^2+N-2}{2}\right)-\left(\frac{\log(N)}{2}-\frac{1}{2}+\frac{\gamma}{2}\right)]\\ &=&\lim_{N\to\infty}[\sum_{n=2}^N(-6n\log(n))+\left(3N^2+3N+\frac{1}{2}\right)\log(N)+\left(\frac{-3}{2}N^2+\frac{7}{6}-\frac{\gamma}{2}+\log(2)\right)]\\ &=&-6\log\left(\lim_{N\to\infty}\left(\frac{\prod_{n=1}^N n^n}{N^{N^2/2+N/2+1/12}e^{-N^2/4}}\right)\right)+\frac{7}{6}-\frac{\gamma}{2}+\log(2)\\ &=&-6\log(A)+\frac{7}{6}-\frac{\gamma}{2}+\log(2) \end{eqnarray*}$$ Here, I am taking 
$$A=\lim_{N\to\infty}\frac{\prod_{n=1}^N n^n}{N^{N^2/2+N/2+1/12}e^{-N^2/4}}$$ as the definition of the Glaisher-Kinkelin constant. - Wow, thanks pink elephants. No wonder I did not finish it:). Thanks a lot. – Cody Jul 24 '12 at 15:24 Pink elephants, may I ask another question?. I appreciate your fine response. I follow all of it except for how you got from $\lim_{N\to \infty}\left[\sum_{n=2}^{N}(-2n^{3}\log(n)+n^{3}\log(n+1)+n^{3}\log(n-1))\right]‌​$ to $\lim_{N\to \infty}\left[\sum_{n=2}^{N}(-6\log(n))+\log(2)+(N+1)^{3}\log(N)-N^{3}\log(N+1) \right]$. The part where you mention gathering the coefficients of the logs. Thanks. – Cody Jul 24 '12 at 18:32 @Cody: Suppose $2\leq j\leq N-1$. I want to see what the coefficient of $\log(j)$ in this sum is. We get a contribution of $-2j^3$ from the first term when $n=j$, a contribution of $(j-1)^3$ from the second term when $n=j-1$, and a contribution of $(j+1)^3$ from the third term when $n=j+1$. In total, then, the coefficient of $\log(j)$ is $-2j^3+(j-1)^3+(j+1)^3=6j$ (hmm...above, I said that the coefficient was $-6j$. Maybe this is a mistake?). The coefficients of $\log(2)$, $\log(N)$, and $\log(N+1)$ are a little different, as there are fewer values of $n$ in the summation which contribute. – Pink Elephants Jul 24 '12 at 19:01 @Cody I just noticed that I dropped that minus sign earlier. Fixing it now – Pink Elephants Jul 24 '12 at 19:02 Thanks a lot PE fore the fast response. I was playing around with it by writing out terms and I see what you mean. It was kind of rough to see (at least for me) at first. The whole thing is clever and cool. I see where the 6n and log(2) came from. It was the other two terms I did not see right off. Thanks a bunch. – Cody Jul 24 '12 at 19:19 show 3 more comments
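A quick numerical confirmation of the closed form discussed in this thread, using mpmath's built-in `euler` and `glaisher` constants (a sketch assuming mpmath is available):

```python
from mpmath import mp, mpf, zeta, euler, glaisher, log, nsum, inf

mp.dps = 30
lhs = nsum(lambda k: (zeta(2*k + 1) - 1) / (k + 2), [1, inf])
rhs = -euler/2 - 6*log(glaisher) + log(2) + mpf(7)/6
print(lhs)   # 0.0786...
print(rhs)   # agrees with the sum
```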
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.965674102306366, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/105495/what-can-an-algebraic-geometer-do-outside-academia/105500
## What can an algebraic geometer do outside academia? [closed] ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) This question is inspired by this and this. But it is not a duplicate, read on. Please don't close it: I choose to be anonymous just not to be identified. About to be on the job market, disenchanted with academia, bored by the teaching load in grad school, I have to make ends meet (by, say, finding a job somewhere). I've seen math people finding non-math jobs. But they seem to know some math that can be applied'', e.g. stochastic process or combinatorics. Trained as an algebraic geometer, I don't have a strong background on those stuff or anything that helps job hunting ---- knowing things about the Weil conjecture doesn't seem to be a plus. I don't think I will enjoy a teaching job in college much either. I would guess I'm not the only one in this situation. Question: Can anyone here give some career suggestions? (P.S. I'm in US now but not a US citizen, so National Security Agency or similar jobs appear in answers to similar posts are really not in my list.) - 25 Anything a Banach algebraist can? By which I mean, it may be best to try and make use of the experience you have gained as a graduate student rather than the content you have learned over that time. – Yemon Choi Aug 26 at 5:51 5 Algebraic geometry does have applications to robotics, error-correcting codes and various other areas. Do any of these interest you? Not that this is a guarantee of work by any means, but it might suggest a direction in which to focus. On the other hand, beware of focusing too narrowly. – J W Aug 26 at 7:06 5 An algebraic geometer would, I assume, be someone who is smart, can analyze problems, and articulate their solution in either verbal or written form, or, even better, both. Such a person should be able to make their way in the world without too much difficulty. – Geoff Robinson Aug 26 at 16:49 5 Frankly, I was surprised this question stayed open as long as it did (as I am about a number of recent questions). Nothing to do with algebraic geometry, which is one of the most popular topics at this site. But like many other questions where an anonymous somebody is seeking advice from people who don't know him/her, it's really off-topic (and there have been many precedents for that judgment). But seeing as this is probably going to be reopened, it seems wise to bring the discussion to meta (please vote this up so people can see). – Todd Trimble Aug 26 at 20:46 28 Sorry, meta: meta.mathoverflow.net/discussion/1434/… Please vote up so people can see. – Todd Trimble Aug 26 at 21:09 show 10 more comments ## 10 Answers Although not an algebraic geometer, I recently went through a similar experience (I actually posted one of the questions to which you linked.) One of the reasons I became disenchanted with academia was that I struggled to learn algebraic geometry for many years but failed. So I hope you appreciate that this advice is coming from someone who is not as smart as you! After returning to my own country following an unsatisfactory year teaching at a liberal arts college, I did some part-time teaching at my old university while I researched possible careers. Attracted by the idea of earning a lot of money, my first thought was finance and I read several books about the stock market and the history of money. By the way, it's very hard to find textbooks about finance which don't "take sides". 
The very best book I found, coming from zero knowledge of economics, was "Introduction to Money" by Honor Croome. We don't have quants in this country, but the career which appealed to me most was actuary, and I contacted some actuaries via the University Careers Service. The main actuarial recruiter in my city was not interested in me, saying that I was "over-qualified" (an expression you will probably soon be hearing a lot) and would find the job boring. Most of the actuaries I talked to were very friendly but were baffled that someone who had a PhD in mathematics would want to start a new career from the beginning. Meanwhile, I realised that any job involving maths was going to require some knowledge of programming, so I began to learn Python. This was quite fun, although I did not get very far before switching to Javascript, because I wanted to be able to share my code more easily. An excellent resource is a free book called "Eloquent Javascript". I really recommend this if you have no programming experience. I didn't get very deeply into programming, but learned enough to do simple calculations and talk about things like "APIs", "dynamic typing" and other jargon. I attempted to get a full-time teaching job at my old university but failed. However, everyone was very impressed by my lecture. I decided that I wanted to make a positive contribution to the world and so began to consider mathematical ecology, especially fisheries. I talked to some people and they recommended that I learn Bayesian Statistics, so I got a book and started to work on this. I rapidly became starstruck by the beauty and power of the Bayesian approach. It was also a source of helpful programming exercises (Gibbs samplers etc.) and motivated me to learn R, which is the industry standard among academically-oriented statisticians. Around this time, I failed to get a job in fisheries. It turns out that people don't really care about whether you are capable of doing the job; they want you to "demonstrate an interest" in it. It's a bit like how, when applying for a liberal arts position from a research university, you have to harp on about how committed you are to the liberal arts philosophy. Otherwise your application goes directly into the trash. I was at a loose end and a new semester was starting, so I sat in on two courses, one on Bayesian stats and the other in data mining. The data mining was helpful for learning R, because it's a very scatty language and there are all kinds of little tricks you need to know. The Bayesian stats was an opportunity to work through more Bayesian stats, with exercises. Around this time, I contacted an ecologist in the statistics department who happened to have a problem to work on, so I started working on this. After a couple of months I was able to amke progress on it and I'm hoping we will eventually get a publication out of it, which will certainly boost my credibility with the ecology crowd. Anyway, at this point (about a year after my search started) I suddenly got a job in the tax department. The reason why I got this was because my boss is a mathematician. It turns out that there are lots of mathematicians in industry who were once in the same position as I was, and you now are. They like to hire mathematicians, just like how people who have emigrated to a different country like to hire people from the same country as them. Another mathematician was hired at the same time as me. 
I don't view the new job as a permanent thing that will go on forever, but I really like my colleagues. I do have to deal with meetings and people blathering about "going forward", "taking the first cab off the rank", "passing the ball" and all manner of similar phrases. On the other hand, I can bask in the feeling that I am helping to stick it to those nasty finance people who care about nothing but money ... Ultimately I am hoping to return to academia (as a statistician) or become a statistical consultant if I can, and I want to move to Canada one day. I guess the most useful pieces of advice for someone in this position are probably the following: 1. The only way to get a job is via personal connections. 2. Appearances are very important. and (more an obervation than a piece of advice) 1. Every real-world problem is ultimately about maximising some hideously complicated function. I am sorry if you didn't find this too helpful, but I thought it might be useful to hear from someone who has very recently been in a similar position. It took me a year to find a job, so don't lose heart! - 1 Wait, you wanted to used your power for good and you went to the tax department? That sounds ironic... (Oh, I forgot you are not in U.S.) – temp Aug 27 at 6:58 BTW, "use my power for good" seems to be a much better title then this post, which is kind of like "algebraic geometer is useless outside academia". – temp Aug 27 at 7:00 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I think first you have to answer the following question: Do you wish to continue doing some kind of algebraic geometry or are you willing to adapt? If you are willing to adapt, then several choices open up: 1. Search out which jobs might interest you (ok, this question on MO is a start) 2. Learn at least some of the skills needed to get those jobs (e.g., an additional degree or qualification) 3. Become a consultant --- easier said than done! If you are unwilling to adapt, or wish to only partially adapt, then: 1. Maybe do some algebraic statistics --- you'll profitably use your AG skills there 2. Some research lab might be interested 3. Do a postdoc in a related yet different enough field to give you wider perspective 4. Take a month off to discover yourself, and figure out what is it that you really want? The benefit of asking, and perchance answering this last question will be greater clarity, and confidence. Disillusionment might evoke a cynical response, but really, in the long-run, you should answer this question to yourself. I wish you luck and above all clarity - I voted to close this question earlier, but on the principle "if you can't beat 'em, join 'em", I'll say this. I know of any number of former academic mathematicians who decided to leave academe to pursue a career as an actuary. They come from all over: algebraic geometry, number theory, logic, quantum field theory, you name it. The bit about having to have some iron in applied mathematics here just doesn't hold true: what counts to prospective employers is being able to think clearly and precisely and have good mathematical sense. Being able to pass actuarial exams (the first few of which are math-y but doable with little sweat by an ex-mathematician) goes a long way towards establishing your cred. 
The pay tends to be pretty good, and actuarial work tends to be stable, certainly compared to what struggling academics put up with. (I am not an actuary myself but I am married to one.) A lot of mathematicians I know are very happy with this career choice and have never looked back. Perhaps the real point is that this is merely an example. I support what Angelo said: "a PhD in mathematics is highly regarded in the 'real world', even when it is in pure mathematics." Your prospects are probably very bright.

- First, I wish you good luck, and second, I think this is an appropriate question for the forum. About 3 years ago I transferred from academia to industry, so I can partly understand your difficulties. Algebraic geometry is of some use in industry. On the other hand, I am not sure that it is easy (or even possible) to find such jobs.

Example 1. Cryptography. Some private companies also research it, e.g. for the purpose of cloud computing (a popular topic now). Modern schemes to secure data involve the Weil pairing.

Example 2. Error-correcting codes. There are Goppa codes built on algebraic-geometry ideas. There is also current research on how to use algebraic number theory in space-time codes (Perfect Space Time Block Codes).

Example 3. There are applications to robotics. They are described in the book Ideals, Varieties, and Algorithms by David A. Cox, John B. Little, and Don O'Shea.

Well, I think the situation with research jobs in the US is better than in Russia, so if I was able to find one in Russia, you have even more chances there :) As another piece of advice, let me tell a bit of my story: I applied to a vacancy and was sure that they would not even interview me, because there was a big list of requirements and I knew nothing about them, except for a small item at the end about matrix calculations, which was also not my strong point; nevertheless, to my great surprise, I was hired. Actually I was lucky, since in Russia it is difficult to find the experts they wanted, so they weakened the requirements. The moral of this story: just apply everywhere, no matter whether it seems closely related to your experience or not.

PS Maybe it would be good to allow math job advertisements on MO? On MSE there are some kinds of banners at the right side of the screen; sometimes they inform about the existence of arXiv. Maybe it would be good for the community? People could learn what kinds of jobs are of use in industry... My feeling is that there are not so many bridges between math and the real world, and it would be great to build as many as possible... Let me ask this in meta: http://meta.mathoverflow.net/discussion/1433/math-job-advertisements-on-mo-/

- 1 @Alexander: there are already other websites that focus on advertising jobs, such as www.mathjobs.org. – Patricia Hersh Aug 26 at 19:25
2 @Patricia Hersh Well, you are right; on the other hand, why not have one more? Competition makes everything better. Also, I currently visit MO almost every day, and the last time I visited the AMS site was about 5 years ago... Why not combine fun (MO) with use (jobs)? – Alexander Chervov Aug 26 at 19:31
@Alexander: I'm actually in favor of allowing a little bit of career advice on MO, provided it is of general interest, somewhat math specific, not likely to lead to arguments, and on a topic where the community could say something intelligible without specific knowledge of the OP. People who don't want to participate in such things can stay out of such questions. But that said, to me it seems like there are better places for job ads.
– Patricia Hersh Aug 26 at 19:38
@Patricia The existence of the question mathoverflow.net/questions/23525/… might be considered a sign that something can be improved; there is demand for this :) I mean, if you have a permanent job, are you often on job-hunting sites? But when someone teaches students, shouldn't he/she be somewhat responsible for trying to adapt the course to something which might be useful for the students? How can one know this if he/she does not visit job sites? – Alexander Chervov Aug 26 at 19:51
Example: when people teach alg. geom., do they include applications to cryptography or error-correcting codes or anything else? Classical textbooks - Hartshorne, Griffiths-Harris, Shafarevich - do not do this... – Alexander Chervov Aug 26 at 19:53

People not originally from the US (or at least not from an English speaking country) tend to have a little more trouble with this question, because they have more difficulty understanding the following: Traditionally in the United States, higher education is not specific training for a career. Rather, it is training to be a thoughtful contributing citizen. Skills necessary for being a thoughtful contributing citizen tend to be useful in all kinds of intellectual and non-intellectual work. Companies looking for intelligent employees therefore look for intelligent, hard-working people who have done well at studying something, not necessarily anything related to what they do. As a graduate of a liberal arts college, I saw music majors and English majors get all kinds of jobs. What were the most common?

1) Sales. Plenty of companies need people to sell something. One might sell large software packages to businesses. One might sell re-insurance to insurance companies. One might sell medical devices to doctors. Since salespeople generally are mostly paid on commission, hiring a salesperson is low risk. Traditionally, salespeople know almost nothing about what they are selling; they are just good at convincing people to buy whatever they are selling.

2) Technical sales support. Someone has to understand what they are selling. Except technical sales support people are not actually experts in what is being sold. They've just had a little more training so they can answer the 98% of questions that aren't actually that technical.

3) Software. If you've learned how to program, the vast majority of jobs in the software industry don't require that much software engineering knowledge, and you've probably learned enough. We are not talking about making a database or operating system run faster; we're talking about changing the interface of MathOverflow to move the answer box 5 pixels to the right, or just making sure this webpage is displaying the answers to this question and not some other random question.

4) Consulting. There is both software consulting (see software) and management consulting. This involves being in a group that is hired to look at a company's operations, gather qualitative and quantitative data, and make recommendations. The whole point is that you are mostly not experts and therefore (at least supposedly) have a fresh view on what they are doing and can recommend the obvious stupid things they are missing.

5) Investment banking. See Sales, except you're selling financial products to companies. Except that in this industry, sales is frequently done in teams, so your role might be closer to that of a consultant (assembling data to support the sale) than an actual salesperson.
- I believe that mathematics training (of any type) is excellent preparation for work as a software engineer and, incidentally, for solving typical software engineering interview problems. If you have a little proficiency in computer programming, or could teach yourself a little, you might land an engineering internship position. An internship working closely with one or two highly experienced engineers would be ideal. In any event, the habits of thinking you've practiced as a mathematician would greatly accelerate your development in this field.

- 1 Will you please mention the software engineering problems which a mathematician (who is skilled in algebraic geometry) may attack? – Aurora Aug 26 at 9:12
1 I think companies that make math software like Wolfram, MathWorks, Waterloo Maple etc. may be interested in such a mathematician.... – S. Sra Aug 26 at 10:06

Should you be interested in applying algebraic geometry, the IMA Thematic Year on Applications of Algebraic Geometry may give you some inspiration, as can searching for "algebraic geometry applications" or similar. You may also wish to take a look at the Society for Industrial and Applied Mathematics's Activity Group on Algebraic Geometry (SIAM Algebraic Geometry group) or visit their wiki (SIAM Algebraic Geometry wiki). Of course, you may need to take courses or do self-study in certain application areas, if they appeal to you. You may also need to develop skills in algorithms or computational techniques, assuming your background does not already include these. All advice is perilous, but I suspect that you could increase your employment options if you know at least something of programming, statistics and optimization techniques. Perhaps there are opportunities to work for government institutions in areas such as planning or infrastructure, for instance. You may find areas such as complex networks or machine learning quite accessible with your advanced background and ability to cope with abstraction. Whatever you decide to do, the very best of luck!

- The "Software Perception Engineer" position could be appropriate depending on your background: http://bostondynamics.com/bd_jobs.html

- 1 Steven M. LaValle's "Planning Algorithms" could be worth taking a look at if you're interested in motion planning. It's kindly made available online by the author at planning.cs.uiuc.edu. Some of the material in chaps 3, 4 and 6 touches on (real) algebraic geometry. There's also Jon M. Selig's "Geometric Fundamentals of Robotics," which draws on topics such as Lie groups, algebraic geometry and differential geometry. – J W Aug 26 at 14:53
2 From the ad: "should have a strong software engineering background ....Must have: Experience with perception or motion planning on real-world robots." The job sounds fascinating, but I don't think the OP is who they're looking for. – mt Aug 26 at 17:10
@mt: many mathematicians have a far more diverse background than a vanilla Ph.D. on some sanitised, isolated subject that civilians have never heard of. And a lot of companies overlook imperfections in applications if the applicant is well-rounded enough in the general area they're looking for. – Ryan Budney Aug 26 at 18:43
3 He is an algebraic geometer. What makes you think the manager is going to pay money to bring him on board for an interview? – unknown (google) Aug 28 at 3:57

I will be honest with you here since I have been in your situation. You have mentioned you do not have US citizenship. Do you have a Green Card?
If not, it is pretty much hopeless unless you have some extraordinary support (maybe an outstanding result in Algebraic Geometry, being the best of the best at a top school, lucking out getting a faculty job, or something). My advice would be that you should pursue a Master's in Computer Science with a research assistantship under a faculty member who can help you gain software skills (gone are the days when, as a mathematician, you would be hired into a software job... these days technology hiring is very, very skill-specific, and companies raise eyebrows when they hire a foreigner... even though you could be smart, there are others out there who are nimble and speak the correct lingo). Get a job. You can possibly get a Green Card in 1 to 2 years if you are not from China or India (else say 5-6 years if lucky). Then you can pursue your dreams in Algebraic Geometry if you want to. Or else you have to try your luck outside the country. I have seen many broken dreams, wasted youth, people separated, and people from Mathematics or even engineering ending up outside the US.

- 3 While in retrospect it is obvious, it took me some searching to find out that GC should mean Green Card. – quid Aug 26 at 16:57
3 I found this answer a little difficult to read, so I've expanded the abbreviations as far as I can guess, except for "sw." I think the latter stands for "software," but I could be mistaken. – J W Aug 26 at 17:09

Your situation with algebraic geometry is probably more similar to that of humanities graduate students than to that of students of applied math or engineering.

" [...] Contrast this situation with that of academia. Professors of Literature or History or Cultural Studies in their professional life find themselves communicating principally with other professors of Literature or History or Cultural Studies. They also, of course, communicate with students, but students don't really count. Graduate students are studying to be professors themselves and so are already part of the in-crowd. Undergraduate students rarely get a chance to close the feedback loop, especially at the so called "better schools" (I once spoke with a Harvard professor who told me that it is quite easy to get a Harvard undergraduate degree without ever once encountering a tenured member of the faculty inside a classroom; I don't know if this is actually true but it's a delightful piece of slander regardless). They publish in peer reviewed journals, which are not only edited by their peers but published for and mainly read by their peers (if they are read at all). Decisions about their career advancement, tenure, promotion, and so on are made by committees of their fellows. They are supervised by deans and other academic officials who themselves used to be professors of Literature or History or Cultural Studies. They rarely have any reason to talk to anybody but themselves -- occasionally a Professor of Literature will collaborate with a Professor of History, but in academic circles this sort of interdisciplinary work is still considered sufficiently daring and risqué as to be newsworthy.

What you have is rather like birds on the Galapagos islands -- an isolated population with unique selective pressures resulting in evolutionary divergence from the mainland population. There's no reason you should be able to understand what these academics are saying because, for several generations, comprehensibility to outsiders has not been one of the selective criteria to which they've been subjected.
What's more, it's not particularly important that they even be terribly comprehensible to each other, since the quality of academic work, particularly in the humanities, is judged primarily on the basis of politics and cleverness. "

- 4 Copied from info.ucl.ac.be/~pvr/decon.html ? – Harald Hanche-Olsen Aug 26 at 8:26
5 Even if true, is this relevant or helpful? – Felipe Voloch Aug 26 at 12:44
13 Of course it is neither relevant nor helpful. What's more, I don't even believe it is true; in my experience, a PhD in mathematics is highly regarded in the "real world", even when it is in pure mathematics. – Angelo Aug 26 at 17:23
http://mathhelpforum.com/advanced-algebra/139699-basis-perpendicular-subspace-spanned-2-vectors.html
# Thread:

1. ## Basis perpendicular to a subspace spanned by 2 vectors

Find a basis perpendicular to W. Let $V = \begin{bmatrix}1\\-4\\1\\4\end{bmatrix}$ and $U = \begin{bmatrix}3\\-13\\3\\-4\end{bmatrix}$ and let W be the subspace spanned by V and U. I am pretty confused on this topic right now; I just don't seem to know what I need to do to solve problems like these. Any help would be great. Thank you

2. Originally Posted by mybrohshi5: Find a basis perpendicular to W. Let $V = \begin{bmatrix}1\\-4\\1\\4\end{bmatrix}$ and $U = \begin{bmatrix}3\\-13\\3\\-4\end{bmatrix}$ and let W be the subspace spanned by V and U. I am pretty confused on this topic right now; I just don't seem to know what I need to do to solve problems like these. Any help would be great. Thank you — Do you want an orthogonal basis or an orthonormal basis?

3. The question doesn't specify. It just says: let W be the subspace of ${\mathbb R}^4$ spanned by v and u. Find a basis of $W^{\perp}$. I should probably know how to find both though, I'm sure.

4. If you can tell me what the answer is, I can tell you which one it is looking for.

5. I'm not sure what the answer is. It's for online homework. There are two blank 4x1 matrices where I can put my answer in, so it looks like this: $\begin{bmatrix}.\\.\\.\\.\end{bmatrix}, \begin{bmatrix}.\\.\\.\\.\end{bmatrix}$

6. Both would be of that form. Use the Gram-Schmidt process. Do you know it?

7. Originally Posted by dwsmith: Both would be of that form. Use the Gram-Schmidt process. Do you know it? — Yes I do. Is that all I have to do? Seems easy enough. And the Gram-Schmidt process will give me an orthogonal basis, correct? Then from there I could find an orthonormal basis if I needed to, right?

8. You can normalize it if you would like.

9. Using the Gram-Schmidt process is not getting me the right answer. I even normalized the orthogonal basis I got, and those are wrong as well. Using the process I got $\begin{bmatrix}1\\-4\\1\\4\end{bmatrix}$ and $\begin{bmatrix}\frac{30}{17}\\-\frac{137}{17}\\\frac{30}{17}\\-\frac{152}{17}\end{bmatrix}$. I plugged those in and they were wrong, so I normalized them to get the orthonormal vectors $\frac{1}{\sqrt{34}}\begin{bmatrix}1\\-4\\1\\4\end{bmatrix}$ and $\frac{1}{\sqrt{2569/17}}\begin{bmatrix}\frac{30}{17}\\-\frac{137}{17}\\\frac{30}{17}\\-\frac{152}{17}\end{bmatrix}$. Can you see where I am going wrong?

10. I just checked my dot product and it wasn't zero, but your dot product is. What section in your book does this question relate to?

11. This is for online homework and I don't have the book, because I am fairly good at math and the professor said we didn't need it unless we needed an extra source of help (mine is this forum, haha). If I look at the professor's PowerPoints, I would think it relates to the section called Orthogonal Vectors in R^n, but I am not positive....

12. I found this on Yahoo Answers and it looks similar to my question, but I am a little unsure what is done to get the answer. Maybe you will know what is going on: Find a basis of the subspace of R4 that consists of all vectors perpendicular to both:? - Yahoo! Answers

13. I got these: (-68, -16, 0, 1), (-1, 0, 1, 0).

14. Originally Posted by zzzoak: I got these: (-68, -16, 0, 1), (-1, 0, 1, 0). — What did you do to find those?

15. When I followed that example, I obtained $x_3\begin{bmatrix}-1\\0\\1\\0\end{bmatrix}+x_4\begin{bmatrix}-68\\-16\\0\\1\end{bmatrix}$
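Not part of the original thread, but as a quick sanity check on the answer in posts 13 and 15: $W^{\perp}$ is exactly the null space of the matrix whose rows are V and U, so it can be computed directly. A minimal sketch in Python, assuming SymPy is available:

```python
from sympy import Matrix

# Rows are the spanning vectors V and U of W; W-perp is the null space of A.
A = Matrix([[1,  -4, 1,  4],
            [3, -13, 3, -4]])

basis = A.nullspace()  # column vectors spanning the orthogonal complement of W
for v in basis:
    print(v.T)
# Expected (up to scaling): (-1, 0, 1, 0) and (-68, -16, 0, 1),
# matching posts 13 and 15; each has zero dot product with V and U.
```

This also makes clear why Gram-Schmidt applied to V and U alone (posts 6-9) could not work here: it produces an orthogonal basis of W itself, not of $W^{\perp}$.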
http://math.stackexchange.com/questions/5616/geometric-progression?answertab=votes
# Geometric Progression

If S1, S2, and S are the sums of n terms, 2n terms, and infinitely many terms of a G.P., then find the value of S1(S1-S). PS: Nothing is given about the common ratio.

- 1 Of course not; this again comes from my test paper without any kind of explanation, except the answer. – Quixotic Sep 28 '10 at 5:43
1 @Deb: You should state the source of the problem in the post. People are resistant to homework-like questions, as one is supposed to do their own homework. – KennyTM Sep 28 '10 at 6:33
1 @Debanjan: I mean you can just copy your first comment into the post in your next question (if any). – KennyTM Sep 28 '10 at 8:24
1 @Debanjan: I suppose you have been asked if it is homework for earlier questions (hence your usage of the word 'again'). Why don't you just mention that that is the case (from a test) and avoid getting questions like these ('is it homework')? In any case, why don't you also show some working? Test questions are like homework, in a way. – Aryabhata Sep 28 '10 at 16:24
1 – Quixotic Sep 29 '10 at 4:58

## 2 Answers

I change your notation from S1, S2 and S to $S_{n}$, $S_{2n}$ and $S$. The sum of $n$ terms of a geometric progression $u_{1},u_{2},\ldots ,u_{n}$ of ratio $r$ is given by $S_{n}=u_{1}\times \dfrac{1-r^{n}}{1-r}\qquad (1).$ Therefore the sum of $2n$ terms of the same progression is $S_{2n}=u_{1}\times \dfrac{1-r^{2n}}{1-r}\qquad (2).$ Assuming that the sum $S$ exists, it is given by $S=\lim S_{n}=u_{1}\times \dfrac{1}{1-r}\qquad (3).$ Since the "answer is S(S1-S2)", we have to prove this identity: $S_{n}(S_{n}-S)=S(S_{n}-S_{2n})\qquad (4).$ Plugging $(1)$, $(2)$ and $(3)$ into $(4)$, we have to prove the following equivalent algebraic identity: $u_{1}\times \dfrac{1-r^{n}}{1-r}\left( u_{1}\times \dfrac{1-r^{n}}{1-r}-u_{1}\times \dfrac{1}{1-r}\right) =u_{1}\times \dfrac{1}{1-r}\left( u_{1}\times \dfrac{1-r^{n}}{1-r}-u_{1}\times \dfrac{1-r^{2n}}{1-r}\right) \qquad (5),$ which, after cancelling the factor $u_{1}^{2}$ and the denominator $(1-r)^{2}$, becomes $\left( 1-r^{n}\right)\left( \left( 1-r^{n}\right) -1\right) =\left( 1-r^{n}\right) -\left( 1-r^{2n}\right) \qquad (6).$ This is equivalent to $\left( 1-r^{n}\right) \left( -r^{n}\right) =-r^{n}+r^{2n}\iff 0=0\qquad (7).$ Given that $(7)$ is true, $(5)$ and $(4)$ are also true.

- That's what I have done, but let me ask you: why are you assuming that $r$ is less than 1? This satisfies the relation, of course. But in real time I can only spare a minute or so on this problem, so I guess the problem is not well defined?! – Quixotic Sep 29 '10 at 4:52
– Américo Tavares Sep 29 '10 at 7:53

HINT $\quad$ In $(1-X)\,(1-(1-X)) = 1-X^2-(1-X)$ put $X = x^n$, then multiply both sides by $1/(1-x)^2 = S/(1-x)$. More generally one has $(1-x^a)\,(1-x^b) = (1-x^a) + (1-x^b) - (1-x^{a+b})$

$\quad\Rightarrow\quad S_a\, S_b = S\,(S_a + S_b - S_{a+b}),\quad S_n = \displaystyle\frac{1-x^n}{1-x},\quad S = S_\infty = \frac{1}{1-x}$

This generalizes to arbitrary products $S_{a}\, S_b\, S_c\cdots S_k$ using the Inclusion-exclusion principle.

-
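Not part of the original thread, but a quick numerical sanity check of identity (4) may be reassuring. This is a minimal Python sketch with arbitrary sample values for $u_1$, $r$ and $n$ (taking $|r|<1$ so that $S$ exists):

```python
# Check S_n * (S_n - S) == S * (S_n - S_2n) for a sample geometric progression.
u1, r, n = 2.0, 0.6, 5  # arbitrary choices; |r| < 1 so the infinite sum S exists

def partial_sum(k):
    """Sum of the first k terms of the GP u1, u1*r, u1*r**2, ..."""
    return u1 * (1 - r**k) / (1 - r)

S_n = partial_sum(n)
S_2n = partial_sum(2 * n)
S = u1 / (1 - r)  # sum to infinity

lhs = S_n * (S_n - S)
rhs = S * (S_n - S_2n)
print(lhs, rhs)                 # the two values should agree
assert abs(lhs - rhs) < 1e-9    # up to floating-point error
```

If $|r|\ge 1$ the infinite sum $S$ does not exist at all, which seems to be what the follow-up comment about the common ratio is getting at.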