http://gilkalai.wordpress.com/2008/12/23/seven-problems-around-tverbergs-theorem/ | Gil Kalai’s blog
Seven Problems Around Tverberg’s Theorem
Posted on December 23, 2008
Imre Barany, Rade Zivaljevic, Helge Tverberg, and Sinisa Vrecica
Recall the beautiful theorem of Tverberg: (We devoted two posts (I, II) to its background and proof.)
Tverberg Theorem (1965): Let $x_1,x_2,\dots, x_m$ be points in $R^d$, $m \ge (r-1)(d+1)+1$. Then there is a partition $S_1,S_2,\dots, S_r$ of $\{1,2,\dots,m\}$ such that $\bigcap_{j=1}^r \operatorname{conv}(x_i: i \in S_j) \ne \emptyset$.
The (much easier) case $r=2$ of Tverberg’s theorem is Radon’s theorem.
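Radon's theorem is constructive: any $d+2$ points in $R^d$ admit an affine dependence, and splitting its coefficients by sign yields two parts together with a point common to both convex hulls. A minimal sketch of that construction (illustrative code, not from the post, using exact rational arithmetic):

```python
from fractions import Fraction as F

def radon_partition(points):
    """Split d+2 points in R^d into two index sets whose convex hulls
    intersect, returning (part1, part2, common_point)."""
    pts = [[F(c) for c in p] for p in points]
    n, d = len(pts), len(pts[0])
    assert n == d + 2
    # Affine dependence A c = 0: d coordinate rows plus a row of ones.
    A = [[pts[j][i] for j in range(n)] for i in range(d)] + [[F(1)] * n]
    pivots, row = [], 0
    for col in range(n):                      # Gauss-Jordan elimination
        piv = next((r for r in range(row, d + 1) if A[r][col] != 0), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        A[row] = [v / A[row][col] for v in A[row]]
        for r in range(d + 1):
            if r != row and A[r][col] != 0:
                A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[row])]
        pivots.append(col)
        row += 1
    # One free column gives a nontrivial null vector c with sum(c) = 0.
    free = next(c for c in range(n) if c not in pivots)
    c = [F(0)] * n
    c[free] = F(1)
    for r, col in enumerate(pivots):
        c[col] = -A[r][free]
    part1 = [i for i in range(n) if c[i] > 0]
    part2 = [i for i in range(n) if c[i] <= 0]
    cp = sum(c[i] for i in part1)
    # Convex combination of the positive part equals one of the negative part.
    common = [sum(c[i] / cp * pts[i][k] for i in part1) for k in range(d)]
    return part1, part2, common

# (1,1) lies inside the triangle on the other three points, so it forms
# a singleton part and is itself the common point:
part1, part2, pt = radon_partition([(0, 0), (4, 0), (0, 4), (1, 1)])
```

The same routine works in any dimension, since it only solves the (d+1)-by-(d+2) affine-dependence system.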
1. Eckhoff’s Partition Conjecture
Eckhoff raised the possibility of finding a purely combinatorial proof of Tverberg’s theorem based on Radon’s theorem. He considered replacing the operation “taking the convex hull of a set $A$” by an arbitrary closure operation.
Let $X$ be a set endowed with an abstract closure operation $X \to cl(X)$. The only requirements of the closure operation are:
(1) $cl(cl (X))=cl(X)$ and
(2) $A \subset B$ implies $cl(A) \subset cl (B)$.
Define $t_r(X)$ to be the largest size of a (multi)set in $X$ which cannot be partitioned into $r$ parts whose closures have a point in common.
Eckhoff’s Partition Conjecture: For every closure operation $t_r \le t_2 \cdot (r-1).$
If $X$ is the set of subsets of $R^d$ and $cl(A)$ is the convex hull operation then Radon’s theorem asserts that $t_2(X)=d+1$ and Eckhoff’s partition conjecture would imply Tverberg’s theorem. Update (December 2010): Eckhoff’s partition conjecture was refuted by Boris Bukh. Here is the paper.
2. The dimension of Tverberg’s points
For a set $A$, denote by $T_r(A)$ the set of points in $R^d$ which belong simultaneously to the convex hulls of $r$ pairwise disjoint subsets of $A$. We call these points Tverberg points of order $r$.
Conjecture (Kalai, 1974): For every $A \subset R^d$ , $\sum_{r=1}^{|A|} {\rm dim} T_r(A) \ge 0$.
Note that $\dim \emptyset = -1$.
This conjecture includes Tverberg’s theorem as a special case: if $|A|=(r-1)(d+1)+1$, $dim A =d$, and $T_r (A)=\emptyset$, then the sum in question is at most $(r-1)d + (|A|-r+1)(-1) = -1$.
Akiva Kadari proved this conjecture (around 1980, unpublished) for planar configurations.
Akiva Kadari and Ziva Deutsch (both are my academic brothers).
3. The number of Tverberg’s partitions
Sierksma Conjecture: The number of Tverberg’s $r$-partitions of a set of $(r-1)(d+1)+1$ points in $R^d$ is at least $((r-1)!)^d$.
Gerard Sierksma
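For intuition, in the line case $d=1$ the convex hulls are intervals, so Tverberg partitions can be counted by brute force. A small sketch (the function and the sample point sets are illustrative, not from the post):

```python
from itertools import product

def tverberg_partitions(points, r):
    """Count unordered partitions of 1-D points into r nonempty parts
    whose convex hulls (intervals) have a common point."""
    n = len(points)
    count = 0
    seen = set()
    for labels in product(range(r), repeat=n):
        if len(set(labels)) != r:        # every part must be nonempty
            continue
        # canonical key so relabelings of the same partition count once
        key = frozenset(frozenset(i for i in range(n) if labels[i] == g)
                        for g in range(r))
        if key in seen:
            continue
        seen.add(key)
        parts = [[points[i] for i in range(n) if labels[i] == g]
                 for g in range(r)]
        # intervals [min, max] intersect iff max of mins <= min of maxs
        if max(min(p) for p in parts) <= min(max(p) for p in parts):
            count += 1
    return count
```

For $d=1$, $r=3$ and the $(r-1)(d+1)+1=5$ points $0,1,2,3,4$, the count is at least the Sierksma bound $((r-1)!)^d = 2$.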
4. The Topological Tverberg Conjecture
Let $f$ be a continuous function from the $m$-dimensional simplex $\sigma^m$ to $R ^d$. If $m \ge (d+1)(r-1)$ then there are $r$ pairwise disjoint faces of $\sigma^m$ whose images have a point in common.
If $f$ is a linear function this conjecture reduces to Tverberg’s theorem.
The case $r=2$ was proved by Bajmoczy and Barany using the Borsuk-Ulam theorem. In this case you can replace the simplex by any other polytope of the same dimension. (This can be asked also for the general case.)
The case where $r$ is a prime number was proved in a seminal paper of Barany, Shlosman and Szucs. The prime power case was proved by Ozaydin (unpublished), Volovikov, and Sarkaria. For the prime power case, the proofs are quite difficult and are based on computations of certain characteristic classes.
5. Reay’s Relaxed Tverberg Condition
Moriah Sigron (right) and other participants in a lecture by Endre Szemeredi. (See further comment below.)
Let $t(d,r,k)$ be the smallest integer such that given $m$ points $x_1,x_2,\dots, x_m$ in $R^d$, $m \ge t(d,r,k)$, there exists a partition $S_1,S_2,\dots, S_r$ of $\{1,2,\dots,m\}$ such that every $k$ among the convex hulls $conv (x_i: i \in S_j)$, $j=1,2,\dots,r$ have a point in common.
Reay’s “relaxed Tverberg conjecture” asserts that whenever $k >1$, $t(d,r,k)= (d+1)(r-1)+1$.
Micha A. Perles and Moriah Sigron have rather strong results in this direction, but at the same time Perles strongly believes that Reay’s conjecture is false, and he often mentions this special case:
Given 1,000,000 points in $R^{1000}$, Tverberg’s theorem asserts that you can partition them into 1,000 parts whose convex hulls have a point in common. Now given 999,999 points in $R^{1000}$ is it always possible to divide them to 1,000 parts such that the convex hulls of every two of them will have a point in common? It is hard to believe that the answer is negative.
6. Colorful Tverberg theorems
Zivaljevic and Vrecica’s colorful Tverberg’s theorem asserts the following: Let $C_1,\cdots,C_{d+1}$ be disjoint subsets of $R^d$, called colors, each of cardinality at least $t$. A $(d+1)$-subset $S$ of $\bigcup^{d+1}_{i=1}C_i$ is said to be multicolored if $S\cap C_i\not=\emptyset$ for $i=1,\cdots,d+1$. Let $r$ be an integer, and let $T(r,d)$ denote the smallest value $t$ such that for every collection of colors $C_1,\cdots,C_{d+1}$ of size at least $t$ there exist $r$ disjoint multicolored sets $S_1,\cdots,S_r$ such that $\bigcap^r_{i=1}{\rm conv}\,(S_i)\not=\emptyset$. Zivaljevic and Vrecica proved that $T(r,d)\leq 4r-1$ for all $r$, and $T(r,d)\leq 2r-1$ if $r$ is a prime.
This theorem is one of the highlights of discrete geometry and topological combinatorics. The only known proofs for this theorem rely on topological arguments.
The colorful Tverberg conjecture asserts that $T(r,d)= r$.
Update: This conjecture was proved by Blagojević, Matschke, and Ziegler.
Let me mention another direction of moving from “colorful results” to analogous “matroidal results.” A set whose elements are colored with $r$ colors gives rise to a matroid where the rank of a set is the number of colors of elements in the set. So it is natural to consider an arbitrary matroid structure on the ground set and replace “multicolor set” by “a basis in the matroid”. For example, Barany’s colorful Caratheodory theorem was extended by Meshulam and me to a matroidal theorem. (With Barany and Meshulam we have some preliminary results on matroidal Tverberg theorems.)
7. The computational complexity of Tverberg’s theorem
Problem: Is there a polynomial-time algorithm to find a Tverberg partition when Tverberg’s theorem applies?
A positive answer will follow from a positive answer to:
Problem: Is there a polynomial algorithm for Barany’s colorful Caratheodory theorem?
The picture above (taken by Ofer Arbeli during a lecture by Endre Szemeredi at Hebrew University’s Institute for Advanced Study) shows how encouraging young babies (and even younger) to attend lectures is instrumental in bringing Israeli mathematics and computer science to its leadership stature.
This entry was posted in Combinatorics, Convexity, Open problems and tagged Colorful Tverberg Theorem, topological Tverberg theorem, Tverberg's theorem.
9 Responses to Seven Problems Around Tverberg’s Theorem
1. D. Eppstein says:
There’s also the Tverberg version of halfspace depth, due to Rousseeuw and Hubert. As far as I remember it, it was: given (d+1)n points in R^d, there exists a hyperplane H, and a partition of the points into n subsets, such that H cannot be moved to a vertical position without passing through at least one of the points of each subset.
My co-authors and I had some partial results on this in arXiv:cs.CG/9809037 (DCG 2000) (showing that a partition into n subsets of this type is possible for sets of cn points for some c that is larger than d+1 but smaller than previously known) but as far as I know the full problem is still open.
2. Gil Kalai says:
Thanks David. Let me also mention again the Tverberg-Vrecica conjecture, offering a far-reaching common extension of Tverberg’s theorem and the ham sandwich theorem.
3. Gil Kalai says:
Regarding Conjecture 2, I wonder if configurations where the sum equals zero have some special role. Perhaps the following is true: Start with any configuration A such that $\sum \dim T_r(A) >0$; then there is a small perturbation A’ so that $\sum \dim T_r(A')=0$ and $\dim T_r(A') \le \dim T_r(A)$ for every $r$.
Maybe we can even choose A’ to have the property that for every partition of the set A, the dimension of the intersection of the convex hulls of the parts does not increase when we move from A to A’.
4. Kristal Cantwell says:
There is a problem which I have heard referred to as a Radon relative: given 8 points, the problem is to partition them into three sets forming two triangles and a line segment, such that the line segment intersects both triangles. As I recall, at the time I saw it I thought I could solve it. I don’t know its history, whether it is still open, whether there is a class of related problems, or who originated it.
5. Gil says:
Kristal, indeed it sounds like a Radon type problem. (There are many, and good references are two survey articles by J Eckhoff.) Where are the 8 points in the plane? in space?
6. Kristal Cantwell says:
I think one survey article by Eckhoff is in the handbook of convex geometry: Helly, Radon, and Caratheodory type theorems. Where is the other one? The points are in the plane.
7. Gil says:
Dear Kristal, here is the other: Eckhoff, Jürgen Radon’s theorem revisited. Contributions to geometry (Proc. Geom. Sympos., Siegen, 1978), pp. 164–185, Birkhäuser, Basel-Boston, Mass., 1979.
8. Gil Kalai says:
Two conjectures in the list have now been solved. The colorful Tverberg conjecture (for prime numbers of parts) was proved by Pavle Blagojević, Benjamin Matschke, and Günter Ziegler using topological methods. A counterexample to Eckhoff’s partition conjecture was found by Boris Bukh.
9. Gil Kalai says:
Let me mention that the following weaker form of the conjecture in question 2 is also open and interesting: For every $A \subset R^d$ , $\sum_{r=1}^{|A|} {\rm dim}~conv (T_r(A)) \ge 0$.
http://mathoverflow.net/questions/56936/smoothness-of-solution-for-second-order-elliptic-problem/57069 | ## smoothness of solution for second order elliptic problem
Hello all,
could someone point me to a reference that ties the smoothness of the solution $u$ to the classical elliptic problem
$\nabla \cdot ( q \nabla u ) = f \;,\; x \in \Omega$
$u = g \;,\; x \in \Gamma = \partial \Omega$
to the smoothness of $f$, $q$ and $g$?
$\Omega$ is a convex polygonal domain in $\mathbb{R}^d$ with $d \in \{2,3\}$. The boundary $\Gamma$ is piecewise linear (can have corners, e.g., if $\Omega$ is the unit square).
I am particularly interested in the (minimal) smoothness requirements for the forcing and boundary data $f$ and $g$, such that $u \in {\cal H}^2(\Omega)$ (not just locally).
I went through Evans' book on PDEs but he assumes homogeneous boundaries and proves only local smoothness $u \in {\cal H}_{\rm loc}^s(\Omega)$ based on assumptions on the forcing $f$. My $g$ is generally nonzero.
Also, would the smoothness theory for the BVP above extend to a Helmholtz problem with a pure Neumann BC?
Thanks for any good pointers!
Kind regards, -- Mihai
Is $\Gamma$ the boundary of $\Omega$? If so, what kind of regularity are you assuming for $\Gamma$? – Yakov Shlapentokh-Rothman Mar 1 2011 at 0:06
have you looked at Gilbarg and Trudinger? – Willie Wong Mar 1 2011 at 0:11
@Yakov Shlapentokh-Rothman: I updated the problem definition. @Willie Wong: thanks for the pointer, I will check it out today. – Mihai Mar 1 2011 at 17:26
For an example of things that can go wrong since $\Gamma$ is not smooth, see mathoverflow.net/questions/38054/… – Yakov Shlapentokh-Rothman Mar 2 2011 at 0:30
Gilbarg and Trudinger assume smooth boundaries as far as I see. I am going through Grisvard right now to see what I can figure out from there. – Mihai Mar 2 2011 at 17:33
## 2 Answers
Grisvard's book is a standard reference for elliptic problems in domains with corners.
See http://en.wikipedia.org/wiki/Elliptic_regularity and references therein.
http://mathhelpforum.com/pre-calculus/120344-inequality.html | # Thread:
1. ## inequality
Solve the inequality |x+4|+|x-1|>5 .
I know the strategy here is to get rid of the modulus by picking cases.
My question here is how to pick these cases?
2. Originally Posted by hooke
Solve the inequality |x+4|+|x-1|>5 .
I know the strategy here is to get rid of the modulus by picking cases.
My question here is how to pick these cases?
Use the definition of the absolute value:
$|x|=\begin{cases}x & \text{if } x \geq 0 \\ -x & \text{if } x<0 \end{cases}$
$|x+4|=\begin{cases}x+4 & \text{if } x \geq -4 \\ -(x+4) & \text{if } x<-4 \end{cases}$
$|x-1|=\begin{cases}x-1 & \text{if } x \geq 1 \\ -(x-1) & \text{if } x<1 \end{cases}$
You now have 3 intervals with different inequalities:
$\begin{array}{ll}x\geq 1 :& x+4+x-1>5 \\ -4\leq x<1 :& x+4+(-(x-1)) >5 \\ x<-4: &-(x+4)+(-(x-1))>5 \end{array}$
Solve for x.
Draw the graphs of
$f(x)=|x+4|+|x-1|$ and y = 5
Determine those points on the graph of f where the y-coordinate is greater than 5.
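As a sanity check on the case analysis, one can sweep the real line with exact rational arithmetic (a small illustrative script, not part of the original reply; `fractions` avoids floating-point ties at the boundary points):

```python
from fractions import Fraction as F

def holds(x):
    return abs(x + 4) + abs(x - 1) > 5

def predicted(x):
    # solution set from the three cases: x < -4 or x > 1
    return x < -4 or x > 1

# exact sweep over [-10, 10] in steps of 1/100
assert all(holds(x) == predicted(x) for x in (F(i, 100) for i in range(-1000, 1001)))
```

Geometrically, $|x+4|+|x-1|$ is the sum of the distances from $x$ to $-4$ and to $1$; it equals exactly $5$ on the interval $[-4,1]$, so the solution is everything outside that closed interval.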
3. Originally Posted by earboth
hey thank you for clarifying!
http://www.scholarpedia.org/article/Fuzzy_sets | # Fuzzy sets
From Scholarpedia
Milan Mares (2006), Scholarpedia, 1(10):2031.
Curator and Contributors
1.00 - Milan Mares
Figure 1: Bird's-eye view on a forest: Where is the boundary of the forest? Which location is in the forest and which is out of it? (See explanation in the text.)
Fuzzy set is a mathematical model of vague qualitative or quantitative data, frequently generated by means of the natural language. The model is based on the generalization of the classical concepts of set and its characteristic function.
## History
The concept of a fuzzy set was published in 1965 by Lotfi A. Zadeh (see also Zadeh 1965). Since that seminal publication, fuzzy set theory has been widely studied and extended. Its application to control theory became successful and revolutionary especially in the seventies and eighties; its applications to data analysis, artificial intelligence, and computational intelligence have been intensively developed since the nineties. The theory is also extended and generalized by means of the theories of triangular norms and conorms, and aggregation operators.
The expansion of the field of mathematical models of real phenomena was influenced by the vagueness of colloquial language. Attempts to use computing technology for processing such models have pointed to the fact that the traditional probabilistic treatment of uncertainty is not adequate to the properties of vagueness. While probability, roughly speaking, predicts the development of a well defined factor (e.g., which side of a coin appears, which harvest we can expect, etc.), fuzziness analyzes the uncertain classification of already existing and known factors, e.g., is a color "rather violet" or "almost blue"? "Is the patient's temperature a bit higher, or is it a fever?", etc. Models of this type proved to be essential for the solution of problems regarding technical (control), economic (analysis of markets), behavioral (cooperative strategy) and other descriptions of activities influenced by vague human communication.
## Mathematical formalism
The traditional deterministic set in a universe $$\mathcal U$$ can be represented by the characteristic function $$\varphi_A$$ mapping $$\mathcal U$$ into the two-element set $$\{0,1\}\ ,$$ namely for $$x\in{\mathcal U}$$ $\varphi_A(x)=0$ if $$x\notin A\ ,$$ and $\varphi_A(x)=1$ if $$x\in A\ .$$
A fuzzy subset $$A$$ of $$\mathcal U$$ is defined by a membership function $$\mu_A$$ mapping $$\mathcal U$$ into a closed unit interval $$[0, 1]\ ,$$ where for $$x\in\mathcal U$$ $\mu_A(x)=0$ if $$x\notin A\ ,$$ $\mu_A(x)=1$ if $$x\in A\ ,$$ and $\mu_A(x)\in(0,1)$ if $$x$$ possibly belongs to $$A$$ but it is not sure.
For the last case - the nearer to 1 the value $$\mu_A(x)$$ is, the higher is the possibility that $$x\in A\ .$$
### Example
Let us consider the bird's-eye view of a forest in Figure 1.
• Is location A in the forest? Certainly yes, $$\mu_{\rm forest}(A) = 1\ .$$
• Is location B in the forest? Certainly not, $$\mu_{\rm forest}(B) = 0\ .$$
• Is location C in the forest? Maybe yes, maybe not. It depends on a subjective (vague) opinion about the sense of the word "forest". Let us put $$\mu_{\rm forest}(C) = 0.6\ .$$
### Operations with fuzzy sets
The processing of fuzzy sets generalizes the processing of the deterministic sets. Namely, if $$A, B$$ are fuzzy sets with membership functions $$\mu_A, \mu_B\ ,$$ respectively, then also the complement $$\overline{A}\ ,$$ union $$A\cup B$$ and intersection $$A\cap B$$ are fuzzy sets, and their membership functions are defined for $$x\in{\mathcal U}$$ by $\mu_{\overline{A}}(x)=1-\mu_A(x)\ ,$ $\mu_{A\cup B}(x)=\max\left(\mu_A(x),\mu_B(x)\right)\ ,$ $\mu_{A\cap B}(x)=\min\left(\mu_A(x),\mu_B(x)\right)\ .$ Moreover, the concept of inclusion of fuzzy sets, $$A\subset B\ ,$$ is defined by $\mu_A(x)\leq\mu_B(x)$ for all $$x\in{\mathcal U}\ ,$$ and the empty and universal fuzzy sets, $$\emptyset$$ and $$\mathcal U\ ,$$ are defined by membership function $\mu_\emptyset(x)=0$ and $$\mu_{\mathcal U}(x)=1$$ for all $$x\in{\mathcal U}\ .$$ Even if all above operations and concepts consequently generalize their counterparts in the deterministic set theory, the resulting properties of fuzziness need not be identical with those of the deterministic theory, e.g., for some fuzzy set $$A\ ,$$ the relation $$A\cap\overline{A}\neq\emptyset\ ,$$ or even $$A\subset\overline{A}\ ,$$ may be fulfilled.
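These pointwise max/min operations are straightforward to sketch in code (an illustrative sketch; the membership values for locations A, B, C are taken from the forest example above):

```python
def complement(mu_a):
    """Membership function of the fuzzy complement."""
    return lambda x: 1 - mu_a(x)

def union(mu_a, mu_b):
    return lambda x: max(mu_a(x), mu_b(x))

def intersection(mu_a, mu_b):
    return lambda x: min(mu_a(x), mu_b(x))

# Memberships from the forest example: A is in the forest, B is not,
# and C belongs to it with grade 0.6.
mu_forest = lambda loc: {"A": 1.0, "B": 0.0, "C": 0.6}[loc]

mu_clash = intersection(mu_forest, complement(mu_forest))
# mu_clash("C") = min(0.6, 0.4) = 0.4 > 0: a fuzzy set can intersect
# its own complement, unlike a classical set.
```

This makes the non-classical behavior concrete: the intersection of "forest" with its complement has a positive grade at C, so it is not the empty fuzzy set.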
## Derived concepts
The basic definition of a fuzzy set can be easily extended to numerous set-based concepts. For example, a relation $$R$$ over the universe $$\mathcal U$$ can be defined by a subset of $${\mathcal U}\times{\mathcal U}\ ,$$ $$\{(x,y):y\in{\mathcal U},\,y\in{\mathcal U},\,x\,R\,y\}\ ,$$ a function $$f$$ over $$\mathcal U$$ can be identified with its graph $$\{(x,r):x\in{\mathcal U},\,r\in{\mathbb R},\,r=f(x)\}\subset {\mathcal U}\times{\mathbb R}$$ (where $$\mathbb R$$ is the set of real numbers). Then their fuzzy counterparts are defined as respective fuzzy set defined over $${\mathcal U}\times{\mathcal U}$$ and $${\mathcal U}\times R\ ,$$ respectively.
Figure 2: Fuzzy quantities.
## Related Theories
As the concept of sets is present at the background of many fields of mathematical and related models, it is applied, e.g., to mathematical logic (where each fuzzy statement is represented by a fuzzy subset of the objects of the relevant theory), or to the computational methods with vague input data (where each fuzzy quantity or fuzzy number is represented by a fuzzy subset of $$\mathbb R$$).
Namely, any fuzzy subset $$\mathbf a$$ of $$\mathbb R$$ is called fuzzy quantity iff there exist $$x_1<x_0<x_2\in\mathbb R$$ such that $$\mu_{\mathbf a}(x_0)=1\ ,$$ $$\mu_{\mathbf a}(x)=0$$ for $$x\notin[x_1,x_2]\ .$$
• If $$\mu_{\mathbf a}$$ is triangular then $$\mathbf a$$ is called fuzzy number.
• If it is trapezoidal then $$\mathbf a$$ is a fuzzy interval.
Binary algebraic operation $$x\star y$$ is extended to fuzzy quantities by the so-called extension principle, i.e., to $${\mathbf a}\star{\mathbf b}\ ,$$ where for $$x\in \mathbb R$$ $\mu_{{\mathbf a}\star{\mathbf b}}(x)=\sup\{\min(\mu_{\mathbf a}(y),\mu_{\mathbf b}(z)): y,z\in{\mathbb R},\ x=y\star z\}\ .$
The algebraic properties of extended operations are weaker than those of their patterns over real numbers, where the differences are mostly caused by the vagueness of fuzzy zero (or fuzzy one) and equality relation (see also Mares 1994).
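On discretized fuzzy quantities the supremum in the extension principle becomes a maximum over finitely many pairs, which makes it easy to sketch (an illustrative sketch; the particular triangular grades below are assumptions, not from the article):

```python
def extend(op, mu_a, mu_b):
    """Sup-min extension of a binary operation to fuzzy quantities
    given as dicts mapping support points to membership grades."""
    out = {}
    for y, ma in mu_a.items():
        for z, mb in mu_b.items():
            x = op(y, z)
            # sup-min: keep the best grade among all (y, z) with y * z = x
            out[x] = max(out.get(x, 0.0), min(ma, mb))
    return out

# Discretized triangular fuzzy numbers "about 1" and "about 2":
about_1 = {0.0: 0.0, 0.5: 0.5, 1.0: 1.0, 1.5: 0.5, 2.0: 0.0}
about_2 = {1.0: 0.0, 1.5: 0.5, 2.0: 1.0, 2.5: 0.5, 3.0: 0.0}

about_3 = extend(lambda y, z: y + z, about_1, about_2)
# The extended sum peaks with grade 1 at 1 + 2 = 3, as expected.
```

For triangular membership functions this reproduces the well-known rule that the extended sum of two triangular fuzzy numbers is again triangular, with the three defining points added coordinate-wise.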
## Applications
The fuzzy set theory and related branches are widely applied in the models of optimal control, decision-making under uncertainty, processing vague econometric or demographic data, behavioral studies, and methods of artificial intelligence. For example, there already exists a functional model of a helicopter controlled from the ground by simple "fuzzy" commands in natural language, like "up", "slowly down", "turn moderately left", "high speed", etc. "Fuzzy" washing machines, cameras or shavers are common commercial products. Fuzzy sets also can be applied in sociology, political science, and anthropology, as well as in any field of inquiry dealing with complex patterns of causation (Ragin 2000).
## References
• D. Dubois and H. Prade (1988) Fuzzy Sets and Systems. Academic Press, New York.
• P. Klement, R. Mesiar, E. Pap (2000) Triangular Norms. Kluwer Acad. Press, Dordrecht.
• G.J. Klir, T.A. Folger (1988) Fuzzy Sets, Uncertainty and Information. Prentice Hall, Englewood Cliffs.
• G.J. Klir, Bo Yuan (Eds.) (1996) Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems. Selected Papers by Lotfi A. Zadeh. World Scientific, Singapore.
• M. Mares (1994) Computations Over Fuzzy Quantities. CRC--Press, Boca Raton.
• L.A. Zadeh (1965) Fuzzy sets. Information and Control 8 (3) 338--353.
• C.C. Ragin (2000) Fuzzy-Set Social Science. University Of Chicago Press.
http://mathoverflow.net/questions/67903?sort=votes | ## Complex manifolds in which the exponential map is holomorphic
Let $X$ be a complex manifold and $g$ a hermitian metric on $X$. Consider the Riemannian exponential $\exp_p: T_p X \to X$.
If $\exp_p$ is holomorphic for every $p \in X$, then $(\exp_p)^{-1}$, suitably restricted, provide holomorphic normal coordinates near $p$, with respect to which the metric osculates to order 2 to the standard metric at the origin. This shows that $g$ is a Kähler metric.
However, Kähler is not sufficient to ensure that $\exp_p$ is holomorphic: take $X$ a curve of genus $g \geq 2$. If $\exp_p:T_pX \to X$ is holomorphic, then it lifts to a holomorphic map from $T_pX$ to the universal cover $\widetilde{X} = \Delta$, giving a holomorphic map $T_pX \simeq \mathbb{C} \to \Delta$, which must be constant by Liouville's theorem. In fact, one can see that $\exp$ cannot be holomorphic if $X$ is Kobayashi hyperbolic.
This leaves the question: What are the hermitian manifolds/metrics whose exponential map is holomorphic?
In the $1$ complex dimension case this forces the metric to be locally isomorphic to the standard metric. The exponential map preserves length in the radial direction, so if it is conformal then it must also preserve length in the other direction. – Tom Goodwillie Jun 16 2011 at 3:52
## 2 Answers
NB: I've had a little time to think about this and can now improve my answer, in particular, removing the real-analytic assumption, which, as I suspected, was not necessary. Here is the improved answer:
If the metric $g$ is Kähler, then having the exponential map from a point $p\in M$ be holomorphic makes it flat in a neighborhood of $p$.
Suppose that $\exp_p:T_pM\to M$ is holomorphic near $0_p\in T_pM$ (where we use the natural holomorphic structure on the complex vector space $T_pM$). Let $z:T_pM\to\mathbb{C}^n$ be a complex linear isometry, so that the hermitian metric on $T_pM$ is just $|z|^2$ in the usual sense. Let $Z$ be the holomorphic 'radial' vector field on $\mathbb{C}^n$, whose real part is the standard radial vector field on $\mathbb{C}^n$.
Then $${\exp_p}^*g = g_{i\bar j}(z)\ dz^i\ d\overline{z}^j$$ for some functions $g_{i\bar j}$ on a neighborhood of $0\in\mathbb{C}^n$. Since $g$ is Kähler, there is a function $f$ defined on a neighborhood of $0\in\mathbb{C}^n$ such that $$g_{i\bar j} = \frac{\partial^2f}{\partial z^i\ \partial\overline{z}^j}.$$
Now, the condition that $z$ furnish Gauss normal coordinates for ${\exp_p}^*g$ is easily seen to be that
$$\mathcal{L}_Z\bigl(\bar\partial f\bigr) = \bar\partial\bigl(|z|^2\bigr).$$ In particular, $\bar\partial\bigl(\mathcal{L}_Z(f - |z|^2)\bigr) = 0$, so $\mathcal{L}_Z(f - |z|^2) = h$ for some holomorphic function $h$ on a neighborhood of $0$. This $h$ must vanish at $0$, so it is easy, by adding the real part of the appropriate holomorphic function to $f$ (which won't change $g$) to arrange that $h\equiv0$ and, moreover, that $f(0) = 0$. But this now implies that the real-valued function $f-|z|^2$ vanishes at the origin and also is constant along the radial vector field. Thus, $f = |z|^2$, and the metric $g$ is flat in these coordinates.
I do not like complex numbers and can make a mistake easily...
Let $L_p$ be a complex line in a tangent space $T_pX$. It is easy to see that $\exp_p$ gives an isometric embedding $L_p\hookrightarrow X$ which is also star-shaped with center at $p$; set $L=\exp_p(L_p)$
Take any other point $q\in L$, and let $L_q\subset T_qX$ be the tangent subspace to $L$. Note that the maps $\exp_p$ and $\exp_q$ coincide (up to a shift) on the geodesic $(pq)$. [Here I use that if two holomorphic maps coincide on the real line then they coincide in the complex plane.] It follows that $L=\exp_q(L_q)$. Therefore $L$ is totally geodesic.
In other words: For any complex sectional direction in $X$, there is a tangent totally geodesic surface which is isometric to complex plane.
In particular, the sectional curvature in all complex sectional directions is zero, and therefore the curvature of $X$ is identically zero; the latter is stated in Kobayashi--Nomizu, Foundations of differential geometry, Volume 2, Chapter IX, Prop. 7.1 (thanks to RdN).
I don't quite understand your arguement for exp of a complex line to be totally geodesic. But if so there is a lemma in vol II of Kobayashi Nomizu (Prop. 7.1) that says if two tensors with the symmetry and J invariance of a Kahler curvature tensor agree on complex lines they must be equal. It's clear curves have to be flat by Gauss lemma and totally geodesic would mean the second fundamental vanishes, so we can measure R of the ambient space from such curves. Thus the Prop. says total curvature must vanish. – RdN Jun 25 2011 at 16:43
@RdN, Thank you, I will include it in the answer – Anton Petrunin Jun 28 2011 at 10:30
http://mathoverflow.net/questions/13171?sort=oldest | ## How many trial picks expectedly sufficient to cover a sample space?
Consider a sequence of independent events where an $r$ element subset of an $n$ element set is picked uniformly randomly (i.e. any of the $\binom{n}{r}$ possibilities being equally likely).
What is the expected number of subsets one has pick to cover the whole set?
Here the terminology means: a sequence of picks $A_1,A_2,\ldots,A_n$ covers the whole set if $|A_1 \cup \cdots \cup A_n| = n$. A sequence $A_1, A_2,\ldots$ succeeds to cover the whole set in $n$ steps, if $A_1,\ldots,A_n$ covers the whole set but $A_1,\ldots, A_{n-1}$ does not.
The expected number seems to be much higher than one would imagine, but I could not quite come up with a closed form. Chances are, it's always a rational number.
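For intuition, the expectation is easy to estimate by simulation. A quick Monte Carlo sketch (the function names here are mine, not standard):

```python
import random

def picks_to_cover(n, r):
    """Draw uniform r-subsets of {0,...,n-1} until their union is everything."""
    covered, count = set(), 0
    while len(covered) < n:
        covered |= set(random.sample(range(n), r))
        count += 1
    return count

def estimate(n, r, trials=20000):
    # sample mean over many independent runs
    return sum(picks_to_cover(n, r) for _ in range(trials)) / trials

# r = 1 is the classical coupon collector problem, whose answer is n*H_n
# (for n = 5 that is 5*(1 + 1/2 + 1/3 + 1/4 + 1/5) = 137/12 ≈ 11.42)
```

Even for modest $n$ and $r$ the estimates come out noticeably larger than the naive $n/r$.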
You should edit your question to not use $n$ for both the cardinality of the set and the number of steps to cover the set. Also, interesting question! I look forward to seeing what people come up with for this one. – Zev Chonoles Jan 27 2010 at 20:15
As you expected, it is always rational. If you let F(n,k,r) denote the expected number of additional sets you need when you already have covered k elements of your n, then you can set up a linear recurrence for F(n,k,r) in terms of F(n, k+1, r), F(n, k+2, r), ..., F(n, k+r, r) by looking at how many elements are covered by your next set. Combined with the boundary condition F(n,n,r)=0, you could in theory solve to get F(n,0,r) as a rational number. This is what is done in the "coupon collector" problem referenced by Tal K (the case r=1), but is impractical for, say, n/r bounded. – Kevin P. Costello Jan 27 2010 at 21:21
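In case it's useful, here is a sketch of that recurrence, indexed instead by the number $u$ of still-uncovered elements (my notation; exact rationals confirm the answer is always rational, assuming $1 \le r \le n$):

```python
from math import comb
from fractions import Fraction

def expected_picks(n, r):
    # E[u] = expected further picks when u elements are still uncovered.
    # A pick covers j new elements with hypergeometric probability
    # C(u,j)*C(n-u,r-j)/C(n,r); condition on covering at least one.
    E = [Fraction(0)] * (n + 1)
    for u in range(1, n + 1):
        p_stay = Fraction(comb(n - u, r), comb(n, r))  # pick covers nothing new
        total = Fraction(0)
        for j in range(1, min(r, u) + 1):
            p = Fraction(comb(u, j) * comb(n - u, r - j), comb(n, r))
            total += p * E[u - j]
        E[u] = (1 + total) / (1 - p_stay)
    return E[n]

# r = 1 reproduces the coupon collector answer n * H_n:
# expected_picks(5, 1) == Fraction(137, 12)
```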
## 3 Answers
This process will cover the set faster than making $r$ random selections of a single element at each step ("sampling with replacement", producing a multiset of $r$ not-necessarily-distinct elements instead of a set of $r$ distinct elements). The latter is taking $r$ steps at a time in the Coupon Collector process, which takes $n \log n$ steps. So we need at least $(n/r) \log n$ steps on average. This should be a close approximation when $n/r$ is large and within a bounded (not necessarily constant) factor of the truth when $n/r$ is bounded. The case when $n=2r$ is close to the "20 questions" problem of Erdős and Rényi.
Do you mean "at most (n/r) log(n) steps"? – Reid Barton Jan 27 2010 at 20:48
Yes, "at most", meaning that the slower coverage process takes (n/r)*log(n). Thanks for catching that. Also, when I say "a close approximation" I suppose that the asymptotic difference between the with- and without- replacement expected times (in the case when n/r is large) would be an additive difference of O(log n), not a multiplicative difference of a constant factor in the larger main term. In the n/r bounded case there could well be some log-periodic function as the "constant", as in the Erdos-Renyi problem. It would take a more detailed calculation to find out. – Tal K Jan 27 2010 at 20:59
It seems like even when n/r is bounded that n/r log n should be the right answer up to a (1+o(1)) multiplicative factor. If we considered an alternative model where each element is included in a set INDEPENDENTLY with some probability p, then it's easy to see (e.g. by computing the second moment of the number of omitted elements) that the threshold is log n/p sets. But the threshold is monotone in p, and you can sandwich the original problem with r=cn in between p=c-o(1) and p=c+o(1) with high probability. – Kevin P. Costello Jan 27 2010 at 21:26
If you fill cartons of r distinct coupons, it takes an average of (n/n + n/(n-1) + ... + n/(n-r+1)) coupons to fill a carton. So, a random r selection is like taking that many steps in the coupon collector process. – Douglas Zare Jan 27 2010 at 21:42
Kevin, your alternative model with p=1/2 is the Erdos-Renyi "20 Questions" problem, and the expected coverage time in that case involves both the [base 2] log(n) and some function of the fractional part of log(n). That's not necessarily inconsistent with your remark (for instance the log-periodic term could be additive, not multiplicative, I don't have the reference handy to check which it is). – Tal K Jan 27 2010 at 21:47
EDIT: While the $r=1$ case is the easiest, I thought it would be helpful to work it out anyway. I get that the expected number of picks necessary for $r=1$ is $nH_n$, where $H_n$ is the $n$th harmonic number, which is in line with Tal K's answer since $H_n\approx\ln(n)$.
Suppose the total number of elements covered by our picks so far is $k$. If we calculate the expected number of picks it takes to get to $k+1$, then we simply take the sum of our result from $k=0$ to $k=n-1$. There are $n-k$ elements we still need to hit, so there is an $\frac{n-k}{n}$ probability of having $k+1$ covered after 1 pick, a $\frac{n-k}{n}\cdot\frac{k}{n}$ probability of having $k+1$ covered after exactly 2 picks, and in general a $\frac{n-k}{n}\left(\frac{k}{n}\right)^{j-1}$ probability of getting to $k+1$ after exactly $j$ picks. Thus, the expected number of picks to go from $k$ covered to $k+1$ covered is $\frac{n-k}{n}\sum_{j=1}^\infty j\left(\frac{k}{n}\right)^{j-1}$, which by the standard derivative trick we know is $\frac{n-k}{n}\cdot\frac{1}{(1-\frac{k}{n})^2}=\frac{n}{n-k}$. Thus the expected number of picks of 1-element subsets necessary to cover an $n$-element set is $\sum_{k=0}^{n-1}\frac{n}{n-k}=n\sum_{k=1}^n\frac{1}{k}=nH_n$.
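The two expressions can be checked against each other exactly, e.g. with this tiny sanity check in exact rationals (my code):

```python
from fractions import Fraction

# expected picks to go from k covered to k+1 covered is n/(n-k);
# summing over k = 0,...,n-1 should give n * H_n
def expected_r1(n):
    return sum(Fraction(n, n - k) for k in range(n))

def n_times_harmonic(n):
    return n * sum(Fraction(1, k) for k in range(1, n + 1))

assert expected_r1(5) == n_times_harmonic(5) == Fraction(137, 12)
```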
The r=1 case is just the coupon collector's problem - en.wikipedia.org/wiki/…. Look at the 'calculating the expectation' section of the linked page for a much simpler and more elegant way of getting your result. – Jeff Hussmann Jan 28 2010 at 17:42
The expected number of picks needed equals the sum of the probabilities that at least $t$ picks are needed, which means that $t-1$ subsets left at least one value uncovered. We can use inclusion-exclusion to get the probability that at least one value is uncovered.
The probability that a particular set of $k$ values is uncovered after $t-1$ subsets are chosen is
$$\left(\frac{\binom{n-k}{r}}{\binom{n}{r}}\right)^{t-1}$$
So, by inclusion-exclusion, the probability that at least one value is uncovered is
$$\sum_{k=1}^n \binom{n}{k}(-1)^{k-1}\left(\frac{\binom{n-k}{r}}{\binom{n}{r}}\right)^{t-1}$$
And then the expected number of subsets needed to cover everything is
$$\sum_{t=1}^\infty \sum_{k=1}^n \binom{n}{k}(-1)^{k-1} \left(\frac{\binom{n-k}{r}}{\binom{n}{r}}\right)^{t-1}$$
Change the order of summation and use $s=t-1$:
$$\sum_{k=1}^n \binom{n}{k}(-1)^{k-1} \sum_{s=0}^\infty \left(\frac{\binom{n-k}{r}}{\binom{n}{r}}\right)^s$$
The inner sum is a geometric series.
$$\sum_{k=1}^n \binom{n}{k} (-1)^{k-1}\frac{\binom{n}{r}}{\binom{n}{r}-\binom{n-k}{r}}$$

$$\binom{n}{r} \sum_{k=1}^n (-1)^{k-1}\frac{\binom{n}{k}}{\binom{n}{r}-\binom{n-k}{r}}$$
I'm sure that should simplify further, but at least now it's a simple sum. I've checked that this agrees with the coupon collection problem for $r=1$.
Interestingly, Mathematica "simplifies" this sum for particular values of $r$, although what it returns even for the next case is too complicated to repeat, involving EulerGamma, the gamma function at half-integer values, and PolyGamma[0,1+n].
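The sum is also easy to evaluate exactly by computer; a short script (my code) that reproduces $nH_n$ at $r=1$:

```python
from math import comb
from fractions import Fraction

def expected_cover(n, r):
    # C(n,r) * sum_{k=1}^{n} (-1)^(k-1) * C(n,k) / (C(n,r) - C(n-k,r))
    return comb(n, r) * sum(
        Fraction((-1) ** (k - 1) * comb(n, k), comb(n, r) - comb(n - k, r))
        for k in range(1, n + 1)
    )

# expected_cover(5, 1) == Fraction(137, 12), i.e. 5 * H_5
```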
Maple doesn't give a simpler form even for r=2, although there's no guarantee that there's not some trick it doesn't see. Also, your answer seems to agree with Tal K's asymptotics below. – Michael Lugo Jan 28 2010 at 1:04
If you plug r=1 into that formula, it still takes some manipulation to convert the alternating sum of (n choose k)/k to (1 + 1/2 + 1/3 + ... + 1/n). Can one express it as a similar sum of positive decreasing terms? – Douglas Zare Jan 28 2010 at 4:47
For r=2, here is what Mathematica reports: n/(4^n (1-2n)^2 Sqrt[Pi]) * (Gamma[1/2-n](-2n^2 Gamma[n] + Gamma[1+n])) + n(n-1)/(1-2n)^2*(1+EulerGamma(-1+2n)+(-1+2n)PolyGamma[0,1+n]). I've tried to simplify this by cancelling a few terms, and I hope I haven't introduced any errors. Note that 2n-1 or n-1/2 shows up in many places. For r=3, the formula involves many occurrences of Sqrt[1+6n-3n^2] which is imaginary for n \ge 3. – Douglas Zare Jan 28 2010 at 11:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344167709350586, "perplexity_flag": "middle"} |
http://mathoverflow.net/revisions/107884/list | ## Return to Answer
2 added 225 characters in body; added 20 characters in body
First, let me point out that if $X$ has rational singularities, then $H^i(\tilde{X}, O_{\tilde{X}}) \cong H^i(X, O_X)$ for all $i \geq 0$.
Indeed, $X$ has rational singularities if and only if
1. $R^j \pi_* O_{\tilde X} = 0$ for $j > 0$ and
2. $\pi_* O_{\tilde X} = O_X$.
It immediately follows from the Leray spectral sequence that $$H^i(\tilde{X}, O_{\tilde{X}}) \cong H^i(X, O_X)$$ for all $i \geq 0$.
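For the record, the spectral-sequence step is the usual degeneration argument:

```latex
E_2^{p,q} = H^p(X, R^q \pi_* O_{\tilde X}) \;\Rightarrow\; H^{p+q}(\tilde X, O_{\tilde X}),
```

by 1. all rows with $q > 0$ vanish, so the sequence degenerates at $E_2$, and by 2. the remaining row gives $H^p(X, \pi_* O_{\tilde X}) = H^p(X, O_X) \cong H^p(\tilde X, O_{\tilde X})$.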
In fact, for any Cartier divisor $D$ on $X$, the same argument implies that $$H^i(\tilde{X}, O_{\tilde{X}}(\pi^* D) ) = H^i(X, O_X(D))$$ for any $i \geq 0$ since the projection formula can be applied in the cases of 1. and 2. above.
Now, without rational singularities, you can run into trouble. For example, suppose that $X$ is a normal Cohen-Macaulay variety with an isolated singularity $x \in X$ that is not rational. Consider the exact triangle in the derived category: $$O_X \to R \pi_* O_{\tilde X} \to C \xrightarrow{+1}$$ Because $X$ is normal and Cohen-Macaulay and has an isolated non-rational singularity, we know $C = M[-n+1]$ is a nonzero module supported at $x \in X$ (shifted over by $n-1$). See Lemma 3.3 in Rational, Log Canonical, Du Bois Singularities: On the Conjectures of Kollár and Steenbrink by Sándor Kovács.
Then we have the following exact sequence by taking (hyper)cohomology $$0 \to H^{n-1}(X, O_X) \to {{H}}^{n-1}(\tilde{X}, O_{\tilde{X}}) \to {\mathbb{H}^{n-1}}(X, C) \to H^n(X, O_X) \to H^n(\tilde{X}, O_{\tilde{X}}) \to 0$$ where the two end terms are zero since $C = M[-n+1]$ is an Artinian module with a shift. On the other hand, $\mathbb{H}^{n-1}(X, C) = H^0(X, M) \neq 0$ for the same reason.
Now, if $\tilde{X}$ is for example Fano and we are in characteristic zero, then $$H^i(\tilde{X}, O_{\tilde{X}}) = H^i(\tilde{X}, O_{\tilde{X}}(K_{\tilde X}-K_{\tilde X})) = 0$$ by Kodaira vanishing for $i > 0$. But then $H^n(X, O_X) \neq 0$ from the exact sequence.
Beyond the Fano case, you might luck out of course, but I don't see any reason why it would hold in general.
http://math.stackexchange.com/questions/tagged/median+uniform-distribution

# Tagged Questions
### median of a uniform distribution [0,1]
I need to find the distribution of the median from the given distribution, where n is known to be odd. The formula given in class for this is: $n=2m+1$ where $m\in\mathbb{N}$ ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9098002314567566, "perplexity_flag": "head"} |
http://mathhelpforum.com/differential-equations/137312-solved-verify-solution-2nd-order-eqn-taking-real-part.html

# Thread:
1. ## [SOLVED] Verify solution for 2nd order eqn. by taking real part....
Find the particular solutions of $y_{p}(t)$ for the DEs:
$y''+B^{2}y=\cos(wt)$ and

$y''+B^{2}y=\sin(wt)$

by taking both the real and imaginary parts of the solution:

$z_{p}=\dfrac{\sin\!\big((w-B)\frac{t}{2}\big)}{(w-B)\frac{1}{2}}\,\dfrac{e^{i(w+B)\frac{t}{2}}}{i(w+B)}$
Verify the solution for: $y''+B^{2}y=\cos(wt)$
Just to clarify, all other variables except $i$ are real numbers.
Well, here is what I tried...

I started off by trying to take the real part of $z_{p}$, as the directions state. I think I can do it fine until I get to the denominator on the right side. Once I take the real part of $i(w+B)$, it makes the denominator zero, which leaves me stuck.
Any help would be GREAT!
2. Take the i out from the denominator by multiplying top and bottom by i to get:
$-2i\left(\frac{\sin((w-b)t/2)\,e^{i(w+b)t/2}}{(w-b)(w+b)}\right)$
Now, when you expand the exponent you'll get an x+iy term which when multiplied by the i gives you a real component.
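If it helps, the claim that the real part of $z_p$ solves $y''+B^2y=\cos(wt)$ can also be checked numerically. A sketch (the finite-difference check and the sample values $w=2$, $B=5$ are my own choices):

```python
import cmath
import math

def z_p(t, w, B):
    # z_p = sin((w-B)t/2) / ((w-B)/2) * exp(i(w+B)t/2) / (i(w+B))
    return (math.sin((w - B) * t / 2) / ((w - B) / 2)
            * cmath.exp(1j * (w + B) * t / 2) / (1j * (w + B)))

def residual(t, w=2.0, B=5.0, h=1e-4):
    # second central difference approximates y'' for y = Re(z_p)
    y = lambda s: z_p(s, w, B).real
    ypp = (y(t - h) - 2 * y(t) + y(t + h)) / h ** 2
    return abs(ypp + B ** 2 * y(t) - math.cos(w * t))

for t in (0.3, 1.0, 2.7):
    assert residual(t) < 1e-5   # y'' + B^2 y ≈ cos(wt)
```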
3. Thanks, I got it figured out now. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9252359867095947, "perplexity_flag": "middle"} |
http://physics.aps.org/articles/v2/73

# Viewpoint: For faster magnetic switching—destroy and rebuild
, Institute of Solid State Research, IFF-9, Forschungszentrum Jülich GmbH, D-52425 Jülich, Germany
Published September 8, 2009 | Physics 2, 73 (2009) | DOI: 10.1103/Physics.2.73
Magnetic switching is typically a continuous process, where a field pulse rotates a magnet from up to down, but it is now possible to do this faster — and with all-optical methods — by first quenching the magnetization to zero and then repolarizing it in the opposite direction.
#### Ultrafast Path for Optical Magnetization Reversal via a Strongly Nonequilibrium State
K. Vahaplar, A. M. Kalashnikova, A. V. Kimel, D. Hinzke, U. Nowak, R. Chantrell, A. Tsukamoto, A. Itoh, A. Kirilyuk, and Th. Rasing
Published September 8, 2009
Magnetic data storage technology and the ever-increasing speed of information processing have brought enormous changes to our daily life. These developments naturally lead us to ask if there is a physical limit to the speed at which magnetic moments can be switched [1]—a topic that has caused no shortage of controversy in the scientific community. Exploring this limit is complicated, partly because switching the magnetization from one direction to the other can occur in multiple ways and along different paths. For example, magnetic and electric fields, electric currents, and laser pulses can all stimulate magnetic switching and the trajectory of the magnetization vector from its initial to its final state will vary with each of these switching mechanisms.
Kadir Vahaplar and colleagues at Radboud University Nijmegen in The Netherlands, in collaboration with scientists in Germany, the UK, Japan, and Russia have made a dramatic leap forward in exploring the limits to magnetic switching. Writing in Physical Review Letters, they demonstrate a magnetic write-read event that occurs on times as short as $30$ picoseconds (ps), which is the fastest magnetic switching process observed so far [2]. But the work by Vahaplar et al. is much more than the demonstration of high-speed magnetic switching. By combining sophisticated experimental methods with theoretical tools that fully account for the magnetization on many length scales (from the continuum to the atomic and electronic limit), their study leads to important insight and detailed understanding of what fundamental processes allow ultrafast magnetic switching to occur.
So far, groups have mainly looked at ways of turning and redirecting the magnetization continuously, typically by causing it to precess with magnetic field pulses [3]. Using purely optical methods, Vahaplar et al. show that a faster way to switch the magnetization is to temporarily quench it [4], that is, reduce it to zero, and restore it immediately afterwards in the opposite direction, a scheme they aptly call a linear reversal (Fig. 1).
Their experiments are an ingenious combination of the different effects by which light interacts with magnetic moments. These effects are usually categorized as optomagnetic or magneto-optical, depending on whether they describe the influence of the light pulse on the magnetization or vice versa. In their setup, Vahaplar et al. first stimulate the magnetization of amorphous $20\,\mathrm{nm}$ ferromagnetic films made of $\mathrm{Gd}_x\mathrm{Fe}_{100-x-y}\mathrm{Co}_y$ with a short and intense circularly polarized (pump) laser pulse and then image the magnetization with a second, equally short but linearly polarized (probe) laser pulse.
The first laser pulse has two effects on the magnetization. First, it rapidly pumps energy into the film, locally heating the material and demagnetizing it [5]. The energy of the laser pulse is primarily absorbed by the electrons, which reach a temperature of about $1200\,\mathrm{K}$ within the first few hundred femtoseconds (fs) after the pulse. Changes in the electronic temperature affect the magnetic properties on sub-ps time scales. Most importantly, the magnitude of the magnetization $M$ decreases as the temperature of the electronic system approaches the Curie temperature $T_C$ (the temperature at which the material undergoes a phase transition from a ferromagnet to a paramagnet, at equilibrium). Vahaplar et al. show that the magnetization can in fact be temporarily “destroyed” down to a value of zero about $500\,\mathrm{fs}$ after applying a sufficiently strong laser pulse.
The first laser pulse also affects the magnetization via the inverse Faraday effect [6]: as the circularly polarized electromagnetic field pulse traverses the sample, it acts as an effective magnetic field along the pulse’s propagation direction. This effective magnetic field is proportional to the intensity of the laser pulse and to its degree of circular polarization. The inverse Faraday effect provides outstanding possibilities to control the magnetization, since it can generate locally enormously strong effective magnetic fields of up to about $20\,\mathrm{T}$. It can switch the magnetization as well, since the sign of the field only depends on the pulse’s chirality. This optomagnetic, nonthermal control of the magnetization was first demonstrated by the Nijmegen group in 2007 [7]. Essentially, they showed that laser pulses as short as $40\,\mathrm{fs}$ could induce optomagnetic switching, but it was not clear how much time the magnetization required to complete the switching process after the exposure to such a short pulse.
This information is a central result of the current work by Vahaplar et al. They used the well-known Faraday effect—where the magnetization of a material rotates the polarization of light transmitted through it—to image the magnetization dynamics within a few tens of picoseconds after the pump pulse. By carefully varying the delay between the circularly polarized pump pulse and the linearly polarized probe pulse, the authors could obtain precise information on the spatiotemporal evolution of the magnetization in the film. They found that the switching process completes within a time well below $90\,\mathrm{ps}$, which is very short but still much longer than the duration of the pulse. The reversal is initiated in a small region in which the heating essentially destroys the magnetization for an instant. Subsequently, after about $30\,\mathrm{ps}$, the magnetization nucleates and rebuilds as the temperature of the electrons decreases. This either leads to an expansion of the nucleus to form a larger area with a reversed magnetization or to the restoration of the initial state, depending on the initial magnetization direction and the chirality of the laser pulse.
Reliable switching only occurs within a narrow range of parameters for the laser pulse. The switching probability also depends critically on how high the electrons are heated: too little, and the initial magnetization will not be destroyed, too much and the material will not cool down quickly enough to restore the magnetization in the opposite direction before all information is lost. Vahaplar et al. performed sophisticated simulation studies to build a phase diagram of the suitable combinations of laser pulse duration and intensity, which are confirmed by the experiments.
The technique of locally heating the magnetization above the Curie temperature and restoring it in the opposite direction by applying a magnetic field upon cooling is well known in the magnetic recording industry [8]. A recent application of this concept is heat-assisted magnetic recording in high-density magnetic data storage, where a heating laser is combined with an inductive write head that provides the field along which the magnetization rebuilds as the temperature drops. This idea is also similar to the writing process in magneto-optical drives, a technology that has been popular since the nineties. In both of these cases, however, the desire to increase the storage density more so than increasing the writing speed was the main drive behind the technology.
What has particular aesthetic appeal in the process found by Vahaplar and co-workers is that a single laser pulse does all the work: It provides both the heating via energy transfer and the switching field via the inverse Faraday effect. Since the strength of these effects depends on the intensity and duration of the irradiation, it is clear that the pulse must be properly shaped to achieve the desired effect. Remarkably, the switching appears to occur even if for a short time both the magnetization and the effective field vanish, suggesting that the system partly stores information on the previous state.
Passing through a state of quenched magnetization seems to be the key to obtain ultimate magnetic switching speed. Once the system is in a quenched state, the strong magnetic exchange interaction between electrons will rapidly restore ferromagnetism. Harnessing electron exchange—the strongest force in magnetism—is certainly a very promising way to achieve ultrafast switching. It is, for example, an essential aspect of another switching mechanism that simulations predict should occur on the time scale of a few tens of ps: magnetic vortex core switching [9]. In this case, it is the small perpendicularly magnetized region at the core of a spiralling magnetization (vortex) that is switched by a short magnetic field pulse. Although vortex core switching is very different than the linear switching Vahaplar et al. explore, it is remarkable that both high-speed methods involve the temporary formation of a region with vanishing magnetization [10].
An important remaining question in the work from Vahaplar et al. concerns the transfer and the balance of angular momentum. The magnetization reversal is connected with a change of angular momentum, which must be provided from somewhere. Yet, it is generally agreed that the apparently simple assumption of a direct transfer of the photon spin to the magnetic system is not the solution [11], suggesting that the atomic lattice may play an important role in angular momentum conservation.
### References
1. C. H. Back and D. Pescia, Nature 428, 808 (2004).
2. K. Vahaplar, A. M. Kalashnikova, A. V. Kimel, D. Hinzke, U. Nowak, R. Chantrell, A. Tsukamoto, A. Itoh, A. Kirilyuk, and T. Rasing, Phys. Rev. Lett. 103, 117201 (2009).
3. H. W. Schumacher, C. Chappert, R. C. Sousa, P. P. Freitas, and J. Miltat, Phys. Rev. Lett. 90, 017204 (2003).
4. N. Kazantseva, D. Hinzke, R. W. Chantrell, and U. Nowak, Europhys. Lett. 86, 27006 (2009).
5. E. Beaurepaire, J. C. Merle, A. Daunois, and J.-Y. Bigot, Phys. Rev. Lett. 76, 4250 (1996).
6. J. P. van der Ziel, P. S. Pershan, and L. Malmstrom, Phys. Rev. Lett. 15, 190 (1965).
7. C. D. Stanciu, F. Hansteen, A. V. Kimel, A. Tsukamoto, A. Itoh, A. Kiriklyuk, and T. Rasing, Phys. Rev. Lett. 94, 237601 (2007).
8. J. Hohlfeld, Th. Gerrits, M. Bilderbeek, Th. Rasing, H. Awano, and N. Ohta, Phys. Rev. B 65, 012413 (2001).
9. R. Hertel, S. Gliga, M. Fähnle, and C. M. Schneider, Phys. Rev. Lett. 98, 117201 (2007).
10. E. Feldtkeller, Z. Angew. Phys. 19, 530 (1965).
11. B. Koopmans, M. van Kampen, J. T. Kohlhepp, and W. J. M. de Jonge, Phys. Rev. Lett. 85, 844 (2000).
### About the Author: Riccardo Hertel
Riccardo received his Ph.D. from the University of Stuttgart, Germany, in 1999 and his Habilitation from the University of Halle-Wittenberg, Germany, in 2005. He is a lecturer at the University of Duisburg-Essen and works at the Forschungszentrum Jülich in the Institute of Solid State Research, where he is leading a research group on micro- and nanomagnetism.
http://math.stackexchange.com/questions/189621/is-tan-pi-2-undefined-or-infinity?answertab=active

# Is $\tan(\pi/2)$ undefined or infinity?
The way I understand it, $0/0$ is undefined or indeterminate because if $c=0/0$, then $c\cdot 0=0$, where $c$ can be any finite number, including $0$ itself.
If we also observe a fraction $F=a/b$, where $a,b$ are positive real numbers, the value of $F$ increases as $b$ decreases. Since $0$ is the least non-negative number, if $b$ tends to $0$ then $F$ tends to $\infty$, which is greater than all finite numbers.
I have also heard that no number is equal to infinity; a variable can only tend to infinity. Also, any ratio is either a variable or resolves to a number.
Now,
$$\tan(\pi/2) = \frac{\sin(\pi/2)}{\cos(\pi/2)}=\frac{1}{0},$$
is it undefined or infinity?
Also we know
$$\tan2x= \frac{2\tan x}{1-\tan^2x}$$
Is this formula valid for $x=\pi/2$?
Any rectification is more than welcome.
It is undefined. – Holdsworth88 Sep 1 '12 at 11:56
A number does not tend to, a sequence or a function might. – Stefan Walter Sep 1 '12 at 11:59
What is infinity? I don't remember infinity being defined. – Arjang Sep 1 '12 at 12:01
You are not computing limits, and therefore you cannot divide by zero. – Siminore Sep 1 '12 at 12:02
@Arjang, do you mean infinity and undefined are synonymous? – lab bhattacharjee Sep 1 '12 at 12:03
## 2 Answers
In the algebraic sense, it is just an undefined quantity. You have no way to assign a value to $\tan (\pi/2)$ so that its value is compatible with the usual algebraic rules together with the trigonometric identities. Thus we have to give up either the algebraic rules or the definedness of $\tan(\pi/2)$. In many cases, we just leave it undefined so that our familiar algebra survives.
But in other situations, where some geometric aspect of the tangent function has to be considered, we can put those rules aside and define the value of $\tan(\pi/2)$. For example, when we are motivated by the continuity of the tangent function on $(-\pi/2, \pi/2)$, we may let $\tan(\pm\pi/2) = \pm\infty$ so that it extends to a well-behaved one-to-one mapping between $[-\pi/2, \pi/2]$ and the extended real line $[-\infty, \infty]$. As another example, anyone who is familiar with complex analysis will find that the tangent function is a holomorphic function from the complex plane $\Bbb{C}$ to the Riemann sphere $\hat{\Bbb{C}}=\Bbb{C}\cup\{\infty\}$ by letting $\tan\big((\tfrac{1}{2}+n)\pi\big) = \infty$, the point at infinity.
In conclusion, there is no general rule for assigning a value to $\tan(\pi/2)$, and if needed, it depends on which property you are considering.
$\tan \frac{\pi}{2}$ is undefined, $\lim \limits_{x \rightarrow \frac{\pi}{2}+} \tan x = -\infty$ and $\lim \limits_{x \rightarrow \frac{\pi}{2}-} \tan x = +\infty$ (the + and - in the limits indicate whether a point is approached from right or left side).
In different contexts infinity may or may not be considered undefined but in general a thing being undefined does not imply it is infinite. In this case, note that $f(x_0)$ is entirely unrelated to $\lim \limits_{x \rightarrow x_0} f(x)$, unless the function is continuous at $x_0$.
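As a quick numerical illustration of the two one-sided limits (a small Python sketch, not part of the original answer):

```python
import math

# Evaluate tan just to the left and just to the right of pi/2.
# The two values blow up with opposite signs, which is why no single
# value (finite or infinite) can consistently be assigned at pi/2.
eps = 1e-8
left = math.tan(math.pi / 2 - eps)    # large positive, tends to +infinity
right = math.tan(math.pi / 2 + eps)   # large negative, tends to -infinity
print(left, right)
```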
Yes, but isn't $\tan(\theta)$ the ratio of the length of the opposite side to the length of the adjacent side of a triangle? So even here some kind of limit is emerging! – PooyaM Sep 1 '12 at 12:24
A limit is indeed emerging - namely the two I wrote down. A triangle with two $90^{\circ}$ angles can hardly be called a triangle, but even if you choose to do so, that only gives you $1 \over 0$. – Karolis Juodelė Sep 1 '12 at 12:30
And if you do go for $\frac{1}{0}$ you may need to consider $\frac{1}{\pm 0}$, i.e. $\pm \frac{1}{0}$ – Henry Sep 1 '12 at 13:03
http://mathoverflow.net/revisions/52037/list
From my perspective as an analyst, non-metrizable spaces usually arise for one of the following reasons:
1. Separation axiom failure: the space is not, e.g., normal. This mostly happens when the space is not even Hausdorff (spaces that are Hausdorff but not normal are usually too exotic to arise much). Often this is for simple reasons; for example, the topology is defined by a pseudometric such as a seminorm. In this case we usually mod out by zero-distance pairs and try again.
2. The space is too big: examples like the uncountable ordinal and the long line fall under this heading. But they are still locally metrizable, so we usually allow them if we're trying to prove local theorems, but forbid them for global statements.
3. The topology is too weak, and usually not even first countable, so sequences are not enough to define the topology. Most of these examples are based on a product or pointwise-convergence topology: the weak-* topology on the dual $X^*$ of a Banach space $X$, the spectrum of a $C^*$-algebra, Stone-Čech compactifications, etc. (And actually, in the first case, it is often enough just to look at the unit ball of $X^*$, which is metrizable if $X$ is separable, which in applications it usually is.) As compensation, some corollary of Tychonoff's theorem gives us some compactness, which is probably the only reason we've agreed to put up with such an annoyingly weak topology in the first place.
4. Someone is abusing the language of topology, e.g. Fürstenberg's "topological" proof of the infinitude of the primes.
5. I've wandered into an algebraic geometry seminar by mistake. Grouchier analysts may consider this to fall under the previous heading. ;-)
http://math.stackexchange.com/questions/321105/finding-eigenvectors-and-eigenvalues-of-a-matrix-with-complex-numbers

# Finding eigenvectors and eigenvalues of a matrix with complex numbers
I need to find eigenvectors and eigenvalues of $\begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}$.
Attempt: When I find the equation which I have to solve for the eigenvalues I get $(\lambda -1)^2 +i=0$. Solving for $\lambda$ I get $\lambda =\pm \frac{1-i}{\sqrt{2}}+1$ using $\sqrt{-i}=\frac{1-i}{\sqrt{2}}$. However, my book lists the following answers: $\lambda =0;2$. Could you explain how to get to these answers. Thank you.
You have a mistake in your characteristic polynomial. It should be $(\lambda-1)^2-1$. – julien Mar 5 at 4:50
Thanks. I see it now. – Dostre Mar 5 at 4:54
## 2 Answers
The characteristic polynomial is $|A - \lambda I| = \begin{vmatrix} 1-\lambda & i \\ -i & 1-\lambda \end{vmatrix} = 0$.
This gives: $(1-\lambda)^2 + i^2 = (1-\lambda)^2 - 1 = \lambda (\lambda - 2) = 0$.
You should get an Eigensystem as follows:
$$\lambda_1 = 2, v_1 = (i, 1)$$
$$\lambda_2 = 0, v_2 = (-i, 1)$$
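A quick numerical check of this eigensystem (a NumPy sketch, not part of the original answer):

```python
import numpy as np

# The matrix from the question; note it is Hermitian (equal to its
# conjugate transpose), so its eigenvalues are guaranteed to be real.
A = np.array([[1, 1j],
              [-1j, 1]])

# eigh is the eigensolver for Hermitian matrices; it returns the
# eigenvalues in ascending order, here 0 and 2.
vals, vecs = np.linalg.eigh(A)
print(vals)
```

Each column of `vecs` is a normalized eigenvector, proportional (up to phase) to the vectors $(\mp i, 1)$ listed above.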
Yeah my book says the same thing. But am I on the right track? Why my lambdas contain complex numbers? – Dostre Mar 5 at 4:50
@Dostre: see your error from my solution? Regards – Amzoti Mar 5 at 4:53
Quick question. For the eigenvector corresponding to the second lambda can I have (i,-1). The equation will still be equal to zero. – Dostre Mar 5 at 5:02
@Dostre. yes, that is okay. Regards – Amzoti Mar 5 at 5:06
Well-done, as usual ;-) +1 – amWhy Apr 25 at 0:25
You've miscalculated. You should be taking the determinant of $$\left(\begin{array}{cc}1-\lambda & i\\-i & 1-\lambda\end{array}\right),$$ which is $$(1-\lambda)^2-(i)(-i)=(1-\lambda)^2-1=\lambda^2-2\lambda.$$ Setting that equal to $0$ will do the trick.
As an aside, you will occasionally encounter complex eigenvalues. It isn't (necessarily) anything to be concerned about, so long as they're correctly obtained. We'll deal with them in much the same way as with real eigenvalues.
http://gmc.yoyogames.com/index.php?showtopic=530915&page=1

# Rotation Matrices
Started Feb 05 2012 03:07 AM
hit172
Posted 05 February 2012 - 03:07 AM
In a rotation matrix are the rows of the matrix the 3 axes of the specified rotation such that row 1 is the right/left vector, row 2 is the up vector, and row 3 is the out/look vector?
Can you do it this way or is there a better way to construct a rotation matrix?
When I use my method it results in a matrix that causes the cube to become distorted and as you approach being on directly above the cube, it turns into a line.
cube at 0,0,0 from 2,0,5 looking at 0,0,0
cube at 0,0,0 from 1,0,5 looking at 0,0,0
cube at 0,0,0 from 1,0,5 looking at 1,0,0
*note* I am using the method described here http://www.fastgraph...mes/3drotation/
xshortguy
Posted 05 February 2012 - 03:15 AM
No, matrices do not work like that.
A matrix tells us how to transform an input vector into a corresponding output vector. The nth row of the matrix tells us how to build the nth entry of the output vector from the entries of the input vector.
A rotation matrix is a matrix that takes a vector and transforms it in such a way that the length of the output vector is unchanged after the transformation, without "flipping" the orientation (as a reflection would).
hit172
Posted 05 February 2012 - 03:36 AM
A matrix tells us how to transform an input vector into a corresponding output vector. The nth row of the matrix tells us how to build the nth entry of the output vector from the entries of the input vector.
Yes, so you then multiply the input vector by the matrix to get the corresponding output vector.
Can you then create a rotation matrix if you know the input and output vectors or more specifically from a look vector and the zero vector (origin).
xshortguy
Posted 05 February 2012 - 04:22 PM
Can you then create a rotation matrix if you know the input and output vectors or more specifically from a look vector and the zero vector (origin).
Let $\vec{a_1}, \vec{a_2}, \vec{a_3}$ be three linearly independent vectors, let C be a matrix so that $C\vec{a}_i = \vec{b}_i$. We wish to determine what C is based on the effects of C on those three vectors. The solution is simple:
• Form the augmented matrices $A = [ \vec{a}_1 \quad \vec{a}_2 \quad \vec{a}_3 ], B = [ \vec{b}_1 \quad \vec{b}_2 \quad \vec{b}_3 ]$
• The equation we need to solve for C is CA = B. Since the columns of A are linearly independent, A is invertible, so we can solve for C by computing the inverse of A and multiplying both sides by it.
• However that can be more computationally expensive, so instead one will form the augmented matrix [ B | A ] and row reduce this matrix until the right block is reduced to the identity matrix. We then have a matrix of the form [ B' | I ], of which C = B'.
• However this most likely will not be a rotation matrix, but one can take each of the columns of C, normalize them, and then reform the augmented matrix of these normalized vectors to get the rotation matrix.
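The recovery of C can be sketched in a few lines of NumPy (my illustration; the particular vectors and the matrix `C_true` are made-up test data, not from the thread):

```python
import numpy as np

# Three linearly independent input vectors, stacked as the columns of A.
a1, a2, a3 = [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 2.0]
A = np.column_stack([a1, a2, a3])

# Pretend some unknown transformation C acted on them; we pick one here
# so that the recovery can be verified at the end.
C_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
B = C_true @ A                    # observed images b_i = C a_i

# Solve C A = B by multiplying both sides by the inverse of A.
C = B @ np.linalg.inv(A)
```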
Gamer3D
Posted 11 February 2012 - 05:32 PM
Can you then create a rotation matrix if you know the input and output vectors or more specifically from a look vector and the zero vector (origin).
Let $\vec{a_1}, \vec{a_2}, \vec{a_3}$ be three linearly independent vectors, let C be a matrix so that $C\vec{a}_i = \vec{b}_i$. We wish to determine what C is based on the effects of C on those three vectors. The solution is simple:
• Form the augmented matrices $A = [ \vec{a}_1 \quad \vec{a}_2 \quad \vec{a}_3 ], B = [ \vec{b}_1 \quad \vec{b}_2 \quad \vec{b}_3 ]$
• The equation we need to solve for C is CA = B. Since the columns of A are linearly independent, A is invertible, so we can solve for C by computing the inverse of A and multiplying both sides by it.
• However that can be more computationally expensive, so instead one will form the augmented matrix [ B | A ] and row reduce this matrix until the right block is reduced to the identity matrix. We then have a matrix of the form [ B' | I ], of which C = B'.
• However this most likely will not be a rotation matrix, but one can take each of the columns of C, normalize them, and then reform the augmented matrix of these normalized vectors to get the rotation matrix.
The zero vector will not help you in any way, because every matrix maps the zero vector to itself.
What you can do is begin with a look vector and up vector. Take the cross product of the look and up vectors to get a third vector that is linearly independent of them as long as the first two are (we'll call it the left vector, whether or not it faces left). Normalize the left and look vectors, and set the up vector to their cross product (this cross product will be of unit length because left and look are unit vectors and perpendicular). These 3 vectors can be used as the columns of an orthonormal matrix.
Check the signs of the cross products (I didn't bother). If the determinant is 1, then it's a rotation matrix. Otherwise, multiply the left vector by -1.
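A NumPy sketch of that recipe (the function name `rotation_from_look_up` is my own; nothing here comes from GM8 itself):

```python
import numpy as np

def rotation_from_look_up(look, up):
    look = np.asarray(look, float)
    look /= np.linalg.norm(look)
    left = np.cross(up, look)          # third axis, independent of look/up
    left /= np.linalg.norm(left)
    up = np.cross(look, left)          # unit length: look and left are orthonormal
    R = np.column_stack([left, up, look])
    if np.linalg.det(R) < 0:           # fix a reflection, per the sign check above
        R[:, 0] *= -1
    return R

R = rotation_from_look_up([0.0, 0.0, 1.0], [0.0, 1.0, 0.0])
```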
http://math.stackexchange.com/questions/253958/how-to-show-that-the-complete-elliptic-integral-of-the-first-kind-increases-in-m/253976

# How to show that the Complete Elliptic Integral of the First Kind increases in m?
How can you show that the complete elliptic integral of the first kind $\displaystyle K(m)=\int_0^\frac{\pi}{2}\frac{\mathrm du}{\sqrt{1-m^2\sin^2 u}}$, which is the same as the series $$K(m)=\frac{\pi}{2} \left(1+\left(\frac{1}{2}\right)^{2}m^2 +\left(\frac{1\cdot 3}{2\cdot 4}\right)^{2}m^4 +\dots+ \left(\frac{(2n-1)!!}{(2n)!!} \right )^2m^{2n} + \dots \right),$$
increases in $m$?
Thanks
The expansion of the integral you wrote has terms for every odd power of $m$ as well. – Did Dec 8 '12 at 20:56
Im sorry, can you explain again? something wrong on the expansion? – JHughes Dec 8 '12 at 21:58
Yes, something was definitely wrong with the expansion... But it seems you saw the problem since you made the necessary correction. Note that this makes the accepted answer, which addresses (incorrectly) the original version of your question, a little odd. – Did Dec 8 '12 at 23:42
yeah yeah, thx, i already correct it. – JHughes Dec 11 '12 at 1:24
## 1 Answer
You can show that the derivative with respect to $m$ is always positive:
Note that $$K'(m)=\int_0^\frac{\pi}{2}\frac{m \sin^2 u\, du}{(1-m^2\sin^2 u)^{3/2}} \geq 0$$ as the integrand is positive for all $0\leq m \leq 1$.
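The monotonicity is also easy to confirm numerically; here is a crude midpoint-rule sketch in Python (my addition, using the question's convention with modulus $m$):

```python
import math

def K(m, n=20000):
    # Midpoint-rule approximation of the complete elliptic integral of
    # the first kind, with the m^2 sin^2 u convention of the question.
    h = (math.pi / 2) / n
    return sum(h / math.sqrt(1.0 - (m * math.sin((i + 0.5) * h)) ** 2)
               for i in range(n))

vals = [K(m) for m in (0.0, 0.3, 0.6, 0.9)]
print(vals)   # strictly increasing, starting from K(0) = pi/2
```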
The numerator $m\sin u$ should read $\frac12\sin^2u$. – Did Dec 8 '12 at 23:23
http://mathoverflow.net/questions/50712?sort=votes

## Derivative of Exponential Map
Given a Riemannian manifold $M$, let $\gamma: (a,b) \to M$ be a geodesic and $E$ a parallel vector field along $\gamma$. Define $\varphi: (a,b) \to M$ by $t \mapsto \exp_{\gamma(t)}(E(t))$. Is there a "nice" expression for $\varphi'(t)$?
This question originates in an attempt to understand the proof of corollary 1.36 in Cheeger and Ebin's "Comparison Theorems in Riemannian Geometry."
Yes, use the chain rule together with the expression of the differential of exp in terms of Jacobi fields – Sebastian Dec 30 2010 at 10:19
@Sebastian, I am confused how to compute the differential of exp. I know the result that $(exp_p)_*:TT_pM \to TM$ is $(exp_p)_*|_V(W) = J_W(1)$ where $J$ is the Jacobi field with $J(0) = 0$ and $D_tJ = W$. However, this is only half of the full differential of exp that I think one would need, i.e. $(exp)_*: TTM \to TM$. How do you compute the other half, coming from the fact that $p$ is now varying? – Otis Chodosh Dec 30 2010 at 15:11
If $p$ varies, then you just get a Jacobi field $J$ where $J(0) \ne 0$. The point is that the first variation of any family of geodesics is called a Jacobi field and satisfies the Jacobi equation along the original geodesic. Any Jacobi field can be decomposed into three different components, one tangent to the geodesic, two orthogonal. Of the two orthogonal, one vanishes at the origin but has nonzero derivative there and the second is nonzero at the origin but has zero derivative there. I am pretty sure this is all explained in Cheeger-Ebin (since I probably learned it all from there). – Deane Yang Dec 30 2010 at 15:16
Thanks. For some reason I was hoping for an answer that was more concrete than Jacobi fields, in hindsight that was a little silly... – Yakov Shlapentokh-Rothman Dec 30 2010 at 17:42
## 1 Answer
Let $x(u,t) = \exp_{\gamma(t)}(u E(t))$. For fixed $t$, as $u$ ranges from 0 to 1, the curve $x(\cdot, t)$ is a geodesic segment from $\gamma(t)$ to $\exp_{\gamma(t)}(E(t))$. Then $\phi'(t) = J(1)$ where $J$ is the Jacobi field along this geodesic segment, with the initial conditions $J(0) = \gamma'(t)$ and $(\nabla_{d x/d u} J)(0) = 0$.
Why? As $t$ varies, $x$ is a variation through geodesics, so for fixed $t$, $\frac{\partial x}{\partial t}(u,t)$ is a Jacobi field (call it $J$) along the geodesic $x(\cdot, t)$. Then $\phi'(t) = \frac{\partial x}{\partial t}(1,t) = J(1)$ while $\gamma'(t) = \frac{\partial x}{\partial t}(0,t) = J(0)$. And using the fact that $\nabla$ is torsion-free and $E$ is parallel we have $(\nabla_{\partial x/\partial u} \frac{\partial x}{\partial t})(0,t) = (\nabla_{\partial x/\partial t} \frac{\partial x}{\partial u})(0,t)= \nabla_{\partial x/\partial t}E(t) = 0.$
(I guess this does not need $\gamma$ to be a geodesic....?)
http://mathoverflow.net/questions/27929/examples-of-statements-that-provably-cant-be-proved-using-a-promising-looking-me/112549

## Examples of statements that provably can’t be proved using a promising looking method
Motivation: In Razborov and Rudich's article "Natural proofs" they define a class of proofs they call "natural proofs" and show that under certain assumptions you can't prove that $P\neq NP$ using a "natural proof". I know that this kind of result is common in complexity theory, but I don't know any good examples from other fields. This is why I ask:
Question: Can you give an example of a statement S that isn't known to be unprovable (it could be an unsolved problem, but it could also be a theorem), a promising-looking class of proofs, and a proof that a proof from this class can't prove S?
I'm interested in both famous unsolved problems and in elementary examples, that can be used to explain this kind of thinking to, say, freshmen.
## 13 Answers
The fact that Fermat's Last Theorem is false over the $p$-adics shows that it cannot be proved using arguments using congruences.
The fact that the Steiner-Lehmus Theorem is false over the complexes shows that it cannot be proved using what John Conway calls "equality-chasing" arguments.
This one is probably not explainable at the freshman level but the fact that the Paris-Harrington theorem and the Robertson-Seymour graph minor theorem are not provable in first-order Peano arithmetic shows that some kind of "infinitary" reasoning or sophisticated induction is needed to prove them.
Very interesting. Can you give a reference for the unprovability of the graph minor theorem in first-order Peano arithmetic? – Tony Huynh Jun 12 2010 at 23:26
See the article "The metamathematics of the graph minor theorem" by Friedman, Robertson, and Seymour" in Logic and Combinatorics (Contemporary Mathematics, Vol. 65, AMS 1987). – John Stillwell Jun 12 2010 at 23:58
There are examples (possibly due to Hecke?) of zeta-like functions which obey almost all of the known properties of the Riemann zeta function, such as the functional equation, Euler product, and asymptotics at infinity, but which have nontrivial zeroes off the critical line. This strongly indicates that one cannot hope to prove the Riemann hypothesis purely by complex analytic methods; at some point, one needs to use something more about the integers than just the fundamental theorem of arithmetic (encoded here as the Euler product), the Poisson summation formula (encoded here as the functional equation), and asymptotic distribution in the reals (encoded here as asymptotics of zeta).
In a related spirit, there are examples (due to Diamond, Montgomery, and Vorhauer) of Beurling integers (generated by a set of Beurling primes, which have asymptotic distribution similar to that of the rational integers) whose zeta function either has non-trivial zeroes or fails to be analytically continued beyond the classical zero-free region. Admittedly this example does not have the functional equation, but it does seem to indicate that multiplicative number theory methods alone are insufficient to resolve the Riemann hypothesis.
EDIT: It turns out that my first paragraph here is based on outdated information. It is currently possible that all functions in the Selberg class (whose members obey all the axioms above, in addition to the Ramanujan conjecture) could obey the RH. I don't know though if anyone is seriously proposing that this much more general conjecture is the right way to attack RH and its relatives, though.
Interesting - do you know any good expositions of these examples? – Thomas Bloom Jun 13 2010 at 16:33
The examples are Dirichlet series with an analytic continuation and functional equation resembling those of zeta-functions, but provably having zeros in the critical strip that are off the critical line. A key point here is that these series do not have any kind of expected Euler product decomposition; the Euler product should be considered an essential structure for those (interesting) cases where RH ought to hold. A concrete example is based on $Z(s) = \sum_{(x,y)} 1/(x^2 + 5y^2)^s$, the sum running over integer pairs other than (0,0). The series converges for Re($s$) > 1 and (contd.) – KConrad Nov 16 at 6:47
the function $V(s) = 20^{s/2}(2\pi)^{-s}\Gamma(s)Z(s)$ has an analytic continuation to ${\mathbf C} - \{0,1\}$ and we have the functional equation $V(1-s) = V(s)$. The function $V(s)$ has zeros in the critical strip that are off the critical line (proved in 1935 by H.S.A. Potter and Titchmarsh; the initial H is short for Harold, so you could consider him to be Harry Potter). – KConrad Nov 16 at 6:50
The inequality $x\left(x-y\right)\left(x-z\right)+y\left(y-z\right)\left(y-x\right)+z\left(z-x\right)\left(z-y\right)\geq 0$ for all nonnegative reals $x$, $y$, $z$ (a particular case of Schur's inequality) cannot be proven by substituting $x=A^2$, $y=B^2$, $z=C^2$ and writing the left hand side as sum of squares of polynomials. This is the reason why Hilbert's 17th problem needs squares of rational functions rather than squares of polynomials.
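This is of course not a proof, but the inequality itself is easy to spot-check numerically (a Python sketch of mine, not from the original answer):

```python
import random

def schur_lhs(x, y, z):
    # Left-hand side of the degree-3 Schur inequality above.
    return (x * (x - y) * (x - z)
            + y * (y - z) * (y - x)
            + z * (z - x) * (z - y))

random.seed(0)
min_value = min(schur_lhs(*(random.uniform(0, 10) for _ in range(3)))
                for _ in range(10000))
print(min_value)   # never drops below 0 (up to rounding)
```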
The statement, though unfortunately maybe not the proof, of Wilkie's solution to Tarski's high-school algebra problem, is certainly accessible to freshmen.
Very interesting! Makes you wonder about tropical examples along the same lines. – Victor Protsak Jun 13 2010 at 3:56
This is very cool. – Bart Snapp Jun 14 2010 at 0:56
One example is Roth's theorem: that any set of integers with positive density contains an arithmetic progression of length 3. This has been proven now in several different ways (usually using Fourier analysis) but one might hope for a more elementary proof.
That is, just by applying Cauchy-Schwarz and the pigeonhole principle in clever ways one can often prove quite a lot about the additive structure of sets. Behrend's example, however, gives a set in $[1,N]$ of size $$\geq Ne^{-c\sqrt{\log N}}$$
for some constant $c>0$ with no arithmetic progressions of length 3. In the words of Tao and Vu (Additive Combinatorics, Exercise 10.1.4),
This rules out a number of elementary approaches to proving Roth's theorem or Szemeredi's theorem (e.g. arguments based entirely on Cauchy-Schwarz and pigeonhole principle type arguments) as these tend to only give polynomial type bounds.
The fact that there exist oracles $A,B$ such that $P^A = NP^A$, $P^B \neq NP^B$ (a theorem of Baker, Gill, and Solovay) implies that the $P=NP$ question cannot be resolved using "standard" methods (because they would relativize to any oracle).
Edit: Most of the methods used to prove inequalities of complexity classes (e.g. diagonalization in the proof of the space and time hierarchy theorems) work even if the model of computation is changed to allow for one bit of computation to become "free" (i.e. if the Turing machine can access an oracle). The fact that such standard "relativizing" methods cannot suffice (by this theorem) to prove either $P=NP$ or its negation is taken as evidence that $P=NP$ is a nontrivial, serious question. The same is true for $NP$ versus $coNP$. Incidentally, it is true that $P^A \neq NP^A$ relative to a random oracle (where "random oracle" means that membership of any string in $A$ is decided randomly) with probability 1,* though this does not mean that $P \neq NP$.
An example of a nonrelativizing method is the arithmetization of boolean formula (which converts a boolean formula to a polynomial which is nonzero at an assignment of the variables to zeros and ones iff the formula is satisfiable with those as the corresponding inputs), used in the proof of Shamir's theorem that $IP=PSPACE$, a fact not true relative to arbitrary oracles.
*The probability has to be zero or one by Kolmogorov's zero-one law, incidentally.
There is a very interesting discussion of the Baker-Gill-Solovay theorem on Terence Tao's blog, here.
As someone who doesn't know much about this field, but finds your statement intriguing, is there any chance you could amplify on your reply for non-specialists. – Dan Piponi Jun 12 2010 at 22:07
I'm very much a nonspecialist myself but have expanded on the answer. – Akhil Mathew Jun 13 2010 at 0:08
An elementary example is the fact that unique factorization fails in $\mathbb{Z}[\sqrt{-5}]$ (e.g. $6 = 2\cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5})$), which shows that there is no obvious proof of the fundamental theorem of arithmetic.
But the very same failure does make the infinitude of primes obvious (granting basic commutative algebra, none of which depends on infinitude of primes), since a Dedekind domain with finitely many primes is a PID by Chinese Remainder Theorem! (This incredible argument is due to Larry Washington.) – Boyarsky Jun 20 2010 at 4:03
Another elementary example concerns the fact that the integers are unbounded in R. It surprises many people that the completeness axiom is needed to prove that, but one can show that that is the case by finding ordered fields that extend R (e.g. an appropriate ordering on rational functions does the job) where it fails.
One possible reason for the surprise is that if one begins with the integers and thinks about the construction of the reals from it then the unboundedness is crystal-clear. That is, the need to appeal to completeness comes about only if we treat the reals as a black box, and those who are surprised may not have been doing this. An analogue is the fact that there are no integers strictly between 0 and 1. By "classical" induction or construction it is crystal-clear, but if we allow the well-ordering principle and not "classical" induction then it becomes a bit more contorted to prove this. – Boyarsky Jun 20 2010 at 4:00
Like the graph minor theorem, which has already been mentioned, Kruskal's tree theorem cannot be proved by finitary methods. Also, it has a somewhat simpler statement (because it involves embedding rather than the graph minor relation):
Any infinite sequence of trees contains an infinite monotonic subsequence, under the embedding relation.
This statement can surely be explained to freshmen. The fact that it is unprovable by finitary methods is quite subtle however, because the fact that the statement involves an arbitrary infinite sequence means that the theorem cannot even be stated in Peano arithmetic. There are ways around this, such as "Friedman's finite form" of Kruskal's theorem.
When solving an initial value problem for a PDE, with initial data in some function space class, one popular way to proceed is to apply the contraction mapping (or Picard iteration) method in a suitable function space (usually one related to the function space that the data is in). When this works, this usually gives local or global existence of the solution, as well as uniqueness (if one restricts the class of solutions to an appropriate function space), and continuous (in fact, Lipschitz and analytic) dependence on the initial data (in certain metrics).
However, for some equations it is known that (Lipschitz or analytic) continuous dependence or uniqueness fails at certain regularities. As such, this precludes the possibility of using Picard iteration to build solutions at that regularity, even if existence of the solution can be obtained by other means.
This is particularly the case for "supercritical" equations in which the regularity provided by the various a priori controlled quantities is too weak. Unfortunately, this class of equations includes important examples such as three-dimensional Navier-Stokes, which is a large reason as to why the global regularity problem for this equation is considered hard (see my blog post http://terrytao.wordpress.com/2007/03/18/why-global-regularity-for-navier-stokes-is-hard/ for more discussion).
An example suitable for freshmen is the ODE $y'=y^{1/3}$ that has non-unique solutions. This motivates Peano's existence theorem, whose proof is non-constructive in that it uses Arzela-Ascoli theorem (in view of non-uniqueness, we don't expect to obtain a "canonical" solution in the course of the proof). – Victor Protsak Jun 13 2010 at 12:34
This is an old thread but since it got revived, I like the following completely elementary example:
Statement A: A disk of diameter one cannot be covered by strips of total width less than 1.
Proof: Look at the hemisphere over this disk. The area of the spherical strip is proportional to the width regardless of the position and the trivial area comparison finishes the problem off.
Statement B: An equilateral triangle with altitude of length one cannot be covered by strips of total width less than 1.
One would certainly be tempted to try something similar to the previous approach, until realizing that the proof above also shows that you cannot cover the disk twice by strips of total width less than 2. However, it is easy to cover the triangle twice by 5 strips of width 1/3 each.
Is statement B true? – Sune Jakobsen Nov 16 at 20:50
Yes. (Bang's theorem: No convex body of width $1$ (in any dimension) can be covered by strips of total width less than $1$). There are many "generalizations" and "applications" but I wouldn't say we know everything there, far from it. It is not even known if making a tiny hole near the center of the disk makes any difference in this covering problem. :-) – fedja Nov 16 at 21:46
Gödel had conjectured that large cardinals might settle the continuum problem. He thought, for example, that perhaps the existence of a measurable cardinal would imply $\neg\text{CH}$. Such a perspective surely gained support with Dana Scott's theorem (1961) that measurable cardinals imply $V\neq L$.
Nevertheless, the Levy-Solovay theorem (1967) shows that measurable cardinals, and all the other large cardinals commonly considered, are preserved by small forcing. Since both $\text{CH}$ and its negation $\neg\text{CH}$ are forceable by small forcing, it follows that measurable cardinals cannot settle the continuum hypothesis.
The general conclusion is that the existence of large cardinals, in principle, cannot settle any set-theoretic question, like CH, whose truth value can be changed by small forcing.
Terry Tao has a great post on his blog explaining the limitations of the circle method with respect to the binary Goldbach conjecture.
http://physics.stackexchange.com/questions/tagged/education?page=2&sort=votes&pagesize=15 | # Tagged Questions
How physics is taught and learned: teaching strategies, class examples and demonstrations; learning resources, career advice, etc. For explicit problems, use the 'homework' tag instead.
### How can some-one independently do research in particle physics?
I'm not affiliated with a physics department and I want to do independent research. I'm working my way through Peskin et. al. QFT now. Let's say that I've finished Peskin et. al. and Weinberg QFT ...
### Is the proper interpretation of temperature missing in this book?
In Randall T. Knight’s textbook “Physics for Scientists and Engineers” in the first chapter on thermodynamics (Ch. 16: A Macroscopic Description of Matter) one of the first conceptual questions is ...
### Applications of recoil principle in classical physics
Are there any interesting, important or (for the non physicist) astonishing examples where the recoil principle (as special case of conservation of linear momentum) is applied beside rockets and guns? ...
### Is the historical method of teaching physics a “legitimate, sure and fruitful method of preparing a student to receive a physical hypothesis”? [closed]
The French physicist, historian, and philosopher of physics, Pierre Duhem, wrote: The legitimate, sure and fruitful method of preparing a student to receive a physical hypothesis is the historical ...
### 'Getting in' to research physics?
I'm going to be choosing a university course soon, and I want to go into a branch of physics. A dream job for me would be to work in research, however, I do realise that this isn't for everyone and is ...
### Virtual images in (plane) mirrors?
The following image is taken from teaching physics lecture Was man aus virtuellen Bildern lernen kann (in German): Now the cited paper claims that the left hand side is the correct picture to ...
### Undergraduate-friendly reading material on the multiverse?
I'll be teaching a seminar for first-year undergraduates next year. The idea of my university's first-year seminar program is to expose students to exciting ideas and important texts in a somewhat ...
### How deep can my knowledge of particle physics go without the maths?
Successfully just got my first question answered on here, and now time for the second. So I recently gained interest in particle physics and was wondering. By no means do I have the mathematical ...
### How to choose a suitable topic for PhD in Physics? [closed]
After completion of graduate courses when a student is supposed to start real research in Physics, (to be more specific, suppose in high energy physics), how does one select the problem to work on? ...
### Prerequisites to start the study of noncommutative geometry in physics
What are prerequisites (in mathematics and physics), that one should know about for getting into use of ideas from noncommutative geometry in physics?
### Why distinguish between row and column vectors?
Mathematically, a vector is an element of a vector space. Sometimes, it's just an n-tuple $(a,b,c)$. In physics, one often demands that the tuple has certain transformation properties to be called a ...
### How to learn celestial mechanics?
I'm a PhD student in math and am really excited about celestial mechanics. I was wondering if anyone could give me a roadmap for learning this subject. The amount of information about it on the ...
### Physics talk with an emphasis on Mathematics [closed]
I have to give a 10 minute physics talk that have to involve a fair bit of mathematics -- i.e. not just qualitative/handwaving material to some undergrads. I have wasted the last 3 hours looking for ...
### For a theoretical (not mathematical) physicist, is there a need to learn pure mathematics?
For a theoretical physicist (not a mathematical physicist), is there a need to learn pure mathematics ?
### Beginner Physics Resources?
I'm interested in learning physics. I do realize that the subject is large and that it would be easier if I had a specific area of interest. However, I do not. I suppose I want to learn about the ...
### Quantum Fluctuations as a model for the Big Bang?
I have quite often heard (and even used) the idea that quantum fluctuations are a way to explain the whole "something from nothing" intuitive leap. I am about to give a talk at a local school on ...
### List of Physical Toys [closed]
There should be a list of toys considered "physical", which demonstrate or make you think over certain physical principles/phenomena. And of course which could just amaze. Related question at MSE is ...
### Advantage of doing research in theoretical high energy over other fields?
I am undecided about the field I want to do my PhD in, in graduate school. I am asking because the applications that I am filling ask me to write the intended field of study. I found the people who ...
### What are some creative illustrations of the nature of dissipative forces?
I'm teaching a conceptual introduction to physics for American 13-15 year old students this summer. One of the main ideas I want to hit on is the relationship between energy conservation, ...
### Phenomena which are incorrectly declared as resonance phenomena?
In standard college physics text books, high-school books and popular level physics books, the collapse of the Tacoma Narrows Bridge is often taken as an example of resonance. However a more detailed ...
### Usefulness of an only qualitative understanding of momentum?
A few days ago I had a discussion with a friend who wants to become a physics teacher (in Germany). He told me that from a pedagogical/didactial point of view it seems to be a good idea to introduce ...
### Does conservation of momentum really imply Newton's third law?
I often heard that conservation of momentum is nothing else than Newton's third law. Ok, If you have only two interacting particles in the universe, this seems to be quite obvious. However if you ...
### What is $2\theta$ in X-ray powder diffraction (XRD)?
Why are we taking $2\theta$ instead of $\theta$ in X-ray powder diffraction (XRD)? I have found the forum post 2 theta in X-ray Powder Diffraction (XRD), but there is no answer. What is the ...
### Doppler effect of sound waves
I am looking for interesting ways to introduce the Doppler effect to students. I want some situations in nature or every day life, where a student is possibly surprised and may ask "how could it be"? ...
### What should a physics undergrad aspiring to be a string theorist learn before grad school?
The question I guess is pretty clear. I am a physics undergrad wishing to pursue research in quantum gravity(string theory?). What are the subjects I should learn other than the usual compulsory ...
### Phases of the moon video
I am an educator, and I am looking for a specific video. In the video, they ask some middle school students and some college graduates about why the moon has phases. Most of the students in both the ...
### A loop quantum gravity toy inspired by an Aharonov-Bohm ring
Comparing my question to Give a description of Loop Quantum Gravity your grandmother could understand what I'm looking for here is a toy for a toddler ($\approx$ a pre-QFT graduate student). I seek ...
### Some questions about the logics of the principles of independence of motion and composition of motion
In high-school level textbooks* one encounters often the principles of independence of motion and that of composition (or superpositions) of motions. In this context this is used as "independence of ...
### References for real life applications on advanced EM
For EM (freshman level physics) and advanced EM (Junior/Senior level) to help students appreciate the material, I am looking for books/websites that contain: 1-applications of electricity and ...
### Substance like quanties and conserved quantities, Karlsruhe physics course
In the Karlsruhe physics course one defines the term "substance-like" quantity: Let my cite the definition from a paper by Falk, Herrmann and Schmid: "There is a class of physical quantities whose ...
### What are the mathematical prerequisites to understand this paper? [closed]
What are the mathematical prerequisites to understand this paper? Blumenhagen et al. Four-dimensional String Compactifications with D-Branes, Orientifolds and Fluxes. Phys. Rept. 445 no. 1-6, pp. ...
### Course advice for someone interested in strings and mathematical physics [closed]
I'll be doing Introductory General Relativity and Graduate Quantum Mechanics II next semester. I still need to choose 2 (or maybe 3, but I don't want to overload too much) from the following: ...
### Dirac action and conventions
I have a (possibly) fundamental question, which is driving me crazy. Notation When considering the Dirac action (say reading Peskin's book), one have \$\int ...
### How do I learn higher level physics? [closed]
I'm a chemical engineering student (just completed BS and am started the PhD program), but I'm very interested in particle physics as a hobby. I'm dismayed though with the sheer amount of information ...
### How does one build up intuition in physics? [closed]
How does one build up an intuitive gut feeling for physics that some people naturally have? Physics seems to be a hodgepodge of random facts. Is that a sign to quit physics and take up something ...
### What is the math knowledge necessary for starting Quantum Mechanics?
Could someone experienced in the field tell me what the minimal math knowledge one must obtain in order to grasp the introductory Quantum Mechanics book/course? I do have math knowledge but I must ...
### How are we able to view an object in a room with a bulb?
This is a very basic question on optics. How are we able to view an object kept in a room with a bulb? From what I understand, light rays from bulb will hit the object and some colour will be ...
### Real world examples for projectile thrown upwards or downwards
I am preparing a physics course for high school about projectile motions. If a projectile moves with initial velocity $v_0$ in the gravitational field of the earth, the equation $s(t) = \frac{1}{2} g t^2$ ...
### Is it safe to study from MIT and Berkeley course series, or they contain wrong information?
After surveying most of the universities introductory physics courses, I found none is using Berkeley physics books or MIT physics books as textbooks. All are using Halliday, or Serway and the like. ...
### Mathematical background for Quantum Mechanics
What are some good sources to learn the mathematical background of Quantum Mechanics? I am talking functional analysis, operator theory etc etc...
### Is it possible to take a QFT class knowing only basic quantum mechanics?
I'm in grad school and notice there are no prerequisites required for QFT in the physics department. In fact, the system allows me to sign up for the course just fine as a technical elective. But... ...
### Difference between momentum and kinetic energy
From a mathematical point of view it seems to be clear what's the difference between momentum and $mv$ and kinetic energy $\frac{1}{2} m v^2$. Now my problem is the following: Suppose you want to ...
### German book on introductory quantum mechanics
I'm looking for an originally German introduction to quantum mechanics. Is there such a canonical book used in German QM undergraduate courses?
### Extra help in Optics
I am in an optics class, and we are using the text "Introduction to Optics" third edition by Pedrotti. The book is completely useless in the course. The questions in the review section of the chapter ...
### Please recommend a good book about physics for young child (elementary school aged)
I'm looking for a book that would be appropriate for an advanced elementary school aged kids (say, 6-11 YO) describing the basics of physics (or sciences in general) in entertaining way. The ...
### Studying electrodynamics problems
Suppose an advanced undergraduate student has reached a moderate level of understanding on electrodynamics. Where should he focus on, to sharpen his problem-solving skills? Practicing integrals ...
### Applying $\nabla\times\mathbf{B} = \mu_0\mathbf{J}$ in the presence of magnetic shielding
2012-06-13 - Revised question in experimental format (This is a thought experiment for which RF experts may have an immediate answer.) I'll assume (I could be wrong) the possibility of creating a ...
### How to set up a very simple experiment in optics?
This might come across as a very rudimentary question. My fundamentals of Optics are weak. In the optics chapter of my physics text book I saw diagrams each depicting an object on the left and a lens ...
### Introducing emf of a chemical cell as a hint towards quantum mechanics
Today I had a discussion with a colleague who teaches electricity and magnetism to 2nd year undergraduate physics students. He is seeking the best way to explain how is the emf generated inside a ...
### What math is needed to understand the Schrödinger equation?
If I now see the Schrödinger equation, I just see a bunch of weird symbols, but I want to know what it actually means. So I'm taking a course of Linear Algebra and I'm planning on starting with PDE's ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9392954111099243, "perplexity_flag": "middle"} |
http://www.physicsforums.com/showthread.php?t=115499&page=3 | Physics Forums
## Basic problems
Quote by EL
Thank you El, certainly there is much ongoing research and discussion needed. My main point was to counter the accusation that the PA was caused by some kind of incompetence or "questionable methods".
It is interesting from your last link paper, (which also contains my quote from the earlier Turyshev et al. paper confirming an unexplained PA) that more data is available and only now being analysed, some narrowly missed being destroyed!
All transmissions of the Pioneer 10/11 spacecraft, including all engineering telemetry, were archived [8] in the form of files containing Master Data Records (MDRs.) Originally, MDRs were scheduled for limited retention. Fortunately, the Pioneers’ mission records avoided this fate: with the exception of a few gaps in the data (mainly due to deteriorating media) the entire mission record has been saved and is available for study.
(Emphasis mine)
What do they hope to achieve from this rescued data?
Therefore, we expect that our analysis of the early data, from a period of time when the spacecraft were much closer to the Earth and the Sun, may help us to unambiguously determine whether the direction of the acceleration is
• sunward-pointing, indicating a force originating from the Sun;
• Earth-pointing that would be due to an anomaly in the frequency standards;
• along the direction of the velocity vector, indicating an inertial or drag force; or
• along the spin axis direction that would indicate an on-board systematic force.
It should be pointed out that there are some obstacles along the way towards this goal. A difficult problem [3, 14, 15] in deep space navigation is precise 3-dimensional orbit determination. The line-of-sight component of a velocity is much more easily determined than motion in the orthogonal directions. Unfortunately there is no range observable for the Pioneer spacecraft, which complicates the analysis. Furthermore, earlier parts of the trajectory were dominated by solar radiation pressure and frequent attitude control maneuvers, which significantly affect the accuracy of orbit determination. Nevertheless, there is hope that these difficulties can be overcome and the analysis will yield the true direction of the anomaly.
Note that in the papers you linked earlier the problem is reconciling any gravitational effect with the orbits of the other planets; they therefore tend to deny the PA, assuming we know all about solar system orbits. However all is not so clear, even in the near solar system! Secular increase of astronomical unit from analysis of the major planet motions, and its interpretation
$\frac{d\,\mathrm{AU}}{dt} = 15 \pm 4$ m/cy. Excluding other explanations that seem exotic (such as secular decrease of the gravitational constant) at present there is no satisfactory explanation of the detected secular increase of AU, at least in the frame of the considered uniform models of the Universe.
Also G may be varying: Relativistic effects from planetary & lunar observations of the XVIII-XX Centuries
In any case the observation should drive the theory, not the other way round!
Why should there be this discrepancy between the PA & orbital theory? Perhaps, as the PA is of the order of $cH$, it is cosmological in origin and there is a discrepancy between the applicability of the Schwarzschild and cosmological solutions of GR?
BTW wolram, marcus was complimenting you.
Garth
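The numerical coincidence behind Garth's remark that the PA is of the order of $cH$ is easy to check. The figures below are standard values taken from outside this thread: $H_0 \approx 70$ km/s/Mpc, and the reported anomalous acceleration $a_P \approx 8.74 \times 10^{-10}$ m/s$^2$ (Anderson et al.).

```python
# Back-of-envelope check: is the Pioneer anomaly of the order of c*H?
# H0 and a_P are assumed standard values, not quoted from the thread.
c = 2.998e8            # speed of light, m/s
Mpc = 3.086e22         # metres per megaparsec
H0 = 70e3 / Mpc        # Hubble constant ~70 km/s/Mpc, in 1/s

a_cH = c * H0          # characteristic cosmological acceleration
a_P = 8.74e-10         # reported Pioneer anomaly, m/s^2

print(f"cH  = {a_cH:.2e} m/s^2")
print(f"a_P = {a_P:.2e} m/s^2, ratio a_P/cH = {a_P / a_cH:.2f}")
```

The two accelerations agree to within a factor of order unity, which is the whole of the "coincidence": suggestive, but not by itself evidence of a cosmological origin.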
Quote by Garth The Pioneer Anomaly is certainly not built on questionable methods. Turyshev et al. The Study of the Pioneer Anomaly:New Data and Objectives for New Investigation
I beg to differ. "Measuring" something from an instrument not designed to make that measurement is always questionable...and I think EL's papers covered the theoretical concerns pretty well.
and whether the reasoning behind the Axis of Evil is questionable or not is a matter of current debate. Land and Magueijo The axis of evil
Did you even read the WMAP paper or the argument I had with you in the associated thread?
1. The model continues to depend wholly on two pieces of undiscovered physics, namely dark energy and cold dark matter.
Agreed, but it's not a problem with matching the data, it's a problem of not having enough of it.
2. The implied dark energy density is so small that it is unstable to quantum correction and its size is fine-tuned to the almost impossible level of one part in $\sim 10^{102}$.
3. It is difficult to explain the coincidence between the dark energy, dark matter and baryon densities at the present day.
I referred to these, the "fine-tuning" and "cosmic coincidence" problems already.
The rest of your references refer to the "cuspy cores" problem, "small-scale structure" problem, and something that we understand too poorly to really be called a problem. The first two are genuine concerns with $\Lambda CDM$ and have been discussed at length in other threads.
If the first isn't just an issue of numerical resolution (which is looking increasingly likely), then it's an issue of our lack of knowledge about the dark matter, which falls back into what I was saying about needing more measurements of dark matter properties. The second is, again, a possible issue with the simulations and may have nothing to do with fundamental theory. The third is another example of how we're data-starved. We don't have nearly enough high-z data to understand how the feedback process works.
They are fair concerns, however. I wouldn't yet call them "weak points" with the model because it's still not clear to me that they're not problems with the simulations, but it's still worth keeping in mind.
As for MOND, it comes as absolutely no surprise to me that a theory designed to fit rotation curves can do so with fewer parameters than the standard model. It also came as no surprise to me that relativistic MOND was completely inconsistent with the CMB.
Quote by SpaceTiger
Quote by Garth and whether the reasoning behind the Axis of Evil is questionable or not is a matter of current debate. Land and Magueijo The axis of evil
Did you even read the WMAP paper or the argument I had with you in the associated thread?
Yes - that was the current debate! And I am not the only one who is willing to go with increasing the chance of a false negative. Not everybody agrees with you ST - that is what makes it interesting.
As for MOND, it comes as absolutely no surprise to me that a theory designed to fit rotation curves can do so with fewer parameters than the standard model. It also came as no surprise to me that relativistic MOND was completely inconsistent with the CMB.
Agreed, but it will be interesting to see how Bekenstein responds.
Garth
Quote by Garth BTW wolram, marcus was complimenting you.
Marcus, sorry if my post seemed gruff, it was not meant to be.
Quote by Garth Yes - that was the current debate! And I am not the only one who is willing to go with increasing the chance of a false negative. Not everybody agrees with you ST - that is what makes it interesting.
What you said:
and whether the reasoning behind the Axis of Evil is questionable or not is a matter of current debate.
You said there was no debate. I'm saying there most certainly is!
Quote by SpaceTiger
What you said:
Quote by Garth and whether the reasoning behind the Axis of Evil is questionable or not is a matter of current debate.
You said there was no debate. I'm saying there most certainly is!
ST I think you are reading me wrong, on this we most certainly do agree!
Perhaps if I had emphasised?:
whether the reasoning behind the Axis of Evil is questionable or not is a matter of current debate
BTW in today's Times newspaper: "Hidden CJD is new threat to thousands". Having eaten British beef during the 1980s I live in dread of suddenly finding my mind going...
The curse of the fear of the false positive ("Our meat is unsafe", when it was safe) is that you are left with the consequences of the false negative ("Our meat is safe", when it was not).
Garth
Quote by Garth Perhaps if I had emphasised?:
Oy. I need to renew my eyeglass prescription.
Quote by marcus So, for instance, he is critical of his fellow cosmologists, at least of the run-of-the-mill university cosmologist, because they often simply ASSUME that k = 0 exactly. That is, they favor the EXACTLY FLAT case so much that they oftentimes just take it for granted, according to Ellis.
Thanks for your help so far Marcus. At the moment I cannot see an alternative to making some assumptions; if cosmologists have no data to work with, what else can they do? I would like a layman's review of the search for matter in QG theories. From what I have read, string theories are almost out of the picture for now; I may have the wrong take on that. Is there anything new?
Thank you. Garth, Space Tiger, can you shed any light on the (particle) supersymmetry, Higgs side of cosmology?
Quote by marcus • Determining the sign of the curvature k, showing whether the universe has closed spatial sections and also whether it is possible for it to recollapse in the future or not. Analyses of the observations should always attempt to determine this sign, and not assume that k = 0 (as is often done).
Just to clarify this point a bit, the assumption of a flat universe (in CMB fits) is made to simplify the analysis. The data are fit with Markov Chain Monte Carlo simulations and the more free parameters there are, the longer it takes to cover the space. You'll see that in the third release, the WMAP team fixes the curvature to flatness for many of their simulations for exactly this reason. They do, however, run a few separate fits that allow the curvature to vary and this is the section you've been referring to in other posts. In other analyses, you'll see the assumption made simply because the flatness has already been measured (in the CMB) to more precision than they could reach.
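SpaceTiger's point that each extra free parameter lengthens the fit can be made concrete with a crude counting argument. A real MCMC scales far better than the brute-force grid sketched below, but the qualitative lesson is the same: the volume of parameter space to be explored grows geometrically with the number of free parameters, which is why fixing the curvature to $k = 0$ saves so much computation. The numbers here are illustrative assumptions only.

```python
# Illustrative cost of exploring parameter space at fixed resolution.
# A grid, not an MCMC; used only to show geometric growth with dimension.
points_per_axis = 20   # resolution along each parameter (assumed value)

for n_params in (5, 6, 7):   # e.g. flat LCDM vs adding curvature, etc.
    cost = points_per_axis ** n_params
    print(f"{n_params} free parameters -> {cost:.1e} grid evaluations")
```

Going from 6 to 7 free parameters multiplies the grid cost twentyfold here; MCMC softens but does not eliminate this dimensional penalty.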
Quote by wolram Thank you, Garth, Space Tiger. Can you shed any light on the (particle) supersymmetry, Higgs side of cosmology?
The LHC (Large Hadron Collider) is expected to find the Higgs. It also has some chance of detecting a signature (energy deficit) indicative of the dark matter particle. According to Michael Peskin, this won't be enough, and we'll need the ILC (International Linear Collider) to measure the dark matter particle properties. These are all maybes and will depend on which (if any) of the theories of supersymmetry is correct.
However, I don't think a detection or non-detection of the Higgs boson will have much of an impact on cosmology. Inflation is more of a phenomenological model at this point and doesn't rely on the existence of a particular particle species.
(Edit crossed with ST's post)
Big Bang studies have been a forced marriage between two incompatible partners, GR and QM; nevertheless the marriage has been, or has the potential of being, very fruitful.
On the one hand GR cosmology has required inflation, DM and DE; on the other, fundamental particle physics has required higher and higher energy accelerators to test the Standard Theory, and you cannot get any higher energy than the BB itself! The Standard Theory has thrown up countless hypothetical particles that need experimental verification; if they cannot be found in a large accelerator, such as the LHC being built at present, then an alternative is to try to find them in the BB. Cosmological constraints have filtered out possible candidates, although there are many still to go.
QM requires the Higgs boson to impart inertial mass to particles, and GR requires the energy of its scalar field, or of another hypothetical inflaton particle, to impart a massive exponential expansion in the first $10^{-35}$ s or so of the universe's history. It was predicted to be detectable in present particle accelerators but so far without success; perhaps the LHC will deliver.
DM requires a particle with all the right properties to explain the large scale features of the universe, including the rotation rates of spiral galaxies. One likely candidate at the moment is the LSP, or lightest supersymmetric particle.
We will only know what we are talking about when these particles have actually been discovered, their properties measured and found concordant with the cosmological constraints. Until then GR and the Standard Model must remain, well, provisional.
Garth
ST, this may be a stupid question, but what evidence do we have that the CMB is only observed as it is from our view? Could observation from another galaxy give different results?
Quote by wolram ST, this may be a stupid question, but what evidence do we have that the CMB is only observed as it is from our view, could observation from another galaxy give different results ?
ST will certainly answer this, but if I may butt in, the answer to your question depends on what the CMB actually is. It is most certainly (IMHO) the radiation emitted by the surface of last scattering (SLS) when the universe emerging from the Big Bang cooled enough for atomic hydrogen to form and the universe became transparent.
As such it would look more or less the same from any galaxy at this present epoch. Its temperature will depend on the epoch of observation.
However the largest anisotropy of the CMB, 100X larger than the rest, is the dipole caused by our peculiar motion relative to that SLS. As each galaxy has its own peculiar motion, and planets their own trajectories within those galaxies it will be this dipole that will differ from planet to planet, star to star and galaxy to galaxy.
Garth
Quote by wolram ST, this may be a stupid question, but what evidence do we have that the CMB is only observed as it is from our view, could observation from another galaxy give different results ?
If inflation is correct, then the CMB should look different from other locations, but have the same statistical properties. Sometimes, we can remotely infer the properties of the CMB by looking at gas that is coupled to it. Here's one such example:
In situ measure of the cosmic microwave background temperature at a distance of 7.5 kiloparsecs
It's not a very powerful technique, however, and can only give us very crude measurements of the temperature. Measuring the anisotropies remotely is probably a long way off, if possible at all.
A question I have withheld for lack of research: how can it be shown that the CMB is a relic of the BB? Everyone seems to assume it is, but I guess there is not a unique signature for this radiation.
There have been suggestions from the Steady State 'school' in the 1970s that the CMB might be the sum total of background galaxies - Olbers' paradox red-shifted into the microwave region. Or, in Fred Hoyle's mass field theory, fundamental particle masses decrease as you go out in space until you reach the membrane where m = 0, which we interpret as the BB. The CMB was then claimed to be the smoothed-out radiation from galaxies beyond that membrane (presumably with negative masses). These suggestions were long shots, which did not explain the CMB isotropy and would not have survived the discovery of the anisotropies at the 10^-5 level.

Garth
Thank you, Garth. I admit I am learning a lot about the fundamental reasons for the SM, but I am cursed with the ability to see other views. I think it is like a retail outlook: one sees an opening and tries to fill it. But this is wrong for cosmology; one should not try to fill in the gaps until the market research has been done.
http://mathhelpforum.com/algebra/209566-polynomial-reduction-help.html

# Thread:
1. ## Polynomial Reduction Help
Reduce to lowest terms:
$\frac {x^3+x^2+x+1}{x^3+3x^3+3x+1}$
I don't see how this can be factored to be reduced. Thanks.
The answer is $\frac{x^2+1}{x^2+2x+1}$
2. ## Re: Polynomial Reduction Help
Just to be sure, is that really $3x^3$ for the second term in the denominator?
3. ## Re: Polynomial Reduction Help
For the numerator:
$f(x)=x^3+x^2+x+1$
You could use the rational roots theorem, which states that if the given polynomial has a rational root, it will come from the list $x=\pm1$. We can see that $x=+1$ cannot work, hence we find:
$f(-1)=-1+1-1+1=0$
So, we know $x+1$ is a factor of $f(x)$. Use of division finds:
$f(x)=(x+1)(x^2+1)$
Now, for the denominator, we should recognize the binomial coefficients arising from the cube of a binomial, i.e.:
$x^3+3x^2+3x+1=(x+1)^3$ and so we may state:
$\frac{x^3+x^2+x+1}{x^3+3x^2+3x+1}=\frac{(x+1)(x^2+1)}{(x+1)^3}=\frac{x^2+1}{(x+1)^2}=\frac{x^2+1}{x^2+2x+1}$
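As a quick sanity check of the root test and the reduction above (an addition of mine, not part of the original thread; the helper names are invented), one can evaluate both forms numerically:

```python
# The rational roots theorem narrowed the candidates to x = +/-1;
# f(-1) = 0 confirms that (x + 1) divides the numerator.
def f(x):
    return x**3 + x**2 + x + 1

assert f(-1) == 0

# With the corrected 3x^2 term, the original quotient and the reduced
# form agree at any x away from the pole at x = -1.
def original(x):
    return (x**3 + x**2 + x + 1) / (x**3 + 3*x**2 + 3*x + 1)

def reduced(x):
    return (x**2 + 1) / (x**2 + 2*x + 1)

for x in [-0.5, 0.0, 1.0, 2.5, 10.0]:
    assert abs(original(x) - reduced(x)) < 1e-12
```

Such a spot check cannot prove the identity, but a mismatch at any sample point would immediately expose an algebra slip.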
4. ## Re: Polynomial Reduction Help
grillage - you are correct it is $3x^2$. Thanks for the help markFL2, I didn't know about the rational roots theorem, that will make working with polynomials a whole lot easier.
http://math.stackexchange.com/questions/333918/discrete-mathematics

# Discrete Mathematics
1.) Determine whether each of the following relations is a function with domain {1,2,3,4}. For any relation that is not a function, explain why it isn't.
a.) f = {(1,1), (2,1), (3,1), (4,1), (3,3)} - The answer in the back of the book states the following: "Not a function; f contains two different pairs of the form (3,-)." What does the dash mean?

b.) f = {(1,2), (2,3), (4,2)} - ?

d.) f = {(1,1), (1,2), (1,3), (1,4)} - ?

e.) f = {(1,4), (2,3), (3,2), (4,1)} - ?
-
## 4 Answers
A function is well defined; that means $x=y\implies f(x)=f(y)$. But here you have both $f(3)=1$ and $f(3)=3$. Try the same test for the rest. For your control: b is not a function, d is not a function, but e is a function.

In b) you have no function because $f(3)$ does not exist, but a function assigns to every object of your domain an object of your codomain.
-
The dash means "unbound" or "not fixed," so $(3,-)$ matches both $(3,1)$ and $(3,3)$.
-
A function is defined in this context to be a relation (= a set of pairs) such that "each source has only one image", that is, since $(3,1)$ is interpreted as $f(3)=1$, it's impossible for $(3,3)$ to be in the same relation, as that would imply $f(3)=3$.
The dash in the book's answer attempts to represent a general pair: $(3,-)$ should be read as "3, something", meaning an ordered pair whose first entry is $3$. This is not standard notation, but it is clear from the context.
As for the other relations, check them against the same criterion: $f$ is a function if there are no contradicting $(a,b), (a,c)$, because that would imply $f(a)=b$ and $f(a)=c$ at the same time.
-
A function is a rule that assigns to every $x$ value in its domain exactly one $y$ value in its range. We write this as $f(x)=y$. By definition of a function, it is not possible to have $f(3)=1$ and $f(3)=3$, because then the value $3$ in the domain is assigned to two values $y$ in the range.
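The criterion used in these answers, that every element of the domain must appear exactly once as a first coordinate, is mechanical enough to check with a short script. A sketch (my addition, not from the textbook; the helper name is invented):

```python
def is_function(pairs, domain):
    """A relation (set of ordered pairs) is a function on `domain` iff every
    element of the domain appears exactly once as a first coordinate."""
    firsts = [a for a, _ in pairs]
    return set(firsts) == set(domain) and len(firsts) == len(set(firsts))

domain = {1, 2, 3, 4}
assert not is_function({(1,1), (2,1), (3,1), (4,1), (3,3)}, domain)  # two pairs (3, -)
assert not is_function({(1,2), (2,3), (4,2)}, domain)                # f(3) is undefined
assert not is_function({(1,1), (1,2), (1,3), (1,4)}, domain)         # four pairs (1, -)
assert is_function({(1,4), (2,3), (3,2), (4,1)}, domain)             # a genuine function
```

Note the second condition catches relations like b), where a domain element has no image at all, which some texts still call a (partial) function but this book does not.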
-
http://physics.stackexchange.com/questions/28673/coulombs-law-why-is-k-dfrac14-pi-epsilon-0/28682

# Coulomb's Law: why is $k = \dfrac{1}{4\pi\epsilon_0}$
This was supposed to be a long question but something went wrong and everything I typed was lost. Here goes.
1. Why is $k = \dfrac{1}{4\pi\epsilon_0}$ in Coulomb's law?
2. Is this an experimental fact?
3. If not, what is the significance of this definition?
-
It's a feature of the choice of units (i.e. in other systems of units the constant can be 1 or $1/4\pi$). There are a number of existing questions that relate to this matter, and it may be a duplicate. Looking for a link... – dmckee♦ May 20 '12 at 18:51
Here we go: physics.stackexchange.com/questions/24505/… , physics.stackexchange.com/questions/1673/…, and maybe others. Let me know if those fail to answer your question. – dmckee♦ May 20 '12 at 18:57
I don't think it's as simple as that. Unlike the gravitational constant G which has a numerical value and is a feature of the choice of units, k uses a constant $\epsilon_0$ which has a physical meaning. – Ron May 21 '12 at 12:09
Tell that to the Gaussian units people. You can fold those values into the charge if you want. I don't, but it made sense to some people. – dmckee♦ May 21 '12 at 13:57
@Ron The gravitational constant $G$ involves as much a choice of units as does Coulomb's law (in this case, setting gravitational mass strictly equal - rather than simply proportional - to inertial mass). $G$ can also be written as $1/4\pi\gamma_0$, and if you could ever make a gravitational capacitor then $\gamma_0$ would be the "permittivity" of the vacuum. Since $k$ and $\epsilon_0$ are (so rigidly) proportional, they share all their physical meaning. – Emilio Pisanty May 21 '12 at 23:42
## 3 Answers
Defining the symbol $k$ in Coulomb's law, $$F=k\frac{q_1q_2}{r^2},$$ to be $k=1/4\pi\epsilon_0$, is perfectly allowed when one understands it simply as a definition of $\epsilon_0$. The motivation for this definition is that when you work out the force between two oppositely charged plates of area $A$ and charge $Q$ a distance $d$ apart, it comes out as $F=\frac{2\pi kQ^2}{A}=\frac{Q^2}{2\epsilon_0 A}$, where the factor of $4\pi$ comes from judicious application of Gauss's law.
When you develop this further into a theory of capacitance, you find that it implies the voltage between the plates is $V=Q/C$, where $C=\epsilon_0 A/d$. Further, if you want to insert a dielectric in between the plates (as you often do), then the capacitance changes to $$C=\epsilon A/d$$ where $\epsilon$ is known as the dielectric's electric permittivity. $\epsilon_0$ is then naturally understood as "the permittivity of free space" (which of course simply defines what we mean by permittivity).
The question is then, of course, why is this "derived" unit, $\epsilon_0$, treated as more "fundamental" than the original $k$? The answer is that it isn't since they are equivalent, but the permittivity of free space is far easier to measure (and certainly was so during the late 19th and early 20th centuries when electrical research was very much geared towards circuit-based technologies), so that it came out the winner, and why have two symbols for equivalent quantities?
-
The unit of time, the second, is defined as the duration of a certain number of periods of radiation emitted from a particular type of electron transition between energy levels in an isotope of Cesium (see here).

It is an assumption that light travels at a constant speed $c$ independent of one's reference frame, so now that we have fixed a unit of time, we may define a unit of length: the meter is the distance light travels in $1/299792458\, \mathrm{s}$.
We also define the SI unit of current (the Ampere) so that the permeability of free space takes on a desired value in SI units ($4\pi \times 10^{-7}$).
We may then define $$\varepsilon _0=\frac{1}{\mu _0c^2}$$ as well as $$k=\frac{1}{4\pi \varepsilon _0}.$$
Now, keep in mind, you do not have to fix a system of units to do this (as I did before). As the above are definitions, they will hold in any system of units. However, to see that these definitions do not wind up being circular, it helps to see that we can define $\mu _0$ and $c$ in terms of purely physical phenomena. In other words, for the above definitions to even make sense, we had to know that we could define $c$ and $\mu _0$ independent of $\varepsilon _0$ and $k$ first. The above definition of SI units helps you see that this can be done.
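As a numerical aside (my addition; note that it uses the exact values the SI system assigned to $c$ and $\mu_0$ at the time this answer was written, and that under the post-2019 SI, $\mu_0$ is a measured quantity rather than exactly $4\pi\times10^{-7}$), the chain of definitions above reproduces the familiar $k \approx 8.99\times 10^9\ \mathrm{N\,m^2/C^2}$:

```python
import math

c = 299_792_458               # m/s, exact by definition of the meter
mu0 = 4 * math.pi * 1e-7      # H/m, exact in the pre-2019 SI convention

eps0 = 1 / (mu0 * c**2)       # permittivity of free space, F/m
k = 1 / (4 * math.pi * eps0)  # Coulomb's constant, N m^2 / C^2

assert abs(eps0 - 8.854187817e-12) < 1e-18
assert abs(k - 8.9875517873681764e9) < 1.0
# Under these conventions the 4*pi factors cancel: k = 1e-7 * c^2 exactly.
assert abs(k - 1e-7 * c**2) < 1e-2
```

The last line makes the circularity point concrete: once $c$ and $\mu_0$ are fixed, neither $\epsilon_0$ nor $k$ carries any independent information.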
-
If the question is why the "$4\pi$" in the Coulomb constant (k=$\frac{1}{4\pi\epsilon_{0}}$), then an equally valid question could be why the "4$\pi$" in the magnetic permeability of vacuum, $\mu_{0}=4\pi\times10^{-7}H/m$ ?
Perhaps a clue can be found in the Maxwell's equation for the speed of electromagnetic wave (light) in a vacuum, $c=\frac{1}{\sqrt{\epsilon_{0}\mu_{0}}}$.
Of course, Maxwell derived this relationship much later than Coulomb.
Maxwell relates the electric permitivity to magnetic permeability in the vacuum, $\mu_{0}=\frac{1}{\epsilon_{0}c^{2}}$ which is given a value of $\mu_{0}=4\pi\times10^{-7}H/m$ in SI units.
The 'reason' for the "$4\pi$" appearing here and in Coulomb's constant (believe it or not) is so that Maxwell's equations can be written without any $4\pi$ factors!
In order to understand this, consider how electrostatic phenomena are expressed in Coulomb's law as "field intensity at a distance squared", compared to (the equivalent) Gauss' law, which describes the "flux through a closed surface enclosing the charge".
The total flux is the flux density multiplied by the surface area, which for a sphere of radius $r$ is given by $S=4\pi r^{2}$, so the ratio $S/r^{2}$ = $4\pi$ is simply the result of geometry of space and spherical symmetry.
The SI system of units (unlike the Gauss units) is said to be 'rationalized' because it allows the expression of Maxwell's equations without the $4\pi$ factors. To do this, the $4\pi$ factor has simply been "built into" the (SI unit) definition of the universal constant for permeability of the vacuum, $\mu_{0}=4\pi\times10^{-7}H/m$, from which we can express Coulomb's constant as k=$\frac{1}{4\pi\epsilon_{0}}$.
-
http://mathhelpforum.com/calculus/60026-proof-derivation-product-rule.html

# Thread:
1. ## proof of derivation product rule
I have a question regarding the product rule.
We know that derivative of y with respect to x is:
$\frac{dy}{dx} = f'(x) = \lim_{h\rightarrow 0} \: \frac{f(x+h) - f(x)}{x+h-x} = \lim_{h\rightarrow 0} \: \frac{f(x+h) - f(x)}{h} = \frac{\Delta y}{\Delta x}$
Now I'm trying to understand the proof of product rule:
$(f(x)\cdot g(x))' = f'(x)\cdot g(x) + f(x)\cdot g'(x)$
to simplify notation:
$let\;u=f(x)\;and\;v=g(x).\;\;Then \;D(u\cdot v) = u' \cdot v + u \cdot v'$
Ok, now to the proof:
$(u \cdot v)' = \lim_{h\rightarrow 0} \frac{\left[u(x+h) \cdot v(x+h) \right] - \left[u(x) \cdot v(x) \right]}{h}$
we know that $\Delta u = u(x+h) - u(x) \;,\; thus\; u(x+h) = \Delta u + u(x)$ so we can substitute this with respected to u and v in equation and get:
$(u \cdot v)' = \lim_{h\rightarrow 0} \frac{\left[(\Delta u+u(x)) \cdot (\Delta v + v(x)) \right] - \left[u(x) \cdot v(x) \right]}{h}$
Now to do some algebra we get:
$= \lim_{h\rightarrow 0} \left\{ \frac{\Delta u \cdot \Delta v}{h} + \frac{\Delta u}{h} \cdot v(x) + \frac{\Delta v}{h} \cdot u(x) + \frac{u(x) \cdot v(x)}{h} - \frac{u(x) \cdot v(x)}{h} \right\}$
$= \lim_{h\rightarrow 0} \left\{ \frac{\Delta u \cdot \Delta v}{h} + \frac{\Delta u}{h} \cdot v(x) + \frac{\Delta v}{h} \cdot u(x) \right\}$
And we know that $u' = \lim_{h\rightarrow 0} \frac{\Delta u}{h} \; , \; v' = \lim_{h\rightarrow 0} \frac{\Delta v}{h}$
so finally, substituting $u'$ and $v'$ into the equation, we get:
$= \lim_{h\rightarrow 0} \frac{\Delta u \cdot \Delta v}{h} + u' \cdot v + v' \cdot u$
How can I get rid of $\lim_{h\rightarrow 0} \frac{\Delta u \cdot \Delta v}{h}$, since this seems to be some error in my proof?
2. Originally Posted by tabularasa
How i can get rid of $\lim_{h\rightarrow 0} \frac{\Delta u \cdot \Delta v}{h}$ since this is some error in my proof.
It is not an error. Write it as $\lim_{h\rightarrow 0} h\cdot\frac{\Delta u}h \cdot \frac{\Delta v}{h}$. Then $\lim_{h\to0}\frac{\Delta u}h = u'(x)$, $\lim_{h\to0}\frac{\Delta v}h = v'(x)$, and of course $\lim_{h\to0}h = 0$. Therefore, by the product rule for limits, $\lim_{h\rightarrow 0} h\cdot\frac{\Delta u}h \cdot \frac{\Delta v}{h} = 0$.
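Opalg's point can be illustrated numerically (an aside of my own, with $u(x)=x^2$ and $v(x)=\sin x$ as arbitrary test functions): the leftover term $\frac{\Delta u \cdot \Delta v}{h}$ shrinks like $h$, while the full difference quotient approaches $u'v + uv'$:

```python
import math

def u(x): return x**2
def v(x): return math.sin(x)

x = 1.0
for h in [1e-1, 1e-3, 1e-5]:
    du = u(x + h) - u(x)      # Delta u
    dv = v(x + h) - v(x)      # Delta v
    leftover = du * dv / h    # equals h * (du/h) * (dv/h): shrinks like h
    assert abs(leftover) < 2 * h

# The full difference quotient tends to u'v + uv' = 2x sin(x) + x^2 cos(x).
h = 1e-6
quotient = (u(x + h) * v(x + h) - u(x) * v(x)) / h
exact = 2 * x * math.sin(x) + x**2 * math.cos(x)
assert abs(quotient - exact) < 1e-4
```

Since $\Delta u/h$ and $\Delta v/h$ each stay near the finite values $u'(x)$ and $v'(x)$, the extra factor of $h$ drives the whole term to zero, exactly as the product rule for limits predicts.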
3. Originally Posted by Opalg
It is not an error. Write it as $\lim_{h\rightarrow 0} h\cdot\frac{\Delta u}h \cdot \frac{\Delta v}{h}$. Then $\lim_{h\to0}\frac{\Delta u}h = u'(x)$, $\lim_{h\to0}\frac{\Delta v}h = v'(x)$, and of course $\lim_{h\to0}h = 0$. Therefore, by the product rule for limits, $\lim_{h\rightarrow 0} h\cdot\frac{\Delta u}h \cdot \frac{\Delta v}{h} = 0$.
Where's that $h$ in $\lim_{h\rightarrow 0} h\cdot\frac{\Delta u}{h} \cdot \frac{\Delta v}{h}$ coming from?
4. Originally Posted by tabularasa
Where's that $h$ in $\lim_{h\rightarrow 0} h\cdot\frac{\Delta u}{h} \cdot \frac{\Delta v}{h}$ coming from?
Notice that he split the denominator into $h^2$, so there has to be another $h$ in the numerator to counter that.
5. Originally Posted by FusionHK
Notice that he split the denominator into h^2, so there has to be another h in the numerator to counter that.
Aah, too clever. I just didn't see that one coming. Thank you!
http://math.stackexchange.com/questions/69774/is-the-class-of-cardinals-totally-ordered

# Is the class of cardinals totally ordered?
In a Wikipedia article
http://en.wikipedia.org/wiki/Aleph_number#Aleph-one
I encountered the following sentence:
"If the axiom of choice (AC) is used, it can be proved that the class of cardinal numbers is totally ordered."
But isn't the class of ordinals totally ordered (in fact, well-ordered) without the axiom of choice? Being a subclass of the class of ordinals, isn't the class of cardinals obviously totally ordered?
-
I think the issue is with the actual definition of "cardinal" when the axiom of choice is not assumed. It seems like the axiom is invoked somewhere in the process of the typical definition of cardinals. – Bill Cook Oct 4 '11 at 13:06
The axiom of choice is equivalent to the statement that all cardinals have a corresponding ordinal. So if AC is not true, there are cardinals which cannot be well-ordered. – Thomas Andrews Oct 4 '11 at 13:41
The following questions (in particular the first one) seem to be related to this one: math.stackexchange.com/questions/53770/… math.stackexchange.com/questions/53752/… – Martin Sleziak Oct 4 '11 at 13:42
@Bill: Only if you "insist". We can define $\aleph$ cardinals just the same without the axiom of choice. For example, $|X|=\aleph_\alpha$ if and only if there is a bijection between $X$ and the $\alpha$-th initial ordinal. – Asaf Karagila Oct 4 '11 at 16:06
## 3 Answers
If I understand the problem correctly, it depends on your definition of cardinal. If you define the cardinals as initial ordinals, then your argument works fine, but without choice you cannot show that every set is equinumerous to some cardinal. (Since AC is equivalent to every set being well-orderable.)
On the other hand, if you have some definition which implies that each set is equinumerous to some cardinal number, then without choice you cannot show that any two sets (any two cardinals) are comparable. (AC is equivalent to: for any two sets $A$, $B$ there exists either an injective map $A\to B$ or an injective map $B\to A$. It is listed as one of the equivalent forms of AC at wiki.)
-
The statement that the class of cardinals is a subclass of the class of ordinals is equivalent to the axiom of choice.
-
The ordinals are well ordered regardless of any assumption of choice. It is defined that the class of ordinal numbers is the smallest transitive class that can be well ordered, and they form a backbone of transitive models; that is, if $M,N$ are two transitive models that have the same ordinals, then $L^M=L^N$.
Without the axiom of choice there are non-well orderable sets. Their cardinality, if so, is not an $\aleph$ number. We define the cardinality of $X$ as either finite, or $\aleph_\alpha$ for some ordinal $\alpha$, in case that $X$ can be well ordered; or as a definable subset of the class of $A$'s such that there is a bijection between $X$ and $A$.
For example, it is consistent with ZF that there are infinite sets that cannot be split into two infinite sets (every partition into two disjoint sets will yield one of them finite). Such set does not even have a countable subset and therefore incomparable with $\aleph_0$.
To see how total order of cardinals is equivalent to the axiom of choice:
If the axiom of choice holds, then every set can be well ordered and is finite or equivalent to some $\aleph$-cardinal. Therefore all cardinals are $\aleph$'s and so cardinals are totally ordered (and well ordered too).
On the other hand, if cardinals are totally ordered, given a set $X$ denote $H(X)$ the least ordinal $\alpha$ such that there is no injective function from $\alpha$ into $X$.
It can be shown that $\alpha$ is an $\aleph$ cardinal (by the fact it has no bijection with smaller ordinals which are injectible into $X$), and $\alpha\nleq|X|$.
By the assumption that cardinalities of all sets are comparable $|X|$ is comparable with $\alpha$, therefore $|X|<\alpha$, and we have that $X$ can be injected into $\alpha$ and so it inherits a well order from such injection.
Therefore all sets can be well ordered, which is equivalent to the axiom of choice.
(The object which I denoted $H(X)$ is also known as the Hartogs number.)
-
http://mathoverflow.net/questions/112658/set-exponentiation-is-y-always-disjoint-from-yx-closed

## Set Exponentiation: Is Y always disjoint from Y^X? [closed]
If $y \in Y$ and $g \in Y^X$, we often write $y+g$ as shorthand for the map $x \mapsto y+ g(x)$. Similarly if $f \in Y^X$ then $f+g = x \mapsto f(x)+g(x)$. However this presupposes that we can distinguish between an element of $Y$ and an element of $Y^X$. That is, we require these sets be disjoint. Are they?
-
Why do you need to presuppose such a thing? Just declare that $y$ is shorthand for the constant map $X \to Y$ with value $y$, then note that if $Y$ is a monoid then $Y^X$ canonically inherits a monoid structure. None of what I've said depends on questions like whether $Y$ and $Y^X$ are disjoint (and such questions are not even meaningful in the version of set theory in my head). – Qiaochu Yuan Nov 17 at 9:09
Yes. Why? Mathematicians overload symbols all the time. – Qiaochu Yuan Nov 17 at 10:08
Of course, in general the sets may not be disjoint, so the problem you identify actually occurs. The fact is that $Y$ may have a function from $X$ to (some other part of) $Y$ as an element. For example, consider $Y=\text{HC}$, the set of all hereditary countable sets, and let $X=\omega$; observe in this case that $Y^X\subset Y$, since any function from $\omega\to\text{HC}$ is itself hereditarily countable. Similar examples abound. But meanwhile, this is rarely a problem for mathematical communication, since one can resolve ambiguities in notation by explaining what is meant. – Joel David Hamkins Nov 17 at 11:13
Qiaochu, your proposed solution about reconsidering every element of $Y$ to be a constant map from $X$ to $Y$ doesn't actually resolve the ambiguity, in the case that $Y$ itself has that map as a point. In other words, it could be that your new version of $3+g$ is still ambiguous, if the constant $3$ map is also in $Y$ (as it is in my example with HC). That is, are you adding $3$ to each point $g(x)$, or are you adding the constant map $3$ to each point $g(x)$? (Absurd, I know...) – Joel David Hamkins Nov 17 at 11:29
I should have written "hereditarily countable" rather than "hereditary countable". And it is fine, Yianni, that you posted my comment as an answer. – Joel David Hamkins Nov 17 at 22:14
## 1 Answer
As Joel David Hamkins points out, the assertion is false.
"The fact is that $Y$ may have a function from $X$ to (some other part of) $Y$ as an element. For example, consider $Y=\mathrm{HC}$, the set of all hereditarily countable sets, and let $X=\omega$; observe in this case that $Y^X \subset Y$, since any function from $ω \rightarrow HC$ is itself hereditarily countable. Similar examples abound."
-
Joel: If $Y^X=Y$, $x \in X$ and $f \in Y$ wouldn't the sequence $f, f(x), f(x)(x), \dots$ contradict foundation? – Ramiro de la Vega Nov 18 at 0:06
Ramiro, yes, I had come to the same conclusion myself, and had just previously deleted my comment. – Joel David Hamkins Nov 18 at 0:13
But meanwhile, what my argument shows is that one can arrange $Y^X=Y\cup Y_0$ for any given nonempty $Y_0$, by starting with $Y_0$, and then adding all functions from $X$ to what you have so far. After $|X|^+$ iterations, you get $Y$ with $Y^X=Y\cup Y_0$. – Joel David Hamkins Nov 18 at 0:16
Joel, in your last comment, the equation should be $Y=Y_0\cup Y^X$. That is, $Y_0$ is part of $Y$, not of $Y^X$. – Andreas Blass Nov 19 at 11:59
Andreas, yes, that's right. – Joel David Hamkins Dec 4 at 2:29
http://math.stackexchange.com/questions/270995/how-to-get-coordinates-of-point-knowing-distance-from-x-y-and-angle

# How to get coordinates of point knowing distance from x,y and angle?
I have the following problem:

I am given:

• x,y
• $\|a\|$
• $\alpha$
• $\vec{v}$ and $\|v\|$

I need to get the coordinates of the point $(X_1, Y_2)$.
-
## 2 Answers
Use the fact that for two vectors $v=(x_1,x_2)$, $w=(y_1,y_2)$ we can evaluate $v\cdot w$, the dot product of $v$ and $w$, in two ways. They are: $$v\cdot w=x_1y_1+x_2y_2$$ and $$v\cdot w=|v||w|\cos(\alpha)$$
Personally, I prefer @Karolis's answer but we can have an elementary approach according to what was given to us.
• $\|a\|=\sqrt{(X-X_1)^2+(Y-Y_2)^2}$
• $XX_1+YY_2=v\cdot w=\|v\|\,\|a\|\cos(\alpha)$

The above system has two equations in two unknowns. As you noted, we know $\|a\|,\|v\|,\alpha,X,Y$; so put in the known values and solve for $X_1,Y_2$. I hope I could help.
-
Thanks a lot for this but if I am correct I will have an equation with 2 unknowns, right ? – Patryk Jan 6 at 0:50
@Patryk: Right. But you have a system of equation with 2 unknowns and I think after solving this system, you will get X1 and Y1. Tell me if you have any problem in it. ;-) – Babak S. Jan 6 at 6:36
Somehow I can't see the second equation. Can you edit your answer so that it is a bit more clearer :) ? – Patryk Jan 6 at 10:45
@Patryk: Sorry and forgive me for the delay. – Babak S. Jan 6 at 21:02
Good job explaining! +1 – amWhy Feb 23 at 0:07
$$(x_1, y_1) = (x, y) + \frac{a}{\|v\|} \cdot R(\alpha) \cdot \vec{v}$$ Where $R(\alpha)$ is a rotation matrix.
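Spelled out in coordinates, this formula amounts to rotating $\vec{v}$ by $\alpha$, rescaling to length $a$, and translating by $(x, y)$. A sketch of my own (the function name and the counterclockwise sign convention are my assumptions):

```python
import math

def locate(x, y, vx, vy, a, alpha):
    """Point at distance `a` from (x, y), along the direction obtained by
    rotating the vector (vx, vy) counterclockwise through `alpha` radians."""
    # Apply the rotation matrix R(alpha) to v.
    rx = vx * math.cos(alpha) - vy * math.sin(alpha)
    ry = vx * math.sin(alpha) + vy * math.cos(alpha)
    # Scale by a/||v|| (rotation preserves length, so ||(rx, ry)|| = ||v||).
    s = a / math.hypot(vx, vy)
    return x + s * rx, y + s * ry

# Starting at the origin with v along +x, a 90-degree turn and distance 2
# should land on (0, 2).
px, py = locate(0.0, 0.0, 1.0, 0.0, 2.0, math.pi / 2)
assert abs(px) < 1e-12 and abs(py - 2.0) < 1e-12
```

The scaling step is exactly the $\frac{a}{\|v\|} R(\alpha)\vec{v}$ term of the formula; adding $(x, y)$ moves the result to the right origin.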
-
@Patryk: This is a best answer for a fast result. +1 ;-) – Babak S. Jan 6 at 11:00
http://math.stackexchange.com/questions/25333/why-does-0-1

# Why does 0! = 1? [duplicate]
Possible Duplicate:
Prove $0! = 1$ from first principles
Why does 0! = 1?
All I know of factorial is that x! is equal to the product of all the numbers that come before it. The product of 0 and anything is 0, and seems like it would be reasonable to assume that 0! = 0. I'm perplexed as to why I have to account for this condition in my factorial function (Trying to learn Haskell). Thanks.
-
This can be answered by a simple google search. – Eric♦ Mar 6 '11 at 19:05
– Rahul Narain Mar 6 '11 at 19:07
Dear Orbit, please refer to the question linked to above on this website. – Akhil Mathew Mar 6 '11 at 19:41
## marked as duplicate by Rasmus, Jonas Meyer, Akhil Mathew Mar 6 '11 at 19:41
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
## 5 Answers
Mostly it is based on convention, for example when one wants to define the quantity $\binom{n}{0} = \frac{n!}{n!\,0!}$. An intuitive way to look at it is that $n!$ counts the number of ways to arrange $n$ distinct objects in a line, and there is only one way to arrange nothing.
-
Same question here as I have for Eric, why 1 way to arrange nothing, instead of 0 ways? – Orbit Mar 6 '11 at 19:12
Can you give an argument as to why it *should* be 0 ways? – Mitch Mar 8 '11 at 18:46
+1 This is a nice answer, it seems the factorial operation was created exactly to count the number of ways to arrange $n$ objects in a line. – Gustavo Bandeira Nov 25 '12 at 2:13
It has many reasons.
For example, we can have a power series: $e^x = \sum_{n=0}^{\infty} x^n/n!$ and we would like the first term to be $1$.
Also, how many permutations are there of $0$ numbers? Well, one.
-
In a combinatorial sense, $n!$ refers to the number of ways of permuting $n$ objects. There is exactly one way to permute 0 objects, that is doing nothing, so $0!=1$.
There are plenty of resources that already answer this question. Also see:
http://mathforum.org/library/drmath/view/57905.html
http://wiki.answers.com/Q/Why_is_zero_factorial_equal_to_one
http://en.wikipedia.org/wiki/Factorial#Definition
-
Intending on marking as accepted, because I'm no mathematician and this response makes sense to a commoner. However, I'm still curious why there is 1 way to permute 0 things, instead of 0 ways. – Orbit Mar 6 '11 at 19:10
If you want to do nothing, there is a way to do it, you just don't do it. But if you say there are 0 ways to do nothing, then you are implying that it is impossible to do nothing, which is of course not the case. This is how I look at it. – fdart17 Mar 6 '11 at 19:37
We know that $\binom{n}{n} = 1$ and $\binom{n}{0} = 1$. Thus $0! = 1$.
-
The theorem that $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ already assumes $0!$ is defined to be $1$. Otherwise this would be restricted to $0 <k < n$. A reason that we do define $0!$ to be $1$ is so that we can cover those edge cases with the same formula, instead of having to treat them separately. We treat binomial coefficients like $\binom{5}{6}$ separately already; the theorem assumes $0 \leq k \leq n$. – Carl Mummert Mar 6 '11 at 19:33
It's because $n! = \prod_{0<k\le n} k$ ($n!$ is the product of all the numbers $1, 2,\dots, n$). For $n = 0$ there isn't any number greater than 0 and less than or equal to $n$, so the product is empty; the empty product is defined by convention as 1.
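That empty-product convention maps directly onto code; a minimal Python sketch (the asker mentioned Haskell, but the base case is the same idea in any language):

```python
from math import prod

def factorial(n):
    # n! as the product over k = 1..n; for n = 0 the range is empty,
    # and the product of an empty iterable is 1 by convention.
    return prod(range(1, n + 1))

print(factorial(0), factorial(5))  # 1 120
```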
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9546290636062622, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/295802/inductive-prove-sum-k-1n-binom-n-k-2n-1 | # inductive prove$\sum_{k=1}^{n} \binom n k = 2^n -1$
Today I wrote a calculus exam in which I was given this problem, to prove by induction: $$\sum_{k=1}^{n} \binom n k = 2^n -1$$ for all $n\in \mathbb{N}$
I have the feeling that I will get $0$ points for my solution, because I did this:
Base Case: $n=1$
$$\sum_{k=1}^{1} \binom 1 k = 1 = 2^1 -1 .$$
Induction Hypothesis: for all $n \in \mathbb{N}$: $$\sum_{k=1}^{n} \binom n k = 2^n -1$$
Induction Step: $n \rightarrow n+1$
$$\sum_{k=1}^{n+1} \binom {n+1} {k} = \sum_{k=1}^{n} \binom {n+1} {k} + \binom{n+1}{n+1} = 2^{n+1} -1$$
Please show me my mistake, because next time is my last chance in this class. Thanks.
-
The problem is that you prove nothing during the induction phase. – Damien L Feb 5 at 21:56
yeaaah, this is my problem i think. can you please give me more steps in between? – doniyor Feb 5 at 21:57
Another nitpick, your hypothesis is what you want to prove. You should say that there exists an $n\in\Bbb N$ such that for all $k\leq n$ the identity holds. – Clayton Feb 5 at 21:58
## 3 Answers
Your first mistake is that you stated the induction hypothesis incorrectly: it should be simply that
$$\sum_{k=1}^n\binom{n}k=2^n-1\;.\tag{1}$$
What you gave as induction hypothesis is actually the theorem that you’re trying to prove!
Now, assuming $(1)$ you want to prove that $$\sum_{k=1}^{n+1}\binom{n+1}k=2^{n+1}-1\;.$$
This is an occasion when it’s not a good idea to split off the last term. Instead, use Pascal’s triangle identity for binomial coefficients:
$$\begin{align*} \sum_{k=1}^{n+1}\binom{n+1}k&=\sum_{k=1}^{n+1}\left(\binom{n}k+\binom{n}{k-1}\right)\\\\ &=\sum_{k=1}^{n+1}\binom{n}k+\sum_{k=1}^{n+1}\binom{n}{k-1}\\\\ &\overset{*}=\sum_{k=1}^n\binom{n}k+\sum_{k=0}^n\binom{n}k\\\\ &=\left(2^n-1\right)+\binom{n}0+\sum_{k=1}^n\binom{n}k\\\\ &=2^n-1+1+2^n-1\\\\ &=2\cdot2^n-1\\\\ &=2^{n+1}-1\;. \end{align*}$$
At the starred step I used the fact that $\binom{n}{n+1}=0$ to drop the last term of the first summation, and I did an index shift, replacing $k-1$ by $k$, in the second summation.
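As a numerical sanity check of the identity (not a substitute for the induction), in Python:

```python
from math import comb

# Check sum_{k=1}^{n} C(n, k) == 2^n - 1 for small n.
for n in range(1, 11):
    assert sum(comb(n, k) for k in range(1, n + 1)) == 2**n - 1
print("identity holds for n = 1..10")
```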
-
Brian, brilliant. my Bad, i am soo dumb. thank you so much – doniyor Feb 5 at 22:05
@doniyor: You’re very welcome. – Brian M. Scott Feb 5 at 22:05
suppose that $$\sum_{k=1}^{n} \binom n k = 2^n -1$$ then $$\sum_{k=1}^{n+1} \binom {n+1}{k} =\sum_{k=1}^{n+1}\Bigg( \binom {n}{ k} +\binom{n}{k-1}\Bigg)=$$ $$=\sum_{k=1}^{n+1} \binom {n}{ k} +\sum_{k=1}^{n+1}\binom{n}{k-1}=$$ $$=\sum_{k=1}^{n} \binom {n}{ k}+\binom{n}{n+1}+\sum_{k=0}^{n} \binom {n}{k}=$$ $$=\sum_{k=1}^{n} \binom {n}{ k}+\binom{n}{n+1}+\binom{n}{0}+\sum_{k=1}^{n} \binom {n}{k}=$$ $$=2\sum_{k=1}^{n} \binom {n}{ k}+\binom{n}{n+1}+\binom{n}{0}=$$ since $\binom{n}{n+1}=0, \binom{n}{0}=1$ $$=2\sum_{k=1}^{n} \binom {n}{ k}+1=2(2^n-1)+1=2^{n+1}-2+1=2^{n+1}-1$$
-
wow, great. thanks Adi. there are so smooth and beautiful ways... – doniyor Feb 5 at 22:24
There are a few points where I'd suspect someone could take issue with your answer:
The induction hypothesis is supposed to be the assumption that the statement holds for a particular value $n$ (or for all values from the base case up to $n$), which you then use to prove the $n+1$ case. You seem to imply that the induction hypothesis holds for all natural numbers, which is a bit of an overstatement to my mind.
Lastly, where is the use of the induction hypothesis in the last step? Where do you use the $2^n-1$ part that should be part of the solution?
Using Pascal's rule instead: $\sum_{k=1}^{n+1} \binom {n+1} {k} = \sum_{k=1}^{n+1}\left(\binom {n}{k} + \binom{n}{k-1}\right) = \sum_{k=1}^{n}\binom {n}{k} + \sum_{k=0}^{n}\binom {n}{k} = (2^n-1) + 1 + (2^n-1) = 2^{n+1}-1$, which actually uses the hypothesis.
-
i just blindly wrote $2^{n+1}-1$. i didnot know how to come to solution thru binomials. – doniyor Feb 5 at 22:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9574621319770813, "perplexity_flag": "head"} |
http://mathhelpforum.com/pre-calculus/160954-points-intersection.html | # Thread:
1. ## points of intersection
$y=x^3-2x+1$
$y=x^2$
The question says to use the bisection method to find the points of intersection of the 2 curves. I know how to use the bisection method to find the root of an equation (like on this page First Steps in Numerical Analysis), but how would I use it to find the point of intersection? Thanks.
Moderator edit: Use [tex] and [/tex] tags NOT [latex] and [/latex].
2. You have $y=x^3-2x+1$ and $y=x^2$ and intersection occurs at $x^2=x^3-2x+1$
Holding that thought
$x^2=x^3-2x+1$
$x^2-x^3+2x-1= 0$
Now apply the bisection method; to kick it off, try the interval (0,1).
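A minimal Python sketch of this procedure (the tolerance is an arbitrary choice):

```python
def bisect(f, a, b, tol=1e-10):
    # Standard bisection; f(a) and f(b) must have opposite signs.
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Intersection of y = x^3 - 2x + 1 and y = x^2 on (0, 1):
f = lambda x: x**2 - x**3 + 2*x - 1
x = bisect(f, 0.0, 1.0)
print(x)  # ~0.44504, the x-coordinate of the intersection in (0, 1)
```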
3. thanks! got the answer | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8814338445663452, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/3639/what-does-ramification-have-to-do-with-separability | ## What does ramification have to do with separability?
Does ramification have anything to do with inseparability? It feels like an extension of Q in which p ramifies should somehow correspond to an extension of F_p(t). Does totally ramified <--> purely inseparable?
In fact, saying an irreducible polynomial f(x) is inseparable is the same as saying that f(x) ramifies when we extend Q[x] to L[x], where L is the splitting field of f(x).
By correspond, I generally mean taking an extension of Q defined by a root of p(x)=r, where r is a rational making the extension nontrivial, and then extending F_p(t) by a root of p(x)=t. It's interesting because then this extension of F_p(t) corresponds to a number of extensions of Q (this is the same thing when you do Galois theory by looking at fundamental groups of branched coverings of C. Then do you look at etale fundamental groups of objects associated to these function fields over finite fields?).
-
I should also add that saying that an extension $L/K$ is separable is equivalent to saying that the extension $L[x]/K[x]$ of Dedekind domains is unramified - that is, irreducible polynomials in $K[x]$ do not have any repeated factors in $L[x]$. – David Corwin Aug 23 2010 at 23:50
## 4 Answers
Let $(A,m)$ and $(B,n)$ be local rings, with a local map $f : A \to B$. (The condition that $f$ is local means that $f^{-1}(n) = m$.) Also, assume that $f$ obeys a finiteness condition called "essentially of finite type"; I'll ignore this. By definition, $f$ is unramified if (1) $mB = n$ and (2) $B/n$ is a separable extension of $A/m$. Condition (1) is usually the hard part to verify, but in this answer I will concentrate on condition (2) and try to provide some intuition for why this condition is included.
Let $f : A \to B$ be a map of rings. I might need some finite generation hypothesis; I'm not sure. Then $f$ is unramified if and only if the following is true: for every prime ideal $p$ in $A$, the tensor product $B \otimes_{A} \overline{\mathrm{Frac}(A/p)}$ is isomorphic to a direct sum of several copies of $\overline{\mathrm{Frac}(A/p)}$. Here the bar indicates algebraic closure.
Tensoring with the algebraic closure of the residue field at a prime is called "taking the geometric fiber" over that prime, in algebraic geometry. So the geometric statement is that a map is unramified if and only if all of its geometric fibers are reduced and of dimension 0. (Again, modulo any finiteness hypothesis I may have forgotten.)
The point here is that, if $L/K$ is a separable algebraic field extension, then $L \otimes_K \bar{K}$ is isomorphic to $\bar{K}^{[L:K]}$. For an inseparable extension, this tensor product has nilpotents. (Specifically, if $t$ is in $L$ but not in $K$, and $t^p=u$ is in $K$, then $t-u^{1/p}$ will become nilpotent in the tensor product.) So the geometric fiber will not be reduced for such an extension.
While the definition of unramified requires separability, in the sense explained above, there is no implication in the other direction.
I used the early parts of deJong's notes as a reference when writing this.
-
In the third paragraph, did you mean to say that a map is *un*ramified iff all its geometric fibers are reduced of dimension 0? – Alison Miller Nov 2 2009 at 3:31
Thanks, I've fixed it now. – David Speyer Nov 2 2009 at 3:55
The preceding error also occurs in the second paragraph. Also, I think it should be stressed that mB = n is usually the hardest part of verifying a map is unramified! – Bhargav Nov 2 2009 at 17:34
Thanks for all the comments. I have revised the answer. – David Speyer Nov 2 2009 at 18:02
In the related context of dominant maps of curves (i.e., extensions of function fields), a map which is ramified at every point must be inseparable. For example, for $\mathbf{P}^1$ in characteristic $p$, the map $z \mapsto z^p$ is ramified everywhere, and corresponds to the inseparable extension $k(z^p) \to k(z)$. On the other hand, a separable map must be generically smooth and thus ramified at only finitely many points. I believe the converse also holds (every inseparable map is everywhere ramified), but I'm not entirely sure.
-
I don't think this is true.
Any quadratic field can be defined by taking a root of x^2 = d. Your recipe, if I understand it right, would say to look at the polynomial x^2 - t over F_2(t). This polynomial is inseparable, so your hypothesis would imply that 2 ramifies in ANY quadratic extension of Q, which is false.
As to the more conceptual question, I don't know of any results that explicitly define some canonical extension of a function field corresponding to any extension of Q. In other words, I don't think there's any way to talk about ramification of number fields in terms of inseparability of characteristic p extensions of function fields. If one sticks to function fields themselves and asks what ramification has to do with separability, the answer is that they are certainly not the same thing.
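The failure can be made concrete: the discriminant of x^2 - d is 4d, so the repeated-factor test at p = 2 fires for *every* d, while 2 actually ramifies in Q(sqrt(d)) only when it divides the field discriminant. A small Python sketch (assuming d squarefree):

```python
def poly_disc_mod_p(d, p):
    # x^2 - d has a repeated factor mod p exactly when p divides
    # its discriminant, which is 4d.
    return (4 * d) % p == 0

# The naive polynomial test fires at p = 2 for every d:
assert all(poly_disc_mod_p(d, 2) for d in (2, 3, 5, -1, -3, 13))

def two_ramifies(d):
    # 2 ramifies in Q(sqrt(d)) iff 2 divides the field discriminant,
    # which is d if d = 1 (mod 4), else 4d (d squarefree).
    return d % 4 != 1

assert not two_ramifies(5)    # 2 is unramified in Q(sqrt(5))
assert two_ramifies(3)        # 2 ramifies in Q(sqrt(3))
print("x^2 - d always looks ramified mod 2, but 2 need not ramify")
```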
-
Hmm okay but 2 ramifies in a large class of quadratic extensions of Q. Also, ramification and separability are both related to the existence of nilpotents in algebras. – David Corwin Nov 2 2009 at 1:44
Here's a candidate definition for "total ramification" which jibes quite well with pure inseparability:
A finite map f:X -> Y is totally ramified at y if the scheme theoretic fibre X_y -> y is a universal homeomorphism.
If k is a field, and A is a finite k-algebra, then A is totally ramified over k in the above sense if and only if a) A is local, and b) the last condition holds after all base changes on k. If k has characteristic 0, this is equivalent to requiring that A be local with residue field k. This means that in the case X and Y are curves over a field of characteristic 0, this gives the usual notion. On the other hand, a finite extension L/K of fields is totally ramified in the above sense iff L/K is geometrically connected iff L/K is purely inseparable.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.929720938205719, "perplexity_flag": "middle"} |
http://en.wikipedia.org/wiki/Molecular_modeling | # Molecular modelling
The backbone dihedral angles are included in the molecular model of a protein.
Modeling of ionic liquid
Molecular modelling encompasses all theoretical methods and computational techniques used to model or mimic the behaviour of molecules. The techniques are used in the fields of computational chemistry, drug design, computational biology and materials science for studying molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling techniques is the atomistic-level description of the molecular systems. This may include treating atoms as the smallest individual unit (the molecular mechanics approach), or explicitly modelling the electrons of each atom (the quantum chemistry approach).
## Molecular mechanics
Molecular mechanics is one aspect of molecular modelling, as it refers to the use of classical mechanics/Newtonian mechanics to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and van der Waals forces. The Lennard-Jones potential is commonly used to describe van der Waals forces. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is known as a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are known as energy minimization techniques (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are known as molecular dynamics.
$E = E_\text{bonds} + E_\text{angle} + E_\text{dihedral} + E_\text{non-bonded} \,$
$E_\text{non-bonded} = E_\text{electrostatic} + E_\text{van der Waals} \,$
This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters are collectively known as a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using high level quantum calculations and/or fitting to experimental data. The technique known as energy minimization is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes. A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, $\mathbf{F} = m\mathbf{a}$. Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization technique is useful for obtaining a static picture for comparing between states of similar systems, while molecular dynamics provides information about the dynamic processes with the intrinsic inclusion of temperature effects.
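A toy sketch of two of these ingredients, a Lennard-Jones pair term and steepest-descent minimization, in Python with reduced units (the step size and iteration count are arbitrary choices, not recommendations):

```python
def lj(r, eps=1.0, sigma=1.0):
    # Lennard-Jones pair potential in reduced units
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_grad(r, eps=1.0, sigma=1.0):
    # dU/dr; the force on the pair is the negative of this
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (-12.0 * sr6 * sr6 + 6.0 * sr6) / r

# Steepest descent on the pair separation: step against the gradient.
r, step = 1.5, 0.01
for _ in range(2000):
    r -= step * lj_grad(r)

print(r)  # converges to the minimum at 2**(1/6) * sigma ~ 1.1225
```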
## Variables
Molecules can be modeled either in vacuum or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are known as implicit solvation simulations.
## Applications
Molecular modelling methods are now routinely used to investigate the structure, dynamics, surface properties and thermodynamics of inorganic, biological and polymeric systems. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes.
http://physics.stackexchange.com/questions/31451/adding-rotation-to-internal-coordinates/32138 | # Adding rotation to internal coordinates
I'm optimizing the geometry of a system composed of several interacting masses (a molecule). The energy of the system depends on the relative position of the masses, and all velocities are zero.
In a simple case, the energy is invariant under translation and rotation of the whole system, so I can optimize using internal coordinates, defining these internal coordinates as the different distances, angles and dihedral angles between the masses. For converting coordinates to and from Cartesian I have the matrix of coordinate derivatives, etc.; see J. Chem. Phys. 117, 9160 (there are small mistakes in eqs. 34 and 35).
Now the question is how to proceed when the energy is not invariant under translation and rotation. I have to include translation and rotation of the whole system among the degrees of freedom of the optimization. Translation is easy, because I can add the Cartesian coordinates of the center of mass, and the derivatives with respect to the coordinates of the masses are easy. But how can I add rotation?
I guess it will be related to the orientation of the principal axes of inertia, but I can't find a clear expression that will give me the three rotational coordinates I need, and their derivatives with respect to all the masses' coordinates.
Can anyone give me some help or pointers?
-
Is the system invariant or not under translations and rotations? i.e.: do you have an invariant system and are hoping to include a description of translation and rotation? or do you have interactions that break the translational and rotational invariance? if so, what coordinates do these interactions depend on? – Emilio Pisanty Jul 9 '12 at 14:09
@EmilioPisanty In general, the system is not invariant under translations and rotations. The dependence is not simple: there's a large number of fixed external point charges ("fixed" meaning they are not affected by rotation or translation). Note that I'm not asking for a method or algorithm to optimize the geometry, but for a way to add translation and rotation degrees of freedom of the whole system to a set of redundant internal coordinates. – Jellby Jul 9 '12 at 14:46
If you are looking for a way to describe an orientation of an object, Euler angles might be what you are looking for. – Alexey Bobrick Jul 9 '12 at 23:50
What's wrong with just using cartesian coordinates for everything? You can add SO(3) coordinates for the system, but you can also just write down the force laws in cartesian coordinates, or using SO(3) coordinates plus translations for the internal stuff. – Ron Maimon Jul 10 '12 at 20:06
@RonMaimon In chemical systems it is often more efficient (fewer iterations) to use internal coordinates. Internal coordinates also make it easier to add constraints to distances and angles, and are more closely related to the "meaning" of the process. But of course, using Cartesian coordinates is always a possibility. – Jellby Jul 11 '12 at 7:59
## 3 Answers
### EDIT: Later addition
This project is ill conceived, you should put the system in rectangular absolute coordinates, there is absolutely no gain from what you are doing, it is objectively wrong, and needlessly complicated.
However, you are specifying the location of point particles one after the other, in relative polar coordinates, so what you can do to fix the orientation of the whole thing is add 1 fictitious new particle, which is attached to the next atom with an angle, and gives you an orientation angle for the whole thing. This is probably the simplest modification.
I have to say that you're wasting your time in writing code for point particles using relative coordinates. This is deranged.
### For rigid bodies
You should use the orientation of one of the components as defining the orientation of the whole molecule. There is no analog of the center of mass for rotations.
The total energy function can be written in terms of the rotation matrices $R_i$ which rotate the parts from some initial choice of orientation to the one you are looking at, plus the position of the center of mass.
$$H(x_1,R_1,x_2,R_2....,x_n,R_n)$$
It is invariant with respect to translations and rotations
$$H(RR_i,x_i+c) = H(R_i,x_i)$$
The center of mass coordinates give you a center for x, but there is no analogous orientation average, because the space of orientations is not infinite in extent. So instead of using an orientation average, you just use the orientation of one object, say object 1, to fix the orientation. Then the R matrix for object 1 defines the global orientation, and the rest of the objects orientations are relative to the first.
The energy is then a function of R, the orientation of object 1, and of the relative orientations of the other objects to object 1, $R_1^{-1} R_i$ and the positions relative to the center of mass. The relative orientation between object i and object j is $R_i^{-1}R_j$, and this relative orientation is independent of whether you use the R matrix relative to the frame defined by object 1, or relative to the original frame. The whole object can be reconstructed by placing the center of mass somewhere, placing object 1 at the right position with orientation R, then the rest of the objects relative to the axes defined by object 1.
You shouldn't use the moment of inertia principal axes as your definition of the orientation of the whole thing, because these become discontinuous when two of the moments of inertia become equal, so that the object will rotate discontinuously at those times when two moments happen to become equal.
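The invariance of the relative orientations $R_i^{-1}R_j$ described above is easy to check numerically; a small numpy sketch (the particular angles are arbitrary test data):

```python
import numpy as np

def rot_z(t):
    # rotation by angle t about the z axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(t):
    # rotation by angle t about the x axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

R1, R2 = rot_z(0.3), rot_z(1.1)   # orientations of objects 1 and 2
rel_12 = R1.T @ R2                # relative orientation R1^{-1} R2

# A global rotation Rg applied to every object leaves rel_12 unchanged:
Rg = rot_x(-0.7)
rel_after = (Rg @ R1).T @ (Rg @ R2)
print(np.allclose(rel_12, rel_after))  # True
```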
-
I'd need two objects (not aligned with the center of rotation) to define the orientation, I guess. Otherwise the orientation around the axis determined by the first object is undefined, right? Now, when it comes to defining all the derivatives $B_{ij}$, does it mean all derivatives with respect to objects other that 1 are zero? Or are all as in the equations in my answer? – Jellby Jul 16 '12 at 8:59
@Jellby: I assumed the "objects" are irregular, so that they each have an orientation as rigid bodies. Are these rigid bodies or point particles? If they are point particles, ignore this answer, but from your description, it seemed you had orientation matrices for pairs, so they were rigid bodies, like amino acids. The derivatives "B_ij" for the rotation matrices are the angular velocities, which are given in antisymmetric matrix form by $R^{-1}\dot{R}$ where dot is a time derivative. – Ron Maimon Jul 16 '12 at 9:15
No, my "objects" are just point particles (nuclei), and there is no velocity either, as there's no time. The derivatives I need are of internal (or global) coordinates with respect to Cartesian coordinates of the nuclei, or vice versa, which are used to convert between the two kinds of coordinates. – Jellby Jul 16 '12 at 9:37
@Jellby: Ok, then you need to fix four arbitrary point particles to define the frame. You should have said this, because if you are doing point particles, it is absolutely 100% wrong to use any sort of relative coordinates, you should use cartesian coordinates period, and stop wasting time. – Ron Maimon Jul 16 '12 at 17:27
I'm sorry that's what you think. But as I said, in the comments to my question, Cartesian coordinates, while possible, are generally not the most efficient. It is true this is not a life-or-death question, I can have the job done by using Cartesian coordinates, but since I already have a program that works very nicely with internal coordinates when there is translational and rotational invariance, I just needed to add these degrees of freedom. – Jellby Jul 16 '12 at 17:29
This is slightly too long, and requires a bit too much ${\LaTeX}$, for a comment.
"a given displacement of the center of mass does not transform to the same displacement to each and every mass of the system" is not quite right. The maths you are displaying say the opposite: a small displacement $\delta x_i$ on molecule $i$ effects a displacement on $x_c$ which depends on $m_i$, which is right and perfectly physically understandable. Your statement in words would describe the matrix $\frac{\partial x_i}{\partial x_c}$, for which there is not enough information. Specifically, this matrix is the inverse of the matrix you're describing, and to get that inverse, you need the whole matrix, i.e. you need the rest of the internal coordinates in order to get the effect of $\delta x_c$ on an individual position $x_i$.
To be more specific, let me draw an analogy with two masses on a line, with coordinates $x_1$ and $x_2$. You want one of your final coordinates to be the centre of mass, $$\xi_1=x_c=\frac{m_1}{M}x_1+\frac{m_2}{M}x_2,$$ and this fixes one of the rows of the matrix $\frac{\partial \xi_j}{\partial x_i}$. The other row is still free: there is of course the canonical choice $$\xi_2=x_r=x_2-x_1$$ but other choices are of course possible such as $(x_2-x_1)^3$, for one, or even $x_1$ or $x_2$; note that the latter are perfectly valid as a change of coordinates, but they radically change the physical content of the transformation.
To make this clearer, consider the matrix you quote, with $x_r$ chosen as $\xi_2$: $$\frac{\partial\xi_j}{\partial x_i}=\begin{pmatrix}m_1/M &m_2/M\\ -1&1 \end{pmatrix}.$$ Then the determinant is 1 and the inverse is $$\frac{\partial x_i}{\partial \xi_j}= \begin{pmatrix}1&-m_2/M\\1&m_1/M \end{pmatrix}.$$ If you now focus on the first column, it tells you that $\frac{\partial x_i}{\partial x_c}=1$ for $i=1,2$ and it is this that translates into your statement
it does not give the same displacement for all masses, i.e. a given displacement of the center of mass does not transform to the same displacement to each and every mass of the system (as I'd expect)
You can see that it does: a given displacement of the centre of mass does transform to the same displacement for each mass of the system. However, this property depends not only on our having chosen $x_c$ as one coordinate, but also on our canonical choice of $x_r$ as the other. If you choose $\xi_2=x_2$, say, (or indeed anything that is not of the form $f(x_r)$) and repeat the exercise, you'll see that the matrix $\frac{\partial x_1}{\partial\xi_j}$ exhibits the same problem described in the quote.
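For concreteness, the two-mass example above can be checked numerically (a quick sketch; the mass values are arbitrary):

```python
import numpy as np

# Numeric check of the 2x2 Jacobians quoted above, for arbitrary masses.
m1, m2 = 1.0, 2.0
M = m1 + m2

# d(xi_j)/d(x_i): rows are (x_c, x_r), columns are (x_1, x_2).
J = np.array([[m1 / M, m2 / M],
              [-1.0,    1.0  ]])

Jinv = np.linalg.inv(J)
expected = np.array([[1.0, -m2 / M],
                     [1.0,  m1 / M]])

assert np.isclose(np.linalg.det(J), 1.0)
assert np.allclose(Jinv, expected)

# First column of the inverse: dx_i/dx_c = 1 for both masses, i.e. a
# displacement of the centre of mass moves both masses equally -- *given*
# that x_r is held as the other coordinate.
print(Jinv[:, 0])   # [1. 1.]
```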
Sorry for the long post that doesn't deal with rotation and is certainly not an answer to your question, but I do hope it helps with the problems you're having. Particularly, I would personally advise strongly against pursuing the geometrical centre as a quantity of interest - it simply ignores the physical contribution of the masses to the geometry and cannot therefore be right. You only need one global coordinate (per spatial dimension, of course) and this must be the centre of mass. The rest of the game, as you correctly infer, is in the choice of internal coordinates. As to where to go from there, I think it would be useful if you posted some examples of what internal coordinates you've been using in your test examples, so that we can get a better idea of how your maths look like.
I hadn't thought about how the presence of other internal coordinates would affect the inverse conversion matrix. I guess I thought that, if I remove all degrees of freedom except center of mass translation, the system will move as a rigid object, but now I believe that's not the case, rather internal degrees of freedom will change in undetermined ways. I've added a footnote to my answer. As for the physics of the problem, since I'm optimizing a geometry and the (potential) energy is independent of the masses, I don't think there's any problem with using the geometric center. – Jellby Jul 16 '12 at 8:53
And about examples of internal coordinates: I'm using in general redundant internal coordinates, that is, (a subset of all possible) distances between two points, angles between three points, and dihedrals formed by four points. In methanol, for instance, I'd have 3 CH distances, 1 CO, 1 OH, 3 HCH angles, 3 HCO, 1 COH, 3 HCOH dihedrals. That's a total of 15 internal coordinates, which is more than the 3*6-6=12 needed (that's why they are redundant), but they still don't contain the translational and rotational degrees of freedom. – Jellby Jul 16 '12 at 10:40
For methanol, I'd use the Euler angles of the CO bond. You should be careful because at equilibrium the two smaller moments of inertia are equal and, as Ron points out, this can bring trouble. – Emilio Pisanty Jul 16 '12 at 12:50
But I'm after a solution that would work for any other system, even linear. I guess I'll have to choose some specific atoms to define the orientation (or rather think of an algorithm to choose them automatically). – Jellby Jul 16 '12 at 13:09
Linear molecules are a problem because two of their moments of inertia are basically zero. (The electronic angular momentum is already accounted for and nuclear rotation only contributes to hyperfine structure.) Choosing the orientation of any given bond (or possibly two adjacent bonds) will most likely work. Having a universal algorithm for choosing which bond to use sounds unlikely. – Emilio Pisanty Jul 16 '12 at 13:50
This is what I've managed so far. In general, to convert to and from Cartesian coordinates, we need Wilson's $B$ matrix, a rectangular matrix whose elements are:
$$B_{ij}=\frac{\partial q_i}{\partial x_j}$$
where $q_i$ are the internal or collective coordinates, and $x_j$ are the Cartesian coordinates ($3N$ in total, $x$, $y$ and $z$ for each of the $N$ masses). This matrix can be used to convert derivatives and displacements. So from the vector of first derivatives (gradient) in Cartesian coordinates $g_x$, we can obtain the corresponding gradient in internal coordinates $g_q=(B^T)^+g_x$ (where $(B^T)^+$ is the Moore-Penrose pseudo inverse of the transpose of the $B$ matrix). A displacement in internal coordinates $\delta q$ can be obtained from a displacement of Cartesian coordinates: $\delta q=B\delta x$. For higher-order derivatives, the matrix of second derivatives $B'$ is needed:
$$B'_{ijk}=\frac{\partial^2 q_i}{\partial x_j\partial x_k}$$
OK, now for the case of purely internal coordinates (distances, angles, dihedral angles), the expressions for the above $B$ and $B'$ elements are given in J. Chem. Phys. 117, 9160. But this assumes that the energy (function to minimize) is invariant with respect to translations and rotations, so the internal coordinates can describe all the degrees of freedom needed. If there is an external potential, field, charges, etc. this is not the case, and I must add translation and rotation of the whole system to the set of internal coordinates.
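As a minimal sketch of this machinery (my own toy example, not taken from the cited paper): for a single bond-length coordinate of a diatomic, the one-row $B$ matrix and the gradient conversion $g_q=(B^T)^+g_x$ look like this:

```python
import numpy as np

# Sketch: Wilson B matrix for a single bond-length coordinate of a diatomic,
# and conversion of a Cartesian gradient to the internal gradient via the
# pseudoinverse, g_q = (B^T)^+ g_x.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.1, 0.3, -0.2])
r = np.linalg.norm(b - a)
u = (b - a) / r                      # unit vector along the bond

# B has one row (one internal coordinate) and 6 columns (x,y,z of each atom):
# dq/da = -u, dq/db = +u.
B = np.concatenate([-u, u]).reshape(1, 6)

# Cartesian gradient of a harmonic bond E = 0.5*k*(r - r0)^2 (chain rule);
# the force constant and equilibrium distance are arbitrary choices.
k, r0 = 2.0, 1.0
g_x = k * (r - r0) * np.concatenate([-u, u])

g_q = np.linalg.pinv(B.T) @ g_x      # internal-coordinate gradient
print(g_q)                           # equals dE/dr = k*(r - r0)
assert np.allclose(g_q, [k * (r - r0)])
```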
As I said, translation seemed easy, but it turned out not to be. At first I thought I would include the center-of-mass coordinates.
$$x_c=\sum\frac{m_ix_i}{M} \qquad M=\sum m_i$$
(now I use $x$ only for the $x$ coordinates, similar expressions would apply to $y_c$ and $z_c$) which gives:
$$\frac{\partial x_c}{\partial x_i} = \frac{m_i}{M} \qquad \frac{\partial x_c}{\partial y_i} = 0 \qquad \frac{\partial x_c}{\partial z_i} = 0$$
and similarly for $y_c$ and $z_c$. This is all fine, but there is a problem: it does not give the same displacement for all masses, i.e. a given displacement of the center of mass does not transform to the same displacement to each and every mass of the system (as I'd expect)[*]. In order to get this behaviour, I have to add not the center of mass but the geometrical center:
$$x_c=\sum\frac{x_i}{N} \qquad \frac{\partial x_c}{\partial x_i}=\frac{1}{N} \qquad \frac{\partial^2 x_c}{\partial x_i\partial x_i}=0$$
This ensures that when a displacement is converted to Cartesian coordinates ($\delta x=B^+\delta q$) all masses will be displaced in the same way.
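A small numerical sketch of this claim (assuming, as above, that the centre coordinate is the only collective coordinate retained):

```python
import numpy as np

# With the geometric centre as the only collective x-coordinate, the B row is
# (1/N, ..., 1/N), and delta_x = B^+ delta_q displaces every mass equally.
N = 5
B = np.full((1, N), 1.0 / N)         # dq/dx_i = 1/N for the geometric centre
dq = np.array([0.7])                 # requested displacement of the centre

dx = np.linalg.pinv(B) @ dq
print(dx)                            # every entry is 0.7
assert np.allclose(dx, 0.7)

# Contrast: with the centre-of-MASS row (m_i / M), B^+ spreads the
# displacement unevenly (proportional to m_i), which is the behaviour
# noted in the text -- although the centre of mass itself still moves
# by exactly the requested amount.
m = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Bm = (m / m.sum()).reshape(1, -1)
dxm = np.linalg.pinv(Bm) @ dq
print(dxm)                           # entries proportional to the masses
assert not np.allclose(dxm, dxm[0])
assert np.isclose((Bm @ dxm)[0], 0.7)
```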
Now for rotation. Since I've used the geometrical center above, I consider rotation around this same center (so all Cartesian coordinates should be assumed to have $x_c,y_c,z_c$ subtracted). For a given mass, rotation around the $z$ axis can be intuitively given as $\arctan\frac{y}{x}$, or:
$$\frac{\partial R_z}{\partial x_i} = \frac{-y_i}{x_i^2+y_i^2} \qquad \frac{\partial R_z}{\partial y_i} = \frac{x_i}{x_i^2+y_i^2} \qquad \frac{\partial R_z}{\partial z_i} = 0$$
and symmetrically (cyclic) for $R_y$ and $R_x$. The second derivatives:
$$\frac{\partial^2 R_z}{\partial x_i\partial x_i} = \frac{2x_iy_i}{(x_i^2+y_i^2)^2} \qquad \frac{\partial^2 R_z}{\partial x_i\partial y_i} = \frac{y_i^2-x_i^2}{(x_i^2+y_i^2)^2} \qquad \frac{\partial^2 R_z}{\partial x_i\partial z_i} = 0$$
or, by analogy with translation, I could divide them all by $N$.
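These formulas can at least be checked against finite differences of the $\arctan$ expression (a quick sketch for a single mass, with coordinates already taken relative to the centre; the point used is arbitrary):

```python
import numpy as np

# Finite-difference check of the R_z derivative formulas quoted above,
# using R_z ~ arctan2(y, x) for a single mass.
x, y = 0.8, -0.5
h = 1e-6

dRz_dx = (np.arctan2(y, x + h) - np.arctan2(y, x - h)) / (2 * h)
dRz_dy = (np.arctan2(y + h, x) - np.arctan2(y - h, x)) / (2 * h)

r2 = x**2 + y**2
assert np.isclose(dRz_dx, -y / r2, atol=1e-6)
assert np.isclose(dRz_dy,  x / r2, atol=1e-6)

# Second derivative d^2 R_z / dx^2, via a finite difference of the
# analytic first derivative:
d2 = (-y / ((x + h)**2 + y**2) + y / ((x - h)**2 + y**2)) / (2 * h)
assert np.isclose(d2, 2 * x * y / r2**2, atol=1e-4)
```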
Applying these expressions in some test cases seemed to work, but I'm not sure if it was just luck. Besides, I have no idea what the rotational coordinates themselves, $R_x$, $R_y$, $R_z$, would be, I just used their derivatives. For any single mass I can get the angle with the axes, but for the whole system of $N$ masses? Computing the principal axes of inertia and the Euler angles of those is a possibility, but is this related to the derivatives above?
Does any of the above make sense?
[*] After reading Emilio Pisanty's answer, I think this is not a problem. To follow his example with two masses (assuming $m_1=1$ and $m_2=2$), if I remove $\xi_2=x_2-x_1$ as a coordinate, the $\frac{\partial x_i}{\partial \xi_j}$ matrix becomes the column vector $(0.6, 1.2)$. In this case, a given displacement of the center of mass $\Delta\xi_1$ will transform in a Cartesian displacement $\Delta x_1=0.6\Delta\xi_1$ and $\Delta x_2=1.2\Delta\xi_1$. Now suppose initially $x_1=0$, $x_2=1$, initially the center of mass would be at $\xi_1=\frac{2}{3}$; if the desired displacement is $\Delta\xi_1=0.5$, this gives $\Delta x_1=0.3$ and $\Delta x_2=0.6$, so the final coordinates would be $x_1=0.3$ and $x_2=1.6$, and the final center of mass is indeed $\xi_1=\frac{2}{3}+0.5=\frac{7}{6}$. But the distance between $x_1$ and $x_2$ has changed, which should be considered OK, as I didn't include the distance in the coordinates, which is like saying I don't care what happens with it. So, I guess my problem was that the system is underdetermined, and this shouldn't happen if I have at least the needed $3N$ coordinates.
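The numbers in this footnote can be reproduced directly with a pseudoinverse (a quick check):

```python
import numpy as np

# Numeric check of the footnote's two-mass example (m1 = 1, m2 = 2):
# with xi_2 = x_2 - x_1 removed, B is the single row (m1/M, m2/M) and
# its pseudoinverse is the column (0.6, 1.2) quoted above.
m1, m2 = 1.0, 2.0
M = m1 + m2
B = np.array([[m1 / M, m2 / M]])

Bplus = np.linalg.pinv(B)
assert np.allclose(Bplus.ravel(), [0.6, 1.2])

# Walk through the numbers in the footnote:
x = np.array([0.0, 1.0])
dq = 0.5
x_new = x + (Bplus @ [dq]).ravel()        # -> (0.3, 1.6)
assert np.allclose(x_new, [0.3, 1.6])
assert np.isclose((B @ x_new)[0], 2/3 + 0.5)  # centre of mass moved by 0.5
```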
http://mathhelpforum.com/pre-calculus/22649-stuck-maths-home-work.html | # Thread:
1. ## Stuck on Maths Homework :(
Hi, can you help me please? What's the range?
2. Originally Posted by joe_07
Hi, can you help me please? What's the range?
the range is the set of output values (usually y-values) that a function actually takes. so, for example, the range of $y = x^2$ is $y \in [0, \infty)$ since we can only get non-negative outputs (y-values) over all inputs (x-values)
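here's a quick numerical sketch of that example, sampling many inputs and looking at the outputs we get:

```python
import numpy as np

# Sample y = x^2 over many inputs: all outputs are non-negative,
# and the smallest one is (numerically) 0, at x = 0.
x = np.linspace(-10, 10, 2001)
y = x**2

assert (y >= 0).all()
assert y.min() < 1e-12
print(y.min(), y.max())   # smallest output ~0, largest 100 on this sample
```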
http://math.stackexchange.com/questions/106191/probability-selection-from-a-collection-of-100?answertab=active | # probability (selection from a collection of 100)
Suppose a microprocessor is chosen at random from a collection of 100. Assume 20 are Intels and 80 are AMDs. Also assume that 10 of the Intels are 2.0 GHz. What is the probability that the selected microprocessor is 2.0 GHz given that it is an Intel?
## 2 Answers
Hint: What proportion of the Intels are 2.0 GHz?
Alternatively, you can use the formula $$P(A|B) = \frac{P(A \cap B)}{P(B)}.$$ Here, the event $B$ is that the microprocessor is an Intel, and the event $A$ is that it is a $2.0$ GHz processor. To find the probability of $A$ given $B$, you must determine the probability that a randomly chosen processor is both Intel and $2.0$ GHz (what proportion of the processors possess both these properties?) and divide that by the probability that the processor is Intel.
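A tiny sketch plugging the question's counts into this formula (exact fractions, so nothing is rounded):

```python
from fractions import Fraction

# The counts from the question, in P(A|B) = P(A and B) / P(B).
total = 100
p_intel = Fraction(20, total)            # P(B): 20 Intels out of 100
p_intel_and_2ghz = Fraction(10, total)   # P(A and B): the 10 Intels at 2.0 GHz

p_2ghz_given_intel = p_intel_and_2ghz / p_intel
print(p_2ghz_given_intel)   # 1/2
assert p_2ghz_given_intel == Fraction(1, 2)
```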
There are twenty Intel chips and ten of them are 2 GHz. Given that the chosen chip is an Intel, the choice is uniform among those twenty, so there is a 1/2 probability of choosing a 2 GHz chip.
http://mathhelpforum.com/calculus/205284-minimize-function-subject-constraint.html | # Thread:
1. ## minimize function subject to constraint
Could you help me minimize the function F = x^2+y^2 subject to the constraints:
1-x<0
2-0.5x-y<=0
x+y-4<0
And is there a difference in solving the problem if we write <= instead of <?
2. ## Re: minimize function subject to constraint
Please, can anybody help me solve this problem? Any information would help.
3. ## Re: minimize function subject to constraint
Sketch the area associated with the constraints.
1. $1-x<0$ (or $x>1$), gives you an area to the right of the vertical $x=1.$
Do the same for the other two constraints and you should come up with a triangular region.
$x^{2}+y^{2}$ is the square of the distance of a point from the origin. Its minimum value will therefore be the square of the distance from the origin to the point in the region which is closest to the origin.
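A brute-force numerical sketch of this geometric argument (my own check, sampling the non-strict region on a fine grid):

```python
import numpy as np

# Sample the feasible region (with non-strict inequalities) on a grid and
# find the point closest to the origin.
xs = np.arange(0.0, 5.0, 0.005)
ys = np.arange(-1.0, 5.0, 0.005)
X, Y = np.meshgrid(xs, ys)

feasible = (X >= 1) & (2 - 0.5 * X - Y <= 0) & (X + Y <= 4)
F = np.where(feasible, X**2 + Y**2, np.inf)

i, j = np.unravel_index(np.argmin(F), F.shape)
print(X[i, j], Y[i, j], F[i, j])   # approximately 1.0, 1.5, 3.25
assert abs(F[i, j] - 3.25) < 0.05
assert abs(X[i, j] - 1.0) < 0.02 and abs(Y[i, j] - 1.5) < 0.02
```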
4. ## Re: minimize function subject to constraint
I drew the region as you mentioned and I get the triangle shown in the attachment. If we draw a line from the origin to the point of the region nearest to it, we get that the function is minimized at x=1, y=1.5... but
note that this is true only if x>=1. However, here in the question we have x>1, not x>=1, so x will never equal 1.
Is there a difference in the solution between x>1 and x>=1, or are they the same?
[Attached thumbnail: sketch of the triangular feasible region]
5. ## Re: minimize function subject to constraint
With x>1 as well as with x>=1, min(x^2+y^2) converges to 3.25, attained on the circle through {x,y}={1, 1.5}. (With the strict inequality, 3.25 is an infimum that is approached but never attained.) So the detailed shape of the region is irrelevant here: only its closest point to the origin matters for minimizing the function.
6. ## Re: minimize function subject to constraint
Thanks to all.
This means the point (1,1.5) is the right answer.
http://crypto.stackexchange.com/questions/1692/how-much-can-we-compress-rsa-public-keys/1695 | # How much can we compress RSA public keys?
I am wondering to what degree we can define an RSA variant, with a security argument that it is as safe as regular RSA with a given modulus size $m$ (e.g. $m=2048$), in which the public key has a compact representation of $k \ll m$ bits.
We can fix the public exponent to our favorite customary value, e.g. $e=2^{16}+1$ or $e=3$, thus need to store only the public modulus $n$. We need not store the leftmost bit of $n$, which is always set by definition; nor the rightmost bit, which is always set since $n$ is odd. With a little effort, we could save (very) few more bits noticing $n$ has no small divisors, but that will still be $k \sim m$ bits.
We can do better by forcing the $\lfloor m/2-log_2(m)-2\rfloor$ high bits of $n$ to some arbitrary constant such as $\lfloor\pi \cdot 2^{\lfloor m/2-log_2(m)-4\rfloor}\rfloor$. Observe that we can chose the smallest prime factor $p$ of $n$ just as we would do in regular RSA, then find the maximum integer interval $[q0,q1]$ such that any $q$ in that interval cause $n=p\cdot q$ to have the right high bits, then pick a random prime $q$ in that interval (most often there will be at least one, if not we try another $p$). Some of the security argument is that
• generating an RSA key $(p,q')$ using a regular method, with random huge primes in some appropriate range and no other criteria beside the number of bits in $n'=p\cdot q'$, and $p<q$;
• then deciding the high bits of $n$ from those in $n'$;
• then finding $[q0,q1]$, generating $q$ as a random prime in that interval, and setting $n=p\cdot q$;
demonstrably gives the same distribution of $(p,q)$ as said regular generation method, hence is as secure; then we remark that the high bits of $n$ are random (with some distribution not too far from uniform), and public, thus fixing it can't much help an attack (I think this can be made rigorous).
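To make the interval construction concrete, here is a toy-scale sketch (my own illustration: a 64-bit modulus with 24 forced high bits and a hand-rolled Miller-Rabin test; real parameters would of course be vastly larger, and this is not meant to be cryptographically sound):

```python
import random

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin for n < 3.3e24 (known sufficient base set)."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits: int) -> int:
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(p):
            return p

def modulus_with_fixed_high_bits(m: int, high: int, high_bits: int):
    """Toy version of the construction in the text: find n = p*q whose top
    `high_bits` bits equal `high`, by picking q as a prime in [q0, q1]."""
    t = m - high_bits                      # number of free low bits
    lo, hi = high << t, ((high + 1) << t) - 1
    while True:
        p = random_prime(m // 2)
        q0 = -(-lo // p)                   # ceil(lo / p)
        q1 = hi // p
        primes = [q for q in range(q0, q1 + 1) if is_prime(q)]
        if primes:                         # most often non-empty; else retry p
            return p, random.choice(primes)

m, high_bits = 64, 24
high = 0xABCDEF                            # arbitrary 24-bit pattern to force
p, q = modulus_with_fixed_high_bits(m, high, high_bits)
n = p * q
assert (n >> (m - high_bits)) == high      # the top 24 bits came out as forced
print(hex(n))                              # starts with 0xabcdef...
```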
This is now $k \sim m/2+log_2(m)$ bits to express the public key. We can save a few more bits, each one at worse doubles the amount of work to generate the private key (we can repeat the generation process outlined above until we find a key with these bits equal to some public arbitrary constant; or equal to bits from a hash of the other bits if we want a tighter assurance that the scheme is not weakened).
Can we do better, and what's the practical limit?
Update: In RSA Moduli with a Predetermined Portion: Techniques and Applications, Marc Joye discuses this, and reaches $k \sim m/3$ (without claim of optimality). I'm worried that I do not see an argument that the manner in which $(p,q)$ are selected in Algorithm 3 does not weaken $n$ against a dedicated factoring algorithm.
Is the goal to reduce the number of bits that need to be made public in order to publish the public key for a given RSA-like scheme or is the goal to reduce the number of bits that must be securely stored to allow one to regenerate one's own public key from scratch? – ByteCoin Jan 19 '12 at 3:12
@ByteCoin: I want to reduce the number of bits that need to be made public in order to use the public key. In some memory-tight contexts (e.g. Smart Cards), this is quite desirable. – fgrieu Jan 19 '12 at 9:28
This would only save space while storing the public key. To use the public key to encrypt or verify signatures you'd probably be working in the Montgomery representation in which case the numbers are all the same size as the modulus and can't be compressed. If you're decrypting or signing on the smart card then you're probably exploiting the CRT and the Montgomery representation and using larger exponents so the compression is useless again. This is all rather academic as nobody would use RSA in a memory constrained system anyway! Or please enlighten me if I'm mistaken. – ByteCoin Jan 19 '12 at 17:35
@ByteCoin: I want to save space when storing issuer/authority public keys in a Smart Card. Pushing things to the limit, it could be this modern device with just 2kB of EEPROM and twice as much RAM. Yes, one can do RSA or Rabin signature verification securely and plausibly fast on such a device with no RSA hardware. A secondary objective is to shorten certificates (embedding ID and Public Key using ISO 9796-2 signature with message recovery), keeping them within a 256 bytes block limit. – fgrieu Jan 19 '12 at 18:48
## 2 Answers
Daniel J. Bernstein mentioned your way of compressing RSA public keys in his paper "A secure public-key signature system with extremely fast verification". The naive way you outline roughly doubles the work for each extra bit. If there were a better method which did not run very slowly then it could be repurposed as a factoring algorithm. So if it were possible to decompress arbitrary 104 to 128-bit strings into secure 2048-bit RSA public keys faster than factoring as David Schwartz suggests then that would be quite remarkable. Every time you ran the algorithm you'd effectively be finding the approximately equal sized factors of some 2048bit number of which you'd specified a lot of the bits. Although there's no theoretical reason I can think of why this should be impossible, nor would it render RSA necessarily insecure, it does strike me as rather unlikely.
As you hint, a further (impractical) way to reduce the storage of the unspecified low bits of the modulus would be to store their residues modulo various small primes and reconstruct using the CRT. In theory this should save between 3 to 4 bits for a 2048-bit modulus. You save the space because you can rely on the residues not being zero.
Kudos for the Bernstein reference. It traces the idea of fixing the high bits in $m$ to Guillou and Quisquater, in a 1990 paper on ISO 9796(-1); that must also be where I learned of it. However that 1990 reference does not randomize the fixed bits, which (later) turned out to be a bad idea because it might help NFS. – fgrieu Jan 19 '12 at 8:45
I fail to follow "If there were a better method which did not run very slowly then it could be re-purposed as a factoring algorithm", and think it is disproved by the much improved bound in the reference now in the updated question. The best I can get is that if we can reach $k$ bits with a generation cost $X$, then we have a factoring algorithm of cost $X \cdot 2^k$. – fgrieu Jan 20 '12 at 16:56
Actually, it appears that we can do a bit better by using an unbalanced RSA key; that is, one composed of two primes of different sizes.
For example, suppose we have a 512 bit p and a 1536 bit q; to generate a key, we can select a random 512 bit prime p, and then for q, we search for a prime in the range $(C/p, (C+2^k)/p)$ (where $C$ is our 2048 bit constant which includes the bits we want to force, and $k$ is the number of bits we're willing to vary). We expect about $2^k/p \times (1 / \log( C/p )) \approx 2^{k-512} / (\log(2) (2048-512))$ primes; if $k \approx 522$, then there would be 1 expected prime in the range. This would allow us to express a 2048 bit RSA key with only 522 bits.
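A back-of-the-envelope check of the expected-prime-count estimate above:

```python
import math

# Interval length 2^k / p with p ~ 2^512; prime density ~ 1 / ln(C/p)
# with C/p ~ 2^(2048 - 512). For k = 522 this gives roughly one
# expected prime in the search range, as stated.
k = 522
expected = 2 ** (k - 512) / (math.log(2) * (2048 - 512))
print(expected)   # ~0.96
assert 0.5 < expected < 2.0
```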
Now, the obvious question is: what does this do to the security. I'm pretty sure that this algorithm generates an RSA modulus which is no easier to factor than a random modulus with the same sized factors, but that doesn't really answer the question. Now, we know that the time taken by NFS doesn't vary based on the size of the factors, but ECM does speed up if there are smaller factors. So, how small can we make $p$ before ECM becomes faster than NFS (and thus lowering our security level)? I don't know the answer to that one (or even if my example of a 512 bit factor would be already over the limit; it wouldn't surprise me if it was).
I believe this trick can be used to shrink RSA public keys to some extent, but I don't know how far you can take it. On the other hand, I do hope this is an academic exercise; if you're really interested in small public keys, it'd certainly be better to use an elliptic curve algorithm (which has small public keys without seeing how close we can get to the security edge).
I don't follow this. In what interval are we expecting to find just one prime? For 1536 bit numbers, roughly .1% are primes. – ByteCoin Jan 19 '12 at 3:00
@ByteCoin: yes, roughly 0.1% of 1536 bit numbers are prime, hence if we make our interval of 1536 bit numbers be 1000 integers long, we have a decent probability that at least one of them actually will be prime. The size of that interval is $(C + 2^k)/p - C/p = 2^k/p$ and so $2^k \approx p * 1000$; that is, the number of bits that we need to publish (because they can't be assumed) is about 10 more than the size of the smaller prime – poncho Jan 19 '12 at 3:58
As you point out, ECM factoring is made greatly more efficient. I thus see little hope to have a general, positive security argument that the scheme is as secure as balanced RSA. – fgrieu Jan 19 '12 at 6:55
http://mathhelpforum.com/differential-geometry/142890-proof-working-derivatives-possibly-mean-value-theorem.html | # Thread:
1. ## proof: working with derivatives, possibly mean value theorem?
Let f:R-->R be differentiable for all x. Suppose that f'(x)>1 for all x. Show that the graph of y=f(x) can intersect the graph of y=x no more than once.
I have no idea where to even begin... any ideas?
2. Originally Posted by sfspitfire23
Let f:R-->R be differentiable for all x. Suppose that f'(x)>1 for all x. Show that the graph of y=f(x) can intersect the graph of y=x no more than once.
I have no idea where to even begin... any ideas?
First you need to realize what the question is asking.
If $\Gamma_f$ (the graph) intersects $\Delta=\left\{(x,x):x\in\mathbb{R}\right\}$ then that means that $(x,f(x))=(x,x)\implies f(x)=x$. So, why can't we have that $f(x)=x,f(x')=x'$ for $x\ne x'$?
3. Isn't x' just going to be zero, though, seeing as x is any real number?
4. Originally Posted by sfspitfire23
Isn't x' just going to be zero, though, seeing as x is any real number?
I have no idea what you mean. This is much, much simpler than you're making it. The question is asking why a function whose derivative is strictly greater than one can't have two fixed points. To see this suppose that it did, call them $x_1, x_2$ with $x_1<x_2$. Then, what does the MVT say about the derivative of $f$ on $[x_1,x_2]$?
5. Oh I see now. So if we suppose towards contradiction that f has two fixed points: x_1 and x_2 with x_1< x_2, then we would have f(x_1)= x_1 and f(x_2)= x_2. Then, by the Mean Value Theorem, we would have for some c in the interval (x_1, x_2):
f'(c) = (f(x_2) - f(x_1))/(x_2 - x_1) = (x_2 - x_1)/(x_2 - x_1) = 0
which contradicts the assumption that f'(x)>1 for all x...?
6. Originally Posted by sfspitfire23
Oh I see now. So if we suppose towards contradiction that f has two fixed points: x_1 and x_2 with x_1< x_2, then we would have f(x_1)= x_1 and f(x_2)= x_2. Then, by the Mean Value Theorem, we would have for some c in the interval (x_1, x_2):
f'(c) = (f(x_2) - f(x_1))/(x_2 - x_1) = (x_2 - x_1)/(x_2 - x_1) = $\color{red}1$
which contradicts the assumption that f'(x)>1 for all x...?
Yes, except I'm sure you meant what I changed above
7. In more general terms, given $f'(x)\neq 1$, you can have at most one fixed point. But if you also have $|f'(x)|\leq A<1$, then you know for sure that a fixed point exists, and hence it is unique. To show this, take the sequence defined by $x_{n+1}=f(x_n)$ and show it is convergent to the unique fixed point.
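A numerical illustration of that last remark, using the arbitrary contraction $f(x)=\tfrac{1}{2}\cos x$, which satisfies $|f'(x)|\le 1/2<1$:

```python
import math

# The iteration x_{n+1} = f(x_n) for a contraction converges to the
# unique fixed point, regardless of the starting value.
f = lambda x: 0.5 * math.cos(x)

x = 0.0
for _ in range(100):
    x = f(x)

assert abs(f(x) - x) < 1e-12   # x is (numerically) a fixed point
print(x)                       # ~0.4502
```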
http://math.stackexchange.com/questions/56067/how-to-interpret-the-phrase-transforms-under-the-irreducible-representation/56278 | How to interpret the phrase “transforms under the irreducible representation”?
I'm reading Robert Gilmore's "Lie Groups, Physics, and Geometry," and trying to understand his brief presentation of Galois theory. I think I get the gist of the method, but would be grateful for help understanding some details. Let me first quote a bit from page 10, highlighting the terminology I don't understand, and after the quotes, I will focus on specifics. Please bear with me while I try to clarify my questions:
The general quadratic equation has the form $$(z-r_1)(z-r_2)=z^2-I_1 z+I_2=0$$ $$\begin{equation}\tag{1.17}I_1=r_1 + r_2\end{equation}$$ $$I_2=r_1 r_2$$ The Galois group is $S_2$ with subgroup chain shown in Fig. 1.3. [elided]
The character table for the commutative group $S_2$ is $$\begin{equation}\tag{1.18}\begin{array}{r|crc} \text{Irreducible Rep's} & I & (12) & \text{Basis Functions} \\ \hline \Gamma^1 & 1 & 1 & u_1 = r_1 + r_2 \\ \Gamma^2 & 1 & -1 & u_2 = r_1 - r_2 \end{array}\end{equation}$$
Linear combinations of the roots that transform under the one-dimensional irreducible representations $\Gamma^1, \Gamma^2$ are $$\begin{equation}\tag{1.19}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix}=\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} r_1 \\ r_2 \end{bmatrix}=\begin{bmatrix} r_1+r_2 \\ r_1-r_2 \end{bmatrix}\end{equation}$$
The equations (1.17) and (1.19) are trivial to understand. What's bugging me is (1.18) and how the group representations fit into the scheme. Please bear with me while I clarify my questions.
I understand that $\Gamma^1$ and $\Gamma^2$ are one-dimensional (scalar) representations of the group $S_2 = \{I, (12)\}$. That means that the group elements of $S_2$ map to the elements of $\Gamma^1$ in such a way that (group-theoretic) products of elements in $S_2$ map to (matrix) products of the corresponding elements in $\Gamma^1$; ditto $\Gamma^2$.
I see that we want to apply the elements of $S_2$ to the sequence of roots $(r_1, r_2)$, permuting them, and then examine the corresponding actions on the linear combinations $r_1+r_2$ and $r_1-r_2$.
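Concretely, here is what I believe this amounts to, checked with arbitrary numeric values for the roots (a sketch; please correct me if I'm misreading the table):

```python
# Swapping r1 and r2 multiplies u1 by the character +1 of Gamma^1
# and u2 by the character -1 of Gamma^2.
r1, r2 = 2.0, 5.0

u1 = lambda a, b: a + b
u2 = lambda a, b: a - b

# Identity element I: characters (+1, +1).
assert u1(r1, r2) == +1 * u1(r1, r2)
assert u2(r1, r2) == +1 * u2(r1, r2)

# Transposition (12), i.e. (r1, r2) -> (r2, r1): characters (+1, -1).
assert u1(r2, r1) == +1 * u1(r1, r2)
assert u2(r2, r1) == -1 * u2(r1, r2)
print("u1 is invariant; u2 changes sign under (12)")
```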
Now arises the first question: how did we find these linear combinations? It looks like magic that the $\Gamma$'s, castrated one-dimensional group representations that they are, just copied into a square matrix and applied to a column vector of the roots, deliver up exactly the linear combinations that we need to investigate, the ones that will generate solutions to the quadratic equation. This gets even more spooky as we graduate up the line to the cubic and quartic, where the character tables feed us, similarly, exactly the linear combinations we need to solve the general equations. Of course, when we get to quintics, the game is over, and that's the whole point of (at least this elementary corner of) Galois theory.
My confusion deepens. Gilmore writes, on page 9, that these linear combinations are "basis functions" for the irreducible representations, the $\Gamma$s. I usually conceptualize archetypes for "basis functions" as sine waves for Fourier transforms, or Dirac deltas, or other orthonormal polynomials, and such. I'm scratching my head wondering what on Earth Gilmore could mean. The irreducible representations are elements of vector spaces, to be sure, and there should be various bases for these vector spaces. But how are the linear combinations basis functions for the $\Gamma$s? Functions of what, to what? What's the domain (the set of roots, $\{r_1, r_2\}$?)? What's the range? The linear combinations of roots don't map to the set of roots, and I can't see the sense in which they might be bases for the vector space in which the one-dimensional irreducible representations $\Gamma^1$ and $\Gamma^2$ live.
I now sink completely when Gilmore says that the "linear combinations of roots transform under the one-dimensional irreducible representations...". Transform? To what? "Transform" is a transitive verb, and it needs a subject and an object, doesn't it? These one-dimensional irreducible representations are just scalar numbers, so using them to transform a linear combination of roots can only be to multiply the linear combinations by the scalars? Or did we magically back out from the representations to the group, $S_2$, where I can understand how to apply the group elements, $I$ and $(12)$, to the sequence of roots, permuting them.
I can see the machinery at work, it's not difficult at all, but I can't connect what Gilmore writes with what I think I know. I am sure he uses abbreviated terminology, to save space, but I am not connecting.
I apologize in advance if I haven't clarified my questions sufficiently or if it's too long, but I hoped that someone out there with mastery of the subject might clear the fog for me.
A brief Google search suggests to me that the author is using chemists' terminology for representation theory. The so-called basis functions appear to be the character functions for each representation. These form the basis of an auxiliary vector space (called the space of class functions). But it's still a mystery to me what exactly the author means for things to transform under a representation... – Zhen Lin Aug 7 '11 at 9:57
If you actually want to learn this material, I recommend getting a good thorough book on the representation theory of finite groups (e.g. James-Liebeck) before tackling the Lie theory. – Qiaochu Yuan Aug 7 '11 at 16:05
thanks. Just ordered James-Liebeck (the only one on amazon with 5 stars under a search for "representation theory of finite groups" -- though Fulton-Harris gets four and a half). – Reb.Cabin Aug 7 '11 at 20:37
4 Answers
Let's first consider the general case of an $n$th-degree polynomial. The elements $\sigma \in S_n$ act by permutation on the roots $z_1,\dots,z_n$ by $z_i \mapsto z_{\sigma(i)}$. The same group also acts on functions of the roots: $$(\sigma f)(z_1,\dots,z_n)=f(z_{\sigma(1)},\dots,z_{\sigma(n)})$$ Here it's best to think of the roots $z_i$ as indeterminates. For example, the symmetric functions $I_k$ (e.g. $I_1=z_1+\dots+z_n$) are invariant under the $S_n$ action.
Now we also have one-dimensional representations $\Gamma$ (a.k.a. characters) of the group $S_n$, which are just group homomorphisms from $S_n$ to $\mathbb C^\times$. Let's consider functions $f$ on the roots such that $$\sigma f =\Gamma(\sigma)f$$ for all $\sigma \in S_n$. This is what Gilmore means by functions that "transform under the one-dimensional representation." Here $f$ will actually be a linear combination of roots.
For example, take $n=2$, and let $\Gamma^2:S_2 \rightarrow \mathbb C$ be defined by $I \mapsto 1$, $(12) \mapsto -1$. If $f(r_1,r_2)=r_1-r_2$, then $$If=f=\Gamma^2(I)f$$ $$(12)f=-f=\Gamma^2((12))f$$ since: $$(12)f(r_1,r_2)=f(r_2,r_1)=r_2-r_1=-f(r_1,r_2)$$
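A quick numerical spot-check of this transformation rule (a Python sketch; the sample values standing in for the roots are arbitrary, not tied to any particular quadratic):

```python
from itertools import permutations

# A permutation sigma acts on a function f of the roots by
# (sigma f)(r_1, r_2) = f(r_sigma(1), r_sigma(2)).
def acts_by_character(f, chi, samples):
    """Check that sigma.f == chi(sigma) * f on some sample root values."""
    for sigma in permutations(range(2)):   # (0, 1) is I, (1, 0) is (12)
        for roots in samples:
            permuted = tuple(roots[i] for i in sigma)
            if abs(f(*permuted) - chi[sigma] * f(*roots)) > 1e-12:
                return False
    return True

trivial = {(0, 1): 1, (1, 0): 1}    # Gamma^1
sign    = {(0, 1): 1, (1, 0): -1}   # Gamma^2
samples = [(2.0, 5.0), (-1.0, 3.5)]

u1 = lambda r1, r2: r1 + r2
u2 = lambda r1, r2: r1 - r2
print(acts_by_character(u1, trivial, samples))  # True
print(acts_by_character(u2, sign, samples))     # True
```

So $u_1$ transforms by $\Gamma^1$ and $u_2$ by $\Gamma^2$, exactly as in the character table (1.18).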
I hope this clarifies at least what he means by "functions that transform under the representation." Anyway, it seems interesting to approach Galois theory from the point of view of representation theory, so I wonder if anyone is aware of a clearer treatment in the same vein.
Another resource for Lie and representation theory you might want to look at are Brian Hall's notes http://arxiv.org/abs/math-ph/0005032.
Thanks. Very helpful! I will read Brian Hall's notes. – Reb.Cabin Aug 7 '11 at 16:50
There is a lot of background here, and again, the best way to learn it is to find a good textbook on the representation theory of finite groups. But here's a brief sketch.
Any finite group $G$ has a finite list of irreducible representations. Maschke's theorem guarantees that any (say, complex, finite-dimensional) representation decomposes into a direct sum of irreducible representations. More concretely, this says that given any representation, we can simultaneously block-diagonalize all of the elements of $G$ so that the blocks correspond to irreducible representations.
$S_2$ has a $2$-dimensional representation $V$ given by its action as permutation matrices. This representation decomposes into a direct sum $V_0 \oplus V_1$ of the trivial representation and the nontrivial (sign) representation. We can say that the elements of $V_0$ transform under the trivial representation, and the elements of $V_1$ transform under the sign representation. (This seems to be physics / chemistry terminology. In mathematics we would just say that $V_0$ is, or perhaps is isomorphic to, the trivial representation, and so forth.)
Okay, so how do we find this decomposition? Write $V = \text{span}(a, b)$ where the nontrivial element $g$ of $S_2$ exchanges $a$ and $b$. Then $g$ fixes the vector $a + b$, so $a + b$ spans the trivial subrepresentation. Next, we want to find a vector that $g$ multiplies by $-1$, and such a vector is given by $a - b$. So $V$ decomposes into a direct sum $\text{span}(a + b) \oplus \text{span}(a - b)$ where $S_2$ acts by the trivial representation on the first summand and by the sign representation on the second summand.
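This decomposition can be checked numerically. A minimal sketch in plain Python: apply the averaging projectors $\tfrac{1}{2}(1+g)$ and $\tfrac{1}{2}(1-g)$ to a basis vector, where $g$ is the swap.

```python
# rho(I) is the identity and rho(g) the swap, acting on span(a, b) = R^2.
def mat_add(A, B): return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
def mat_scale(c, A): return [[c * A[i][j] for j in range(2)] for i in range(2)]
def mat_vec(A, v): return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

I = [[1, 0], [0, 1]]
swap = [[0, 1], [1, 0]]

P_triv = mat_scale(0.5, mat_add(I, swap))                  # projects onto span(a + b)
P_sign = mat_scale(0.5, mat_add(I, mat_scale(-1, swap)))   # projects onto span(a - b)

a = [1, 0]  # the basis vector "a"
print(mat_vec(P_triv, a))  # [0.5, 0.5], proportional to a + b
print(mat_vec(P_sign, a))  # [0.5, -0.5], proportional to a - b
```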
There is a more general projection formula here, but to learn what it is, again, you really should find a good textbook on the representation theory of finite groups.
Thank you. That explains how we find the linear combinations given the one-dimensional representations. – Reb.Cabin Aug 7 '11 at 16:50
The key idea of solution by radicals is intimately tied up with representation theory, although it wasn't originally seen in that light. Suppose that we have an extension $K$ of a field $F$ with $[K:F] = n$, and that $K$ is the splitting field of some irreducible polynomial $p(x) \in F[x]$ (with no repeated root). Suppose also (for convenience) that $K$ contains a primitive $n$-th root of unity $\omega$, and that $G$, the Galois group of $K$ over $F$ is known to be cyclic, generated by $\alpha$, say. Let $r_0$ be one root of $p(x)$. Since $|G| = [K:F],$ the roots of $p(x)$ are the elements of $\{\alpha^{j}(r_0):0 \leq j \leq n-1 \}$. We let $r_j = \alpha^j(r_0 )$ for $0 \leq j \leq n-1$. Again for convenience, we suppose that these roots are linearly independent over $F$. What happens in Galois theory is that we can replace $r_0$ with an element $s_0 \in K$ such that $\alpha^j(s_0) = \omega^j s_0$ for each $j$. Then $s_0$ has minimum polynomial $x^n - s_0^n$ over $F$. In other words, $K$ is shown to be the radical extension $F[s_0]$ of $F$. The quadratic case of the question is an easy example of this. What has this to do with representation theory?
The roots of $p(x)$ form an $F$-basis for $K$, so every element of $K$ may be expressed in the form $\sum_{j=0}^{n-1} f_{j}\alpha^{j}(r_0),$ where each $f_j \in F$. The element $r_0$ is almost irrelevant here: it is the linear combination $\sum_{j=0}^{n-1} f_j \alpha^{j}$ which is important, and we can think of the element of $K$ as obtained by applying this linear combination of automorphisms to $r_0$. I suppose this is why the writer of the book talked of "transforming" the roots.
The important object now is the group algebra $F\langle \alpha \rangle$, which consists of all $F$-linear combinations of $\{\alpha^j : 0 \leq j \leq n-1 \}.$ This is isomorphic as an $F$-vector space to $K$, but it has additional structure. The group algebra inherits an obvious multiplication from $\langle \alpha \rangle$, extending it by $F$-linearity. Now comes the representation theory, which is relatively straightforward in the case of cyclic groups. Our assumptions guarantee that the integer $n$ is invertible in $F$ (that is, the characteristic of $F$ is either $0$ or coprime to $n$). There are $n$ different group homomorphisms from $\langle \alpha \rangle$ to $F$, say $\{ \lambda_i: 0 \leq i \leq n-1 \}$, each uniquely specified by $\lambda_i(\alpha) = \omega^{i}$. These may be extended by $F$-linearity to $F$-algebra homomorphisms (that is, ring homomorphisms which are also $F$-linear) from $F \langle \alpha \rangle$ to $F$.
The key idea is to find a new basis $\{e_{i}: 0 \leq i \leq n-1 \}$ for $F\langle \alpha \rangle$ such that $\lambda_{j}(e_i) = \delta_{ij}$. If we can do this, we must have $\alpha^{j} = \sum_{i=0}^{n-1} \omega^{ij} e_i,$ because the right hand combination is the unique $F$-combination of $e_i$'s which has the correct evaluation under each $\lambda_j$. In this particular case, and in most treatments of Galois theory in texts, this can be done by inverting a Vandermonde matrix. But it can also be done by orthogonality relations for group characters (for which, as Qiaochu suggests, you really need to consult a representation theory text).
The upshot is that we may set $e_i = \frac{1}{n} \sum_{j=0}^{n-1} \lambda_{i}(\alpha^{-j}) \alpha^{j}$ for each $i$. It is an instructive exercise with roots of unity to confirm that $\lambda_{k}(e_i) = \delta_{ik}$, but note that the case $i = k$ is clear. By the way, this "orthogonality" property forces the $e_{i}$ to be linearly independent, hence a basis for $F\langle \alpha \rangle$. Notice that $\lambda_j(\alpha.e_i) = \lambda_j(\alpha ) \delta_{ij},$ so that (by comparison of the evaluations of each $\lambda_j$), we have $\alpha.e_i = \lambda_i(\alpha)e_i = \omega^i.e_i$ for each $i$.
Now the pieces are in place to complete the Galois theory: we set $s_0 = e_1(r_0).$ This makes sense, because $e_1$ is an $F$-linear combination of field automorphisms (of $K$), so may be applied to $r_0$ to produce a new element of $K$. Notice that $\alpha(s_0) = (\alpha.e_1)(r_0) = \omega.e_1(r_0) = \omega.s_0 .$ Hence $\alpha^j(s_0) = \omega^{j}.s_0$ for $0 \leq j \leq n-1$, which was what we wanted. (We should note that $e_1(r_0) \neq 0$, since $ne_1(r_0) = \sum_{j=0}^{n-1}\omega^{-j} r_j$ (as each $r_j = \alpha^j(r_0)),$ while $\{r_0,\ldots ,r_{n-1} \}$ is linearly independent over $F$).
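Here is a numerical sanity check of the resolvent construction for a cyclic group of order $3$. This is only a sketch: the complex values standing in for the roots are arbitrary placeholders, and $\alpha$ is modeled as cycling them one step.

```python
import cmath

n = 3
w = cmath.exp(2j * cmath.pi / n)             # primitive n-th root of unity

r = [1.7 + 0.3j, -2.1 + 1.1j, 0.4 - 0.9j]    # stand-ins for r_j = alpha^j(r_0)
alpha = lambda roots: roots[1:] + roots[:1]  # alpha sends r_j to r_{j+1}

def resolvent(roots, i):
    """s_i = e_i(r_0) = (1/n) * sum_j w^{-ij} r_j, the Lagrange resolvent."""
    return sum(w ** (-i * j) * roots[j] for j in range(n)) / n

s1 = resolvent(r, 1)
s1_after = resolvent(alpha(r), 1)            # the effect of applying alpha to s_1
print(abs(s1_after - w * s1) < 1e-12)        # True: alpha(s_1) = w * s_1
```

The same check with $i=0$ confirms that $s_0$ is fixed by $\alpha$, as it must be: it is the average of the roots.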
Excellent road map! Thank you. – Reb.Cabin Aug 8 '11 at 13:46
Because the many different types of applications of repn theory are themselves important, and the comparisons among them shed light on aspects otherwise possibly overlooked... perhaps another "answer" here won't hurt: I think there is yet-another reasonable segue from more primitive notions to "repn theory"...
(Also, it is not clear to me that anyone needs to read a whole book about repns of finite groups in characteristic $0$, especially over $\mathbb C$. Many of these suffer from "completeness", so that critical ideas are swamped by compulsion to include all-possible...)
When we have a finite group $G$ with $n$ elements acting linearly on a vectorspace $V$ over a field $k$ of characteristic $0$ (so we can divide by positive integers such as $n$), it is easy to form $G$-invariant elements, by averaging: given $v\in V$, the element $\frac{1}{n}\sum_{g\in G} g\cdot v$ is easily checked to be invariant, by changing variables in the sum. Further, if $v$ was already $G$-invariant, this map just returns $v$. That is, this averaging is some kind of projector to the "trivial $G$-isotype" in $V$, that is, things un-moved by $G$. This is where expressions like $r_1+r_2$ come from. The application to Galois theory has the vector space $V$ be the larger field.
Slightly more generally, let $\chi:G\rightarrow k^\times$ be a group hom (maybe to $\{\pm 1\}\subset k^\times$). To say that a vector "transforms by/under/whatever-verb" is that $g\cdot v=\chi(g)v$. In the previous paragraph, $\chi=1$. It is a small step from the pure averaging as in the previous paragraph to try $\hbox{proj}_\chi v=(?) \frac{1}{n}\sum_{g\in G} \chi(g)\,g\cdot v$. In fact, the obvious change-of-variables argument shows that this weighted average transforms by $\chi^{-1}$, so, in fact, an inverse has to be inserted somewhere, for example, $\hbox{proj}_\chi v=\frac{1}{n}\sum_{g\in G} \chi^{-1}(g)\,g\cdot v$.
As far as Galois theory goes, to solve equations in radicals following Lagrange and Vandermonde (when possible), the previous paragraph explains why we'd want to adjoin sufficiently many roots of unity to the ground field, and how to form expressions varying by prescribed characters $\chi$ of the group. The expressions are Lagrange resolvents.
An interesting complication occurs for non-abelian groups $G$, that in an action of $G$ on $V$, there are typically more vectors than those that transform by a one-dimensional repn $\chi$ as above. Namely, $V$ may break up into "irreducible" pieces some of which are larger than one-dimensional. In this case, it requires more set-up to explain what "transforms by" could mean, as well as to write down the corresponding "projector". (There still is a projector, namely, averaging against the function $\Theta_\pi(g)=\hbox{trace}(\pi(g))$ where $\pi:G\rightarrow GL_n(k)$...)
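For a concrete illustration of that trace-averaging projector (a sketch, using the standard character formula $P=\frac{\dim\pi}{|G|}\sum_g \overline{\Theta_\pi(g)}\,\rho(g)$): take $G=S_3$ acting on $\mathbb C^3$ by permutation matrices, and $\pi$ the two-dimensional "standard" irrep, whose character is $\chi(g)=\#\{\text{fixed points of }g\}-1$.

```python
from itertools import permutations

def perm_matrix(sigma):
    return [[1 if sigma[i] == j else 0 for j in range(3)] for i in range(3)]

G = list(permutations(range(3)))
chi = {g: sum(g[i] == i for i in range(3)) - 1 for g in G}  # real-valued here

# P = (dim pi / |G|) * sum_g chi(g) * rho(g)
P = [[0.0] * 3 for _ in range(3)]
for g in G:
    M = perm_matrix(g)
    for i in range(3):
        for j in range(3):
            P[i][j] += (2 / 6) * chi[g] * M[i][j]

trace = sum(P[i][i] for i in range(3))
print(round(trace, 10))  # 2.0: the standard irrep appears once, with dimension 2
```

Here $P$ comes out as $I-\frac{1}{3}J$ ($J$ the all-ones matrix): the projector onto the sum-zero plane, which is exactly the copy of the standard irrep inside the permutation representation.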
The "traditional" treatment of Galois theory occurs much prior to the traditional treatment of repn theory, and the traditional treatments of both ignore each other... for no mathematical reason... so there aren't so many available references.
Thank you, Paul. Excellent info and more grist for the mill! – Reb.Cabin Aug 11 '11 at 3:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 194, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415227770805359, "perplexity_flag": "head"} |
http://mathoverflow.net/revisions/80786/list | # Did Grothendieck have a plan for proving Riemann Existence algebraically?
A recent question (http://mathoverflow.net/questions/80770/reference-request-riemanns-existence-theorem) reminded me of a question I've had in the back of my mind for a long time. It is said that Grothendieck wanted the center-piece of SGA1 to be a completely algebraic proof (without topology) of the following theorem: $\pi_1^{et}(\mathbb{P}^1_{\mathbb{C}}\smallsetminus a_1,...,a_r)\cong$ the profinite completion of $\langle \alpha_1,...,\alpha_r|\alpha_1...\alpha_r=1\rangle$.
As you may know, he did not succeed.
From my experience with Grothendieck's ideas, he often had a proposed proof in mind that would take years (if at all) to be realized. Did Grothendieck have an idea of how to prove this fact algebraically? If so, what was the missing element in his proposed proof?
http://mathoverflow.net/questions/108997?sort=votes | ## The cycle structure of twisted wires, connected
Suppose you have $n$ (blue) wires linearly arrayed at junction box $A$, connected to a remote junction box $B$, where the wires are now arrayed along a line in a randomly permuted order, i.e., each of the $n!$ permutations is equally likely at $B$. Now you tie together every other wire at $A$ and at $B$ with a (green) connector, like this:
What is the probability that you have formed a single cycle (as illustrated)? More generally, what are the combinatorics of the cycle structures achievable in this manner? (It may be best to separate out the $n$-even case from $n$ odd.)
I came upon this thinking of the wires as an arrangement of lines, where each line crosses every other before reaching junction box $B$, in which case, for $n$ even, one necessarily arrives at $n/2$ cycles, each containing two (blue) wires. All $n$ wires in a single cycle is in some sense the obverse situation.
Update. Will Sawin's argument shows, as Noam points out, that the probability of a single cycle is asymptotically proportional to $\frac{1}{\sqrt{n}}$. I would be interested to learn if there is a way to see this intuitively without Will's explicit calculation. Perhaps an assessment of the probability of repeatedly avoiding premature closing of a loop as one criss-crosses from $A$ to $B$...?
The only property I can recall (which may be irrelevant) is guaranteed length of monotonic subsequence of a permutation which is like 1 + sqrt(n-1). Also, I note a group theoretic rendering of Joseph's problem: given a green involution g on n letters, how many solutions psi are there to g^psi = gc, where c can be any of the cycles of n letters. Perhaps someone who is familiar with conjugacy classes in S_n can give Joseph what he wants. Gerhard "Not A Professional Group Theorist" Paseman, 2012.10.09 – Gerhard Paseman Oct 9 at 15:53
Maybe I mean c instead of gc. Gerhard "Someone Check My Work Please" Paseman, 2012.10.09 – Gerhard Paseman Oct 9 at 15:55
## 2 Answers
Here's an alternative way of thinking about the problem and Will's answer in the case where $n$ is even.
If we identify the identically labelled vertices on each side, we're left with a graph on $n$ vertices formed by the union of two matchings: The fixed matching ($(1,2), (3,4), \dots, (n-1,n)$) and a random matching. Now imagine exposing the random matching one edge at a time.
At the start (before the new matching is exposed), we have a set of $n/2$ isolated edges. There's a $\frac{1}{n-1}$ chance that the first exposed edge in the second matching is already in the first matching, leaving us with a closed loop together with $\frac{n}{2}-1$ isolated edges. Otherwise, the new edge joins two of the original edges into a single path, leaving $\frac{n}{2}-2$ isolated edges.
When the next edge is exposed, its endpoints are chosen uniformly at random from the degree $1$ vertices. It closes a cycle if the two endpoints are from the same path. The key thing here is that after the first edge there are always $\frac{n}{2}-1$ open paths that can be closed, regardless of what happened. This is true in general: whether an edge closes an existing path off or connects two paths, it always reduces the number of open paths by exactly $1$. So as the $k^{th}$ edge in the new matching is exposed, there are $\frac{n}{2}+1-k$ open paths and a $\frac{1}{n-2k+1}$ chance of closing one of them.
It follows that the number of cycles can be thought of as $x_1+\dots+x_{n/2}$, where the $x_i$ are independently $1$ with probability $\frac{1}{2i-1}$ and $0$ otherwise. This means that
- The probability that there's exactly one cycle (that $x_2=x_3=\dots=0$) is $$\frac{2}{3} \cdot \frac{4}{5} \cdots \frac{n-2}{n-1} = \frac{2^n \left(\left(\frac{n}{2}\right)!\right)^2}{n\cdot n!}$$
- The expected number of cycles is $\frac{1}{1}+\frac{1}{3}+\dots+\frac{1}{n-1} \approx \frac{1}{2} \log n$
- The number of cycles is reasonably concentrated around its mean (e.g. by Chernoff's bound, $P(|X-E(X)| \geq \frac{1}{2} E(X)) \leq 2 e^{-E(X)/16} = 2n^{-1/32}$)
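For small $n$ this accounting can be brute-forced. The sketch below (0-indexed) enumerates all $15$ perfect matchings for $n=6$ and counts cycles of the union with the fixed matching; the single-cycle count should be $15\cdot\frac{2}{3}\cdot\frac{4}{5}=8$, and the total cycle count $15\cdot(1+\frac{1}{3}+\frac{1}{5})=23$:

```python
def matchings(vertices):
    """Yield all perfect matchings of an even-sized list of vertices."""
    if not vertices:
        yield []
        return
    first, rest = vertices[0], vertices[1:]
    for i, partner in enumerate(rest):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + m

def num_cycles(n, edges):
    # The union of two perfect matchings is a disjoint union of cycles,
    # so counting cycles is counting connected components (union-find).
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(n)})

n = 6
fixed = [(0, 1), (2, 3), (4, 5)]
counts = [num_cycles(n, fixed + m) for m in matchings(list(range(n)))]
print(sum(c == 1 for c in counts), len(counts), sum(counts))  # 8 15 23
```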
Untangle the permutation as a big cycle green-blue-green-blue, and choose an orientation. We're going to count the possible shapes for this cycle, with the vertices labeled by their original position. We first have an alternating cyclic permutation of all the green wires. There are $(n/2)!\cdot (n/2)!/(n/2)$ ways to do this. Then we can choose an orientation of each of the green wires. There are $2^n$ ways to do this. Finally we note that two big cycles with reversed orientation correspond to the same permutation, so we divide by $2$. The total number of ways to get one big cycle is:
$\frac{2^n \left(\frac{n}{2}\right)!\left(\frac{n}{2}\right)!}{n}$
and the probability is:
$\frac{2^n \left(\frac{n}{2}\right)!\left(\frac{n}{2}\right)!}{n\cdot n!}$
which as Noam Elkies points out is asymptotically proportional to $n^{-1/2}$.
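A quick numerical look at that asymptotic, using the equivalent form $p(n)=2^n/(n\binom{n}{n/2})$ of the probability above (so $\sqrt{n}\,p(n)\to\sqrt{\pi/2}\approx 1.2533$):

```python
from math import comb, sqrt

def p_single_cycle(n):
    """Exact probability 2^n ((n/2)!)^2 / (n * n!) = 2^n / (n * C(n, n/2))."""
    return 2 ** n / (n * comb(n, n // 2))

for n in [10, 100, 1000, 10000]:
    print(n, p_single_cycle(n), sqrt(n) * p_single_cycle(n))
# the last column approaches sqrt(pi/2) = 1.2533...
```

This reproduces the values quoted in the comments: about $41\%$ at $n=10$ and just under $4\%$ at $n=1000$.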
Interesting, Will! For $n=10$, that's 41%, but the expression of course goes to zero with large $n$. Thanks! – Joseph O'Rourke Oct 6 at 15:16
Well $2^n (n/2)!^2 / (n \cdot n!)$ goes to zero as $n \rightarrow \infty$, but only as $n^{-1/2}$, not $o(2^{-n})$. It's still almost 4% for $n=1000$. In general it's the inverse of $n 2^{-n} {n \choose n/2}$, and we know that $2^{-n} {n \choose n/2}$ (which is the probability of an exact tie after $n$ fair coin tosses) is asymptotically proportional to $n^{-1/2}$. – Noam D. Elkies Oct 7 at 3:31
Naturally, I didn't remember the estimate correctly. That makes it seems like the probability that there are at most two cycles is quite large, since you shouldn't expect too large of a drop-off of probability from a cycle of length $n$ to two cycles, one of length $n-k$ and one of length $k$. – Will Sawin Oct 7 at 5:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422433376312256, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/250/a-challenge-by-r-p-feynman-give-counter-intuitive-theorems-that-can-be-transl/260 | # A challenge by R. P. Feynman: give counter-intuitive theorems that can be translated into everyday language
The following is a quote from *Surely You're Joking, Mr. Feynman!*. The question is: are there any interesting theorems that you think would be a good example to tell Richard Feynman, as an answer to his challenge? Theorems should be totally counter-intuitive, and be easily translatable to everyday language. (Apparently the Banach-Tarski paradox was not a good example.)
Then I got an idea. I challenged them: "I bet there isn't a single theorem that you can tell me - what the assumptions are and what the theorem is in terms I can understand - where I can't tell you right away whether it's true or false."
It often went like this: They would explain to me, "You've got an orange, OK? Now you cut the orange into a finite number of pieces, put it back together, and it's as big as the sun. True or false?"
"No holes."
"Impossible!
"Ha! Everybody gather around! It's So-and-so's theorem of immeasurable measure!"
Just when they think they've got me, I remind them, "But you said an orange! You can't cut the orange peel any thinner than the atoms."
"But we have the condition of continuity: We can keep on cutting!"
"No, you said an orange, so I assumed that you meant a real orange."
So I always won. If I guessed it right, great. If I guessed it wrong, there was always something I could find in their simplification that they left out.
I think this is a great question, and look forward to the answer! – BBischof Jul 21 '10 at 4:26
Good question. +1. – Mehper C. Palavuzlar Oct 27 '10 at 11:32
Your example is more about real-world limitations (physics) than "everyday language". – Mark C Nov 9 '10 at 16:37
## 28 Answers
Every simple closed curve that you can draw by hand will pass through the corners of some square. The question was asked by Toeplitz in 1911, and has only been partially answered in 1989 by Stromquist. As of now, the answer is only known to be positive for the curves that can be drawn by hand (i.e. the curves that are piecewise the graph of a continuous function).
I find the result beyond my intuition.
For details, see http://www.webpages.uidaho.edu/~markn/squares/ (the figure is also borrowed from this site)
Whoa. This is definitely going on my reading list. – Larry Wang Jul 21 '10 at 6:21
Edited to use SO's image hosting instead of mine – Larry Wang Sep 11 '10 at 1:39
Your example figure is a little misleading. There are far more obvious places to put the square. If we map your figure onto Sulawesi in the obvious way (after reflecting the north-eastern peninsula about the equator), then there is a square in the (reflected) north-eastern peninsula containing Tombatu; another one centred on Limboto; a third one between the Gulf of Tomini and Makassar Strait; and so on. These little squares arise, roughly, whenever a long thin region gets fatter again. – TonyK Sep 17 '10 at 14:14
My favorite would probably be Goodstein's theorem:
Start with your favorite number (mine is 37) and express it in hereditary base 2 notation. That is, write it as a sum of powers of 2, where the exponents are themselves written as sums of powers of 2, and so on.
So, 37 = 2^(2^2 + 1) + 2^2 + 1. This is the first element of the sequence.
Next, change all the 2's to 3's, and subtract one from what's remaining and express in hereditary base 3 notation.
We get 3^(3^3 + 1) + 3^3 + 1 - 1= 3^(3^3 + 1) + 3^3 (which is roughly 2 x 10^13). This is the second element of the sequence.
Next, change all 3's to 4's, subtract one, and express in hereditary base 4 notation.
We get 4^(4^4 + 1) + 4^4 - 1 = 4^(4^4 + 1) + 3*4^3 + 3*4^2 + 3*4 + 3 (which is roughly 5 x 10^154) . This is the third element of the sequence.
Rinse, repeat: at the nth stage, change all the "n+1" to "n+2", subtract 1, and reexpress in hereditary base n+2 notation.
The theorem is: no matter which number you start with, eventually, your sequence hits 0, despite the fact that it grows VERY quickly at the start.
For example, if instead of starting with 37, we started with 4, then (according to the wikipedia page), it takes 3*2^(402653211) - 2 steps ( VERY roughly 10^(100,000,000), or a 1 followed by a hundred million 0s). 37 takes vastly longer to drop to 0.
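The rebasing step is easy to code. A sketch: `bump` rewrites $n$ in hereditary base $b$ and replaces every $b$ by $b+1$. Only tiny starting values are practical, since, as noted above, even $4$ takes an astronomical number of steps to die out:

```python
def bump(n, b):
    """Rewrite n in hereditary base b, then change every b to b + 1."""
    result, power = 0, 0
    while n:
        n, digit = divmod(n, b)
        if digit:
            result += digit * (b + 1) ** bump(power, b)
        power += 1
    return result

def goodstein(start, max_steps=100):
    """The Goodstein sequence: bump the base, subtract one, repeat."""
    seq, n, base = [start], start, 2
    while n and len(seq) < max_steps:
        n = bump(n, base) - 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3))      # [3, 3, 3, 2, 1, 0] -- hits 0 after a few steps
print(goodstein(4)[:4])  # [4, 26, 41, 60] -- but this one needs ~3*2^402653211 steps
```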
More amazingly: this theorem cannot be proved in Peano arithmetic, though it can be proven with transfinite induction. – ShreevatsaR Jul 28 '10 at 6:00
Well, I guess this shouldn't coun't as everyday language. Although it certainly can be considered counterintuitive. – Sam Nov 26 '10 at 23:30
Suppose you have a large collection of books, all of the same size. Balance one of them on the edge of a table so that one end of the book is as far from the table as possible. Balance another book on top of that one, and again try to get as far from the table as possible. Take $n$ of them and try to balance them on top of each other so that the top book is as far as possible away from the edge of the table horizontally.
Theorem: With enough books, you can get arbitrarily far from the table. If you are really careful. This is a consequence of the divergence of the harmonic series. I think if you haven't heard this one before it's very hard to tell whether it's true or false.
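Concretely, with $n$ books the best possible overhang is $\frac{1}{2}H_n$ book lengths, where $H_n = 1+\frac{1}{2}+\dots+\frac{1}{n}$; divergence of $H_n$ is exactly why the overhang is unbounded. A sketch:

```python
from itertools import count

def overhang(n):
    """Maximum overhang, in book lengths, of n stacked books: (1/2) * H_n."""
    return sum(1.0 / (2 * k) for k in range(1, n + 1))

def books_needed(target):
    """Smallest n whose overhang reaches `target` book lengths."""
    total = 0.0
    for n in count(1):
        total += 1.0 / (2 * n)
        if total >= target:
            return n

print(books_needed(1))  # 4: four books already stick out a full book length
print(books_needed(2))  # 31
```

Since $H_n \approx \log n$, the cost is exponential: each extra book length of overhang multiplies the number of books needed by roughly $e^2 \approx 7.4$.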
Sure, in the real world, things like wind would quickly make such a construction impossible. But I think, unless he knew about this example already, that someone like Feynman would be surprised to learn that this was possible even in principle. I think this particular idealization is much easier to stomach than the one that leads to Banach-Tarski; it's a physical idealization, not a mathematical one. – Qiaochu Yuan Jul 28 '10 at 8:02
@Qiaochu: Your description is not quite right. If you balance one book as far from the edge of the table as possible, then the next book must be placed exactly on top of it. What you mean is that you place one book as far from the edge as possible on top of another book, and then place those two books on top of another book as far from its edge as possible, and repeat for some n books, and then place those n on the edge of a table. – Samuel Aug 3 '10 at 20:50
@Alexander: The furthest you can go for the second book is 1/2 the book. For the third book (underneath them) it's 1/3, for the fourth it's 1/4, etc. This result is just a statement of the fact that the harmonic series is divergent. – BlueRaja - Danny Pflughoeft Aug 4 '10 at 4:13
If you haven't actually done this, you should try: you can get surprisingly far! (If you have a big collection of yellow Springer books, they make a good choice; CD covers are a reasonable substitute.) – Matt E Sep 11 '10 at 4:23
@Qiaochu: I'm pretty sure Feynman would say, correctly: With real books make of paper, it won't work! They're just too flexible. You're thinking of idealized books here, aren't you? (I just read that chapter in Feynman's book, and this is the type of answer he'd give.) – Hendrik Vogt Jun 26 '11 at 11:44
The Monty Hall problem fits the bill pretty well. Almost everyone, including most mathematicians, answered it wrong on their first try, and some took a lot of convincing before they agreed with the correct answer.
It's also very easy to explain it to people.
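A simulation makes the $2/3$ advantage easy to see. This sketch assumes the standard rules: Monty always opens a door that is neither the contestant's pick nor the prize.

```python
import random

def play(switch, rng):
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a door that hides no prize and wasn't picked.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
trials = 100_000
stay = sum(play(False, rng) for _ in range(trials)) / trials
switched = sum(play(True, rng) for _ in range(trials)) / trials
print(stay, switched)  # staying wins about 1/3 of the time, switching about 2/3
```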
I eventually got my intuition to mesh with Monty Hall. Just imagine presenting the contestant with 100 doors instead of 3. – Larry Wang Jul 21 '10 at 5:46
+1 I guess the question is whether Feynman would miss it... :D – BBischof Jul 21 '10 at 5:51
Monty Hall problem is quite sensitive to the way it is stated, so a witty person like Feynman would have had no problem in saying he was right. – mau Jul 21 '10 at 16:22
The Monty Hall problem is a psychology problem, not a math problem. If it were a math problem, there would be a defined probability that Monty opens the door which contains the prize, thus giving the contestant certain knowledge. Readers of the problem make the reasonable-for-humans assumption that Monty will not choose the prize door. Further analysis depends on Monty making an IID choice of non-prize doors, or a biased choice. It's a psychology problem. – Heath Hunnicutt May 25 '11 at 22:57
@Heath, the Monty Hall problem is a mathematics problem and not a psychology problem. There is indeed a defined probability that Monty opens the door with the prize, that probability is 0%. Monty never, ever, opened the door with the prize. Indeed, that would make the game pointless. – Dour High Arch Oct 4 '11 at 3:23
You have two identical pieces of paper with the same picture printed on them. You put one flat on a table and the other one you crumple up (without tearing it) and place it on top of the first one. Brouwer's fixed point theorem states that there is some point in the picture on the crumpled-up page that is directly above the same point on the bottom page. It doesn't matter how you place the pages, or how you deform the top one.
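Brouwer's theorem gives no recipe for finding the point in general, but for a toy "crumpling" map that happens to shrink distances, simple iteration finds it (that special case is really Banach's fixed-point theorem). A sketch in Python with a made-up map of the unit square into itself:

```python
import math

def crumple(p):
    """A toy 'crumple and place' map: shrink toward the centre, rotate a
    little, then shift.  It sends the unit square strictly into itself."""
    c, s = math.cos(0.5), math.sin(0.5)
    u, v = p[0] - 0.5, p[1] - 0.5
    return (0.5 + 0.4 * (c * u - s * v) + 0.05,
            0.5 + 0.4 * (s * u + c * v) - 0.03)

# Iterating homes in on the point of the crumpled page that sits
# exactly above the same point of the flat page.
p = (0.0, 0.0)
for _ in range(200):
    p = crumple(p)
```

For genuinely wild crumplings the theorem still guarantees such a point, even though no iteration like this need converge to it.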
If I crumple it up minimally - that is, say, add one, very slight crease, or perhaps simply do nothing at all - then translate it a significant distance, wouldn't this not hold? – bdonlan Jul 22 '10 at 4:01
I probably should have mentioned that the second page needs to be on top of the first, meaning within the boundaries of it (you can't put it on another table...) – Tomer Vromen Jul 22 '10 at 17:04
+1 this is what I was going to post. I still don't understand the proof. – BlueRaja - Danny Pflughoeft Jul 24 '10 at 22:43
Maybe the proof is hard, but I don't find that intuitively very difficult to grasp. Where do you think the points would be going? Unless you put the prop besides the paper, I don't see how the theorem can fail. Feynman would have aced this one immediately. (Besides he would first tell you that real sheets of paper are made of atoms and thus, Brouwer's theorem is not true for them.) – Raskolnikov Nov 26 '10 at 19:03
I think that the best proof is not hard, but requires a bit of heavy machinery, namely homology groups or homotopy groups. Maybe the computations of the relevant group is hard but it is easy to convince yourself of given the appropriate model of the circle (up to homotopy) – Sean Tilson Dec 2 '10 at 22:21
My first thought is the ham sandwich theorem--given a sandwich formed by two pieces of bread and one piece of ham (these pieces can be of any reasonable/well-behaved shape) in any positions you choose, it is possible to cut this "sandwich" exactly in half, that is divide each of the three objects exactly in half by volume, with a single "cut" (meaning a single plane).
To me this doesn't look counterintuitive, since the centroids of the three pieces are in the same plane.... – N. S. Aug 27 '12 at 2:52
Scott Aaronson once basically did this for a bunch of theorems in computer science here and here. I particularly like this one:
Suppose a baby is given some random examples of grammatical and ungrammatical sentences, and based on that, it wants to infer the general rule for whether or not a given sentence is grammatical. If the baby can do this with reasonable accuracy and in a reasonable amount of time, for any “regular grammar” (the very simplest type of grammar studied by Noam Chomsky), then that baby can also break the RSA cryptosystem.
But this obviously assumes rules of grammar which no actual language would follow, because babies actually learn languages. So "regular grammars" are obviously the wrong model for real languages. – Peter Shor Nov 26 '10 at 23:50
Peter: The usual response---following Chomsky's "Poverty of the Stimulus" argument---is simply that babies can't have the capacity to learn arbitrary grammars (if not regular grammars, then certainly not context-free, etc). Instead, there must be many facts about the grammars actually used by humans that are hardwired into the human brain (an interesting conclusion that many people probably would've denied, at least in the 50s and 60s). For more, see this thesis by Ronald de Wolf: homepages.cwi.nl/%7Erdewolf/publ/philosophy/phthesis.pdf – Scott Aaronson Mar 21 at 12:27
There are true statements in arithmetic which are unprovable. Even more remarkably, there are explicit polynomial equations for which it is unprovable in ZFC whether or not they have integer solutions! (We need to assume the consistency of ZFC.)
I added a statement I've always found much more shocking but which is in exactly the same direction as what you said. – Noah Snyder Jul 21 '10 at 5:33
(Or rather, in both cases, it's unprovable unless arithmetic is inconsistent in which case everything is provable.) – Noah Snyder Jul 21 '10 at 5:50
Can you give more info on the "explicit polynomial equations" part? Haven't heard of that and it sounds interesting. – Edan Maor Jul 21 '10 at 6:59
Edan: Cf. Hilbert's tenth problem. One can write down, for any statement, a polynomial which has an integer root iff the statement is true. By the incompleteness theorem, there will be statements that can't be proved in any formal system, hence one can't prove whether or not the polynomial has a root [if the system is consistent]. – Akhil Mathew Jul 21 '10 at 23:20
What is the smallest area of a parking lot in which a car (that is, a line segment) can perform a complete turn (that is, rotate 360 degrees)?
(This is obviously the Kakeya Needle Problem. Fairly easy to explain, models an almost reasonable real-life scenario, and has a very surprising answer as you probably know - the lot can have as small an area as you'd like).
Wikipedia entry: Kakeya Set.
+1. Well, "perform a complete turn" may seem to mean rotation about a point, but I do agree the answer to the Kakeya problem (whose rigorous statement is in terms Feynman could surely understand) is counter-intuitive. – ShreevatsaR Jul 29 '10 at 1:37
Feynman's response: Cars are not infinitely thin. – Peter Shor Nov 26 '10 at 23:54
Also, cars can't move sideways, although you can approximate sideways motion by moving back and forth a very short distance while turning the wheel appropriately. – Tanner Swett Nov 16 '11 at 0:14
One that Feynman would have rejected: There are always two antipodal points on the Earth that are the same temperature. (This follows from the fact that a continuous scalar field on a circle has a diameter whose endpoints have the same value)
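That diameter argument is constructive enough to run: set $g(\theta) = T(\theta) - T(\theta + \pi)$; since $g(\theta + \pi) = -g(\theta)$, the function changes sign and bisection finds a root. A sketch in Python with a made-up temperature profile:

```python
import math

def T(theta):
    """A made-up continuous temperature profile around a circle of latitude."""
    return 15 + 3 * math.cos(theta) + 1.5 * math.sin(2 * theta)

def antipodal_point(f):
    """Bisect g(t) = f(t) - f(t + pi) on [0, pi]; g(0) = -g(pi) forces a root."""
    g = lambda t: f(t) - f(t + math.pi)
    lo, hi = 0.0, math.pi
    if g(lo) == 0:
        return lo
    for _ in range(60):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```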
One that Feynman might have preferred: Draw a triangle. Draw a circle through the midpoints of the triangle's sides. This circle also passes through the foot of each altitude and the midpoint of the line segment from the orthocentre to each vertex: the nine-point circle.
I am roughly quoting my undergrad differential geometry professor here: There is always at least one spot on the earth where there is absolutely zero wind. This follows from the hairy ball theorem as the wind direction on the surface of the earth forms a vector field. – JavaMan Aug 5 '11 at 15:58
Not only that, but there are always two antipodal points on the Earth that are the same temperature and the same barometric pressure! This is the Borsuk-Ulam Theorem. :) – Bruno Aug 5 '11 at 17:11
It is possible for a group of people to hold a secret ballot election in which all communication is done publicly. This is one of the many surprising consequences of the existence of secure multiparty computation.
(Of course, the ballots are only "secret" under some reasonable cryptographic assumptions. I guess Feynman may have objected to this.)
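The core trick can be shown in a few lines with additive secret sharing. The toy Python sketch below is my own simplification: shares are exchanged privately and only sums are announced, whereas the real protocols also protect the exchange itself cryptographically. No announced value reveals an individual ballot, yet the tally is exact:

```python
import random

def tally(votes, modulus=2**32):
    """Toy additive-secret-sharing tally.  Voter i splits vote_i into
    len(votes) uniformly random shares summing to vote_i (mod modulus);
    share j is handed to voter j, and each voter only ever announces the
    sum of the shares they received."""
    n = len(votes)
    shares = []
    for v in votes:
        row = [random.randrange(modulus) for _ in range(n - 1)]
        row.append((v - sum(row)) % modulus)   # make the row sum to the vote
        shares.append(row)
    announced = [sum(shares[i][j] for i in range(n)) % modulus
                 for j in range(n)]
    return sum(announced) % modulus
```

The announced sums rearrange the grand total of all shares, which by construction is the sum of the votes.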
Similar to the Monty Hall problem, but trickier: at the latest Gathering 4 Gardner, Gary Foshee asked
I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?
We are assuming that births are equally distributed during the week, that every child is a boy or girl with probability 1/2, and that there is no dependence relation between sex and day of birth.
His Answer: 13/27. This was in the news a lot recently, see for instance BBC News. (Later analysis showed the answer depends on why the parent said that.)
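The 13/27 figure is pure counting, so it can be checked by enumerating the 196 equally likely (sex, weekday) combinations for two children. A sketch in Python (encoding Tuesday as weekday 2 is my own convention):

```python
from fractions import Fraction
from itertools import product

children = list(product(["boy", "girl"], range(7)))  # (sex, weekday); Tuesday = 2
families = list(product(children, repeat=2))         # ordered pairs, all equally likely

# Condition on "at least one child is a boy born on a Tuesday".
tuesday_boy = [f for f in families if ("boy", 2) in f]
both_boys = [f for f in tuesday_boy
             if f[0][0] == "boy" and f[1][0] == "boy"]
answer = Fraction(len(both_boys), len(tuesday_boy))
```

Of the 27 qualifying families, 13 have two boys, reproducing Foshee's answer under the stated assumptions.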
That is not really mathematics. It's more about how language tricks us (and even the BBC news article you linked says so). – Sklivvz♦ Jul 27 '10 at 23:23
"I have two children. One is a boy born at midnight. What is the probability I have two boys?". If we assume a continuous distribution for the birth time, the event that both the children have been born ad midnight has probability zero. Therefore the answer to this other question should be 1/2. Is this correct? – Federico Ramponi Sep 11 '10 at 2:20
You'd have to specify a time interval and take a limit to make the usual formula work. Otherwise, you just get a $0/0$ form. The computation of this kind of probabilities goes through Bayes' rule. – Raskolnikov Nov 26 '10 at 22:02
If you go around asking random women "Do you have two children, one of which is a baby born on Tuesday?" then indeed the women who say yes to that question have a probability of 13/27 that the other child is a boy. But if a woman says the statement above, I'm not so sure that the Bayesian calculation is correct. – Peter Shor Nov 26 '10 at 23:54
For anyone reading this who doesn't know what they are talking about - one interpretation of the question implies that the asker can't have a second boy born on a Tuesday. Another interpretation of it gives no information about the second child. In the second case the chance the other child is a boy is 1/2. In the first case alex.jordan's first comment explains it nicely. – psr Aug 3 '12 at 23:56
Position-based Cryptography. This is a fun example since it seems very "out of left field".
The setup: Three servers are positioned in known locations on the globe (their positions can be arbitrary, provided they aren't on top of each other).
A single computer wants to prove its location to the servers. In other words, if the computer is actually located where it claims, then the protocol will accept. However, if the computer is located anywhere else, then the protocol will reject, no matter how the computer cheats (it is even allowed to recruit friends to help it cheat).
All communication is subject to the laws of physics -- information travels at speed c, and quantum mechanics holds.
Theorem 1: This is impossible if all communication is classical. Cheating is always possible.
Theorem 2: This is possible if quantum communication is possible.
Do you have a link? Plus you haven't defined how they are allowed to "prove" where they are – Casebash Jul 28 '10 at 11:29
This is not true, if the attackers are allowed to share a large enough quantum state, cheating is always possible: the attackers can pretend to be somewhere none of them is located. Right now theorem 2 is only proven when the attackers are forbidden from sharing any quantum state: the authors hope that the result might remain true if they share a little bit of quantum state, but not too much. – Generic Human Jun 27 '12 at 22:01
You ask that the result be "counterintuitive", but Feynman doesn't insist on that. He says that if you can phrase a true-or-false mathematical question in language that he can understand, he can immediately say what the right answer is, and that if he gets it wrong, it is because of something you did.
I think Feynman is being less than 100 percent serious. Not that he didn't win every time he put this challenge to people--- but he probably only issued this challenge when he wanted to make a rhetorical point (about either the impracticality of a lot of mathematical investigation, or about the inability of mathematicians to faithfully translate their problems into normal language).
The Banach-Tarski result is obviously a terrible example, because the key to any paradoxical decomposition of a sphere, nonmeasurability, is almost impossible to convey in non-technical terms, and has no physical meaning. And of course he would choose this example for his essay, if the only purpose of the challenge is to make the point illustrated marvelously by that particular response.
Here are some statements that might have given Feynman some pause.
• The regular $n$-gon is constructible with an unmarked ruler and compass. (Really a family of true-or-false statements, one for each $n \geq 3$.)
It takes some work to properly spell out what "constructible" means here, but it can be done in plain English. It has been known since the 1800s (thanks to Gauss and Wantzel) that this statement is true if $n$ is the product of a nonnegative power of $2$ and any nonnegative number of distinct Fermat primes, and false otherwise.
More concretely, the sequence of positive integers $n$ for which it is true is partially listed here. Could Feynman have generated that sequence with his series of answers to true-or-false questions given by taking $n=3,4,5,\dots$? I very much doubt it.
• The Kelvin conjecture (roughly, "a certain arrangement of polyhedra partitions space into chunks of equal volume in a way that minimizes the surface area of the chunks"--- but you can be more precise without leaving plain English). According to Wikipedia it was posed in 1887. It was neither proved nor disproved until 1993, when it was disproved.
I find this example particularly compelling because Feynman presumably would not have caricatured Kelvin (something of a physicist himself) as a mathematician who only works on silly questions that nobody would ever ask.
• Other geometrical optimization problems come to mind, e.g. the Kepler conjecture, the double bubble conjecture, and the four color conjecture (all theorems now, but let's pretend they're conjectures and ask Feynman). My guess is that Feynman would have been right about the truth values of these statements. But the mathematician's response is, of course, "OK. Why are they true?"
This highlights a real difference between math and the physical sciences. It is much more common in the sciences to be in a situation where knowing what happens in a given situation is useful, even if you don't know why it happens. In math, this is comparatively rare: for example, the "yes or no" answers to the Clay Millennium problems are nowhere near as valuable as the arguments that would establish those answers. Feynman almost certainly knew this, but pretended not to in order to make the rhetorical points mentioned above.
All of three dimensional space can be filled up with an infinite curve.
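Space-filling curves are limits of very concrete finite stages. The sketch below (Python) is the standard bit-twiddling form of the 2-D Hilbert curve; the 3-D case in the answer works the same way with a bulkier base motif. Walking `d` from $0$ to $n^2-1$ visits every cell of an $n \times n$ grid exactly once, one unit step at a time:

```python
def d2xy(n, d):
    """Map index d in [0, n*n) to (x, y) on the order-n Hilbert curve
    (n a power of two).  Standard construction: peel off two bits of d
    per level, rotating the quadrant as needed."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                  # rotate/reflect this quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y
```

As `n` grows, the rescaled path comes uniformly close to every point of the square, which is how the limiting curve fills it.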
I think "line" may be misleading, as the space-filling curve is not a line by typical geometric or algebraic definitions. – Isaac Jul 21 '10 at 4:35
@Isaac, I edited in my answer in light of your comment. – Ami Jul 21 '10 at 5:03
+1 because I really like the space-filling curve. – Isaac Jul 21 '10 at 5:53
Certainly Feynman could make a very strong argument that this was not a legitimate curve. – Noah Snyder Jul 21 '10 at 6:11
@Noah: But surely he would also admit "legitimate curves" have some thickness to them, making the physical version of the problem somewhat more trivial than the mathematical one. – Hurkyl May 8 '12 at 7:13
The Fold-and-Cut Theorem is pretty unintuitive. http://erikdemaine.org/foldcut/
For any five points on the globe, there is a vantage point in outer space from which you could see at least 4 of the 5 points (assuming the moon or anything isn't in the way). The proof is pretty simple, too...
I don't think this is true unless you count points on the boundary of the hemisphere as being visible (which they really aren't..) – BlueRaja - Danny Pflughoeft Aug 4 '10 at 4:20
"The natural numbers are as many as the even natural numbers".
This statement is trivial and hardly deserves to be called a "theorem", but it is rather counterintuitive if you don't know the meaning of "as many". At least, it was for Galileo :)
But not for Feynman. – TonyK Sep 17 '10 at 15:49
Actually "The rationals numbers are as many as natural numbers" would be more counter intutive :) – Dinesh May 7 '11 at 11:15
There are as many binary sequences as there are sequences of sequences of real numbers. – fhyve Nov 18 '12 at 20:32
Morley's trisection theorem -- not discovered until 1899. http://www.mathpages.com/home/kmath376/kmath376.htm.
Trisect the three angles of a triangle. Take the intersection of each trisector with its nearest trisector of the nearest other vertex. These three points form an equilateral triangle!
With bisectors, they all meet in a point. That trisectors should give an equilateral triangle is pretty surprising.
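The theorem invites a numerical sanity check. The sketch below (Python; the construction and helper names are my own) builds, for each side, the two interior trisectors adjacent to that side, intersects them, and measures the triangle formed by the three intersection points:

```python
import math

def direction(p, q):
    """Angle of the vector from p to q."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def trisector(p, q, r):
    """Direction at vertex p of the interior trisector adjacent to side pq."""
    a, b = direction(p, q), direction(p, r)
    d = (b - a) % (2 * math.pi)
    if d > math.pi:                  # measure the interior angle with sign
        d -= 2 * math.pi
    return a + d / 3

def intersect(p, tp, q, tq):
    """Intersection of the line through p at angle tp and through q at angle tq."""
    d1 = (math.cos(tp), math.sin(tp))
    d2 = (math.cos(tq), math.sin(tq))
    bx, by = q[0] - p[0], q[1] - p[1]
    t = (by * d2[0] - bx * d2[1]) / math.sin(tp - tq)
    return (p[0] + t * d1[0], p[1] + t * d1[1])

def morley(A, B, C):
    """The three adjacent-trisector intersections, one per side."""
    pt = lambda P, Q, R: intersect(P, trisector(P, Q, R),
                                   Q, trisector(Q, P, R))
    return pt(A, B, C), pt(B, C, A), pt(C, A, B)
```

For any (non-degenerate) choice of vertices, the three side lengths agree to machine precision, as the theorem predicts.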
I don't know that this theorem is counterintuitive, exactly. I have no intuition about it one way or the other. – Qiaochu Yuan Sep 18 '10 at 19:42
Yeah, maybe not counterintuitive, except this way -- my "intuition" is also "no intuition" -- this construction should produce nothing in particular, especially since you start with an arbitrary triangle. Or maybe it should produce something related to the original triangle. That it comes out equilateral no matter what kind of triangle you started with is a surprise. That does seem kinda counterintuitive. – David Lewis Feb 2 '12 at 22:46
Now that Wiles has done the job, I think that Fermat's Last Theorem may suffice. I find it a bit surprising still.
Why? It's easy to prove that no cube can be written as a sum of two cubes, and similarly for fourth powers, and "intuitively", having two that add up to another seems even harder for higher powers... – ShreevatsaR Jul 28 '10 at 6:14
@ShreevatsaR, how easy is it to prove that no cube can be written as the sum of two cubes? I only know one proof and although it is not difficult it takes a few pages of detailed algebra (not the sort of thing one could do in their head). – anon Sep 12 '10 at 8:32
@muad: Yes, I meant one of the usual proofs, a page or two long… the n=4 is easier. Anyway, I didn't mean to comment on the length of the proof; was just saying that Fermat's Last Theorem isn't so surprising because it only asserts the impossibility of something we have no strong reason for imagining possible. But of course, I realise that "surprisingness" is subjective, and no doubt my experience is coloured by having heard of it as a theorem in the first place. :-) – ShreevatsaR Sep 12 '10 at 19:17
Number theory is filled with theorems that look aesthetically just like Fermat's Last Theorem, so the theorem itself is not that surprising. What is surprising, to me, is that it's so much more difficult to prove! – BlueRaja - Danny Pflughoeft Sep 6 '11 at 16:13
The rational numbers are both a continuum (between any two rationals you can find another rational) and countable (they can be lined up in correspondence with the positive integers).
Mathematicians missed it for hundreds (thousands?) of years, until Cantor.
Of course, the proof of that works both ways, and is equally surprising the other way -- there is a way to order the integers (or any countable set) that makes it into a continuum.
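The counting can be made completely explicit. The sketch below (Python) uses the Calkin–Wilf recurrence — a much later construction than Cantor's zig-zag, but only one line — which emits every positive rational exactly once:

```python
from fractions import Fraction

def rationals(count):
    """First `count` terms of the Calkin-Wilf sequence; every positive
    rational appears in it exactly once."""
    out, q = [], Fraction(1)
    for _ in range(count):
        out.append(q)
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)
    return out
```

The sequence begins 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ... — a single list running through all of the "continuum-like" dense set.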
It is disingenuous of you to use the word 'continuum' here! That refers to the real line, and has done for a long time. It's what the Continuum Hypothesis is about. – TonyK Sep 17 '10 at 14:25
Right -- not a continuum. I should have said "densely ordered". – David Lewis Sep 18 '10 at 16:31
We didn't know until Cantor came along that the rational numbers are countable? A way of counting them seems obvious in retrospect: write each rational number down with an extra 0 before each digit, and represent - and / with 1 and 2, so that, for example, -35/57 becomes 1030520507. – Tanner Swett Aug 26 '11 at 12:47
Um, what's 135/57 then? – Keenan Pepper Nov 15 '12 at 6:08
Goldbach's Conjecture.
Granted this is open, so it may be cheating a bit. However, this seems like a very hard problem to intuit your way to the "conventional wisdom". By contrast, `P != NP` is "obviously" true. Indeed, Feynman had trouble believing `P != NP` was even open.
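The conjecture is easy to probe by brute force — it has been verified far beyond what a casual script reaches; the Python sketch below just illustrates the check:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes; returns a table with sieve[k] == 1 iff k is prime."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sieve

def goldbach_holds_up_to(n_max):
    """True iff every even number in [4, n_max] is a sum of two primes."""
    is_prime = primes_up_to(n_max)
    return all(
        any(is_prime[p] and is_prime[n - p] for p in range(2, n // 2 + 1))
        for n in range(4, n_max + 1, 2)
    )
```

That such a check has never failed, while no proof is in sight, is part of what makes the conjecture hard to intuit.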
Thanks! Aaronson reporting being told by Levin himself of a conversation with Feynman is close enough, and pretty good as far as attributions to apocrypha go. :-) – ShreevatsaR Jul 29 '10 at 1:44
A very counterintuitive result is "The Lever of Mahomet". I saw this somewhere in the 4-volume set "The World of Mathematics" (ca. 1956). It works like this: imagine a flatbed rail car with a lever attached by a hinge such that the lever swings 180 degrees in a vertical arc, parallel to the tracks. We assume the lever is frictionless and is influenced only by gravity, inertia, and the acceleration of the rail car (assume other idealizations, like no wind). Now, the question is: given ANY predetermined motion of the railcar, both forward and backwards, however erratic (but continuous, physically realizable, and of finite duration), show that there exists an initial position of the lever such that it will not strike the bed at either extreme of travel during the prescribed motion. The solution invokes only the assumption of continuity.
Every set can be well-ordered. I once sat at the bar of my favourite place and ran into this girl I hadn't seen for years. I explained a bit about AC, Zorn's Lemma and the Well-Ordering principle and that they are all equivalent.
(On another occasion, my friend told me that if every set can be well-ordered I should tell him the successor of 0 in the real numbers; I answered that 1 is. He then argued that he meant the real numbers, and not the natural numbers. I told him that my well-order puts the natural numbers with the usual ordering first, then the rationals and then the irrationals. But I can shift a finite number of positions if he wants me to.)
Do axioms count as theorems? – Tanner Swett Aug 26 '11 at 13:01
I am working extensively in ZF, I never recalled one of the axioms being "Axiom of choice if and only if Well ordering principle if and only if Zorn's lemma". And formally? Yes. Axioms are indeed theorems. – Asaf Karagila Aug 26 '11 at 13:34
An infinite number of coaches, each containing an infinite number of people, can be accommodated at Hilbert's Grand Hotel. Visual demonstration here.
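One explicit room assignment makes the accommodation concrete: send seat $s$ of coach $c$ to room $2^c(2s+1)$. Since every positive integer is uniquely a power of two times an odd number, everyone gets a distinct room and every room is used. A sketch in Python:

```python
def room(coach, seat):
    """Seat `seat` of coach `coach` (both 0-based) -> a unique room >= 1.
    Uniqueness follows from the unique factorization n = 2^c * odd."""
    return 2 ** coach * (2 * seat + 1)
```

Enumerating enough coaches and seats reproduces every room number exactly once, which is the whole point of the paradox.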
I think this is one Feynman would object to on physical-possibility grounds ;) – Jamie Banks Jul 21 '10 at 6:32
Indeed, to cast it into the original story, Feynman might reply, “You said a hotel, so I assumed you meant a real hotel.” – Josh Lee Nov 26 '10 at 17:53
Feynman could play either way since it is not beyond Feynman's mathematical intuition. – timur Dec 24 '10 at 5:13
Here's my two cents worth.
We have three sets: $\mathbb{R}$, $\{x \in \mathbb{R}:x \in [0,1]\}$, and the Cantor (ternary) set. They all have the same "size", the same number of elements (cardinality), which is uncountably infinite, but they have measure $\infty$, $1$ and $0$ respectively. That is, for any arbitrary length I can find an uncountably infinite set with that measure.
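The middle-thirds arithmetic behind the Cantor set's measure zero is simple to compute: each construction step keeps $2/3$ of the remaining length while doubling the number of surviving intervals. A sketch in Python:

```python
from fractions import Fraction

def cantor_stage(n):
    """After n middle-third removals: (number of intervals, total length left).
    The length (2/3)^n tends to 0 even as the interval count 2^n explodes."""
    return 2 ** n, Fraction(2, 3) ** n
```

The interval endpoints alone already number $2^{n+1}$ at stage $n$ and all survive to the limit, hinting at how a measure-zero set can still be uncountable.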
P.S. I'm surprised no one mentioned Russell's paradox.
I see this example as measure theory rescuing my intuition from the set-theoretic notion of cardinality. I don't have anything against cardinality, but I do think it is more natural for a laymathematician to regard $[0,2]$ as twice as big as $[0,1]$. – Srivatsan Dec 21 '11 at 13:02
That depends on how the "twice as big" measurement is derived. Twice as "long", yes. Twice as many elements, no, since twice $\infty$ is still infinity. – Samuel Tan Dec 21 '11 at 13:06
Theorem: Any predicate which can be evaluated in polynomial time on a nondeterministic turing machine can be evaluated in polynomial time on a deterministic turing machine.
Or is asking an open question cheating? :)
That's a conjecture, not a theorem; and it's one that's largely thought to be false. – BlueRaja - Danny Pflughoeft Aug 4 '10 at 4:27
http://mathoverflow.net/revisions/38392/list
The main thing to visualize is the Hopf fibration of $S^2$, its suspensions, and their various compositions.
Let $f \colon S^3 \to S^2$ be the Hopf fibration.
When you suspend $f$ to get $g \colon S^4 \to S^3$, you effectively embed a 2-sphere as the equator of a 3-sphere and extend the mapping in parallel to 2-spheres of latitude. Thus away from the poles you still have circles as preimages.
You can see that $f$ and $g$ compose to give a map $h \colon S^4 \to S^2$. To get a sense of how this looks as a fibration, you can work backwards. First, the preimage of a point in $S^2$ under $f$ is a circle in $S^3$. As noted above, each pointwise preimage of this circle under the suspension $g$ is again generically a circle. When the different circles fit together cleanly, it looks like you get a torus fibration, where the tori twist and interlink within each latitudinal 3-sphere of $S^4$ analogously to the meshing of circles in $S^3$ for the Hopf fibration. If you now suspend this situation, you get a torus fibration over $S^3$ that looks like $h$ within each 2-sphere of latitude.
(I'm still not happy with this description but decided to post it in the hope it might spark some ideas.)
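For actually computing or plotting such pictures, the Hopf map has a completely explicit formula: writing $S^3 = \{(z_1, z_2) \in \mathbb{C}^2 : |z_1|^2 + |z_2|^2 = 1\}$, it sends $(z_1, z_2)$ to $(2\,\mathrm{Re}(z_1\bar z_2),\ 2\,\mathrm{Im}(z_1\bar z_2),\ |z_1|^2 - |z_2|^2)$. A sketch in Python checking that whole circles of $S^3$ collapse to single points of $S^2$:

```python
import cmath, math, random

def hopf(z1, z2):
    """The Hopf map S^3 -> S^2 in complex coordinates."""
    w = 2 * z1 * z2.conjugate()
    return (w.real, w.imag, abs(z1) ** 2 - abs(z2) ** 2)

# A random point of S^3.
random.seed(0)
a = complex(random.gauss(0, 1), random.gauss(0, 1))
b = complex(random.gauss(0, 1), random.gauss(0, 1))
norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
z1, z2 = a / norm, b / norm
base = hopf(z1, z2)
```

Multiplying $(z_1, z_2)$ by any common phase $e^{it}$ leaves the image fixed, which is exactly the circle fiber discussed above.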
http://advogato.org/person/ssp/diary/19.html | #### 17 Mar 2013 ssp»(Master)
Porter/Duff Compositing and Blend Modes
In the Porter/Duff compositing algebra, images are equipped with an alpha channel that determines on a per-pixel basis whether the image is there or not. When the alpha channel is 1, the image is fully there, when it is 0, the image isn’t there at all, and when it is in between, the image is partially there. In other words, the alpha channel describes the shape of the image, it does not describe opacity. The way to think of images with an alpha channel is as irregularly shaped pieces of cardboard, not as colored glass. Consider these two images:
When we combine them, each pixel of the result can be divided into four regions:
One region where only the source is present, one where only the destination is present, one where both are present, and one where neither is present.
By deciding on what happens in each of the four regions, various effects can be generated. For example, if the destination-only region is treated as blank, the source-only region is filled with the source color, and the ‘both’ region is filled with the destination color like this:
The effect is as if the destination image is trimmed to match the source image, and then held up in front of it:
The Porter/Duff operator that does this is called “Dest Atop”.
There are twelve of these operators, each one characterized by its behavior in the three regions: source, destination and both. The ‘neither’ region is always blank. The source and destination regions can either be blank or filled with the source or destination colors respectively.
The formula for the operators is a linear combination of the contents of the four regions, where the weights are the areas of each region:
$$A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both} \cdot [b]$$
Where $$[s]$$ is either 0 or the color of the source pixel, $$[d]$$ either 0 or the color of the destination pixel, and $$[b]$$ is either 0, the color of the source pixel, or the color of the destination pixel. With the alpha channel being interpreted as coverage, the areas are given by these formulas:
$$A_\text{src} = \alpha_\text{s} \cdot (1 - \alpha_\text{d})$$
$$A_\text{dest} = \alpha_\text{d} \cdot (1 - \alpha_\text{s})$$
$$A_\text{both} = \alpha_\text{s} \cdot \alpha_\text{d}$$
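As a quick sanity check on the coverage picture (a sketch of mine, not from the post), the three areas tile exactly the union of the two shapes:

```python
def areas(alpha_s, alpha_d):
    """Areas of the three visible regions under the coverage
    interpretation of the alpha channel."""
    a_src = alpha_s * (1 - alpha_d)   # source only
    a_dst = alpha_d * (1 - alpha_s)   # destination only
    a_both = alpha_s * alpha_d        # both present
    return a_src, a_dst, a_both

a_src, a_dst, a_both = areas(0.6, 0.25)
union = 0.6 + 0.25 - 0.6 * 0.25   # inclusion-exclusion for the union
assert abs(a_src + a_dst + a_both - union) < 1e-12
```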
The alpha channel of the result is computed in a similar way:
$$A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] + A_\text{both} \cdot [\text{ab}]$$
where $$[\text{as}]$$ and $$[\text{ad}]$$ are either 0 or 1 depending on whether the source and destination regions are present, and where $$[\text{ab}]$$ is 0 when the ‘both’ region is blank, and 1 otherwise.
Here is a table of all the Porter/Duff operators:
| Operator | $$[\text{s}]$$ | $$[\text{d}]$$ | $$[\text{b}]$$ |
|----------|----------------|----------------|----------------|
| Src | $$s$$ | $$0$$ | $$s$$ |
| Atop | $$0$$ | $$d$$ | $$s$$ |
| Over | $$s$$ | $$d$$ | $$s$$ |
| In | $$0$$ | $$0$$ | $$s$$ |
| Out | $$s$$ | $$0$$ | $$0$$ |
| Dest | $$0$$ | $$d$$ | $$d$$ |
| DestAtop | $$s$$ | $$0$$ | $$d$$ |
| DestOver | $$s$$ | $$d$$ | $$d$$ |
| DestIn | $$0$$ | $$0$$ | $$d$$ |
| DestOut | $$0$$ | $$d$$ | $$0$$ |
| Clear | $$0$$ | $$0$$ | $$0$$ |
| Xor | $$s$$ | $$d$$ | $$0$$ |
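The table translates directly into code. Here is a single-pixel, single-channel sketch (my own transcription of the table, not code from the post; a real implementation would run this per channel over whole images):

```python
# Transcription of the operator table: for each operator, what fills
# the source-only, destination-only and 'both' regions ('s', 'd' or 0).
PORTER_DUFF = {
    "Src":      ("s", 0,   "s"),
    "Atop":     (0,   "d", "s"),
    "Over":     ("s", "d", "s"),
    "In":       (0,   0,   "s"),
    "Out":      ("s", 0,   0),
    "Dest":     (0,   "d", "d"),
    "DestAtop": ("s", 0,   "d"),
    "DestOver": ("s", "d", "d"),
    "DestIn":   (0,   0,   "d"),
    "DestOut":  (0,   "d", 0),
    "Clear":    (0,   0,   0),
    "Xor":      ("s", "d", 0),
}

def composite(op, s, alpha_s, d, alpha_d):
    """Area-weighted average for one pixel and one channel.
    Returns (color, alpha); the color comes out coverage-weighted,
    exactly as in the formulas above."""
    a_src = alpha_s * (1 - alpha_d)
    a_dst = alpha_d * (1 - alpha_s)
    a_both = alpha_s * alpha_d
    cs, cd, cb = PORTER_DUFF[op]
    pick = {"s": s, "d": d, 0: 0.0}
    color = a_src * pick[cs] + a_dst * pick[cd] + a_both * pick[cb]
    alpha = (a_src * (0 if cs == 0 else 1)
             + a_dst * (0 if cd == 0 else 1)
             + a_both * (0 if cb == 0 else 1))
    return color, alpha

# A half-covering white source 'Over' an opaque black destination:
assert composite("Over", 1.0, 0.5, 0.0, 1.0) == (0.5, 1.0)
```

Note that "Xor" of two fully opaque pixels yields alpha 0: the 'both' region is blank, and it covers everything.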
And here is how they look:
Despite being referred to as alpha blending and despite alpha often being used to model opacity, in concept Porter/Duff is not a way to blend the source and destination shapes. It is a way to overlay, combine and trim them as if they were pieces of cardboard. The only places where source and destination pixels are actually blended is where the antialiased edges meet.
Blending
Photoshop and the Gimp have a concept of layers which are images stacked on top of each other. In Porter/Duff, stacking images on top of each other is done with the “Over” operator, which is also what Photoshop/Gimp use by default to composite layers:
Conceptually, two pieces of cardboard are held up with one in front of the other. Neither shape is trimmed, and in places where both are present, only the top layer is visible.
A layer in these programs also has an associated Blend Mode which can be used to modify what happens in places where both are visible. For example, the ‘Color Dodge’ blend mode computes a mix of source and destination according to this formula:
$$B(s,d)=
\begin{cases}
0 & \text{if } d = 0, \\
1 & \text{if } d \ge 1 - s, \\
d / (1 - s) & \text{otherwise}
\end{cases}$$
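In code the blend formula is just a piecewise function (channel values taken in the range 0 to 1; the function name is mine):

```python
def color_dodge(s, d):
    """The 'Color Dodge' blend formula B(s, d) from above."""
    if d == 0:
        return 0.0
    if d >= 1 - s:
        return 1.0
    return d / (1 - s)

assert color_dodge(0.0, 0.5) == 0.5   # a black source leaves d unchanged
assert color_dodge(0.9, 0.5) == 1.0   # a bright source blows out to white
```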
The result is this:
Unlike with the regular Over operator, in this case there is a substantial chunk of the output where the result is actually a mix of the source and destination.
Layers in Photoshop and Gimp are not tailored to each other (except for layer masks, which we will ignore here), so the compositing of the layer stack is done with the source-only and destination-only region set to source and destination respectively. However, there is nothing in principle stopping us from setting the source-only and destination-only regions to blank, but keeping the blend mode in the ‘both’ region, so that tailoring could be supported alongside blending. For example, we could set the ‘source’ region to blank, the ‘destination’ region to the destination color, and the ‘both’ region to ColorDodge:
Here are the four combinations that involve a ColorDodge blend mode:
In this model the original twelve Porter/Duff operators can be viewed as the results of three simple blend modes:
Source: $$B(s, d) = s$$
Dest: $$B(s, d) = d$$
Zero: $$B(s, d) = 0$$
In this generalization of Porter/Duff the blend mode is chosen from a large set of formulas, and each formula gives rise to four new compositing operators characterized by whether the source and destination are blank or contain the corresponding pixel color.
Here is a table of the operators that are generated by various blend modes:
The general formula is still an area weighted average:
$$A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both}\cdot B(s, d)$$
where [s] and [d] are the source and destination colors respectively or 0, but where $$B(s, d)$$ is no longer restricted to one of $$0$$, $$s$$, and $$d$$, but can instead be chosen from a large set of formulas.
The output of the alpha channel is the same as before:
$$A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] + A_\text{both} \cdot [\text{ab}]$$
except that [ab] is now determined by the blend mode. For the Zero blend mode there is no coverage in the both region, so [ab] is 0; for most others, there is full coverage, so [ab] is 1.
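Putting the generalization into the same single-pixel sketch form (again my own code; the presence flags choose blank vs. filled outer regions):

```python
def blend_composite(B, src_present, dst_present, s, alpha_s, d, alpha_d,
                    both_covered=True):
    """Generalized Porter/Duff for one pixel and channel: the 'both'
    region is filled with B(s, d); both_covered is [ab], which is
    False only for the Zero blend mode."""
    a_src = alpha_s * (1 - alpha_d)
    a_dst = alpha_d * (1 - alpha_s)
    a_both = alpha_s * alpha_d
    color = (a_src * (s if src_present else 0.0)
             + a_dst * (d if dst_present else 0.0)
             + a_both * B(s, d))
    alpha = (a_src * (1 if src_present else 0)
             + a_dst * (1 if dst_present else 0)
             + a_both * (1 if both_covered else 0))
    return color, alpha

# B(s, d) = s with both outer regions present recovers classic Over:
assert blend_composite(lambda s, d: s, True, True, 1.0, 0.5, 0.0, 1.0) == (0.5, 1.0)
```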
Syndicated 2013-03-17 18:50:24 (Updated 2013-03-25 13:06:40) from Søren Sandmann Pedersen
http://physics.stackexchange.com/questions/24495/metric-signature-explanation?answertab=votes | # metric signature explanation
Can anyone explain what metric signature is?
I have a basic knowledge regarding tensors, btw.
Also, how is it related to fundamental understanding of general relativity?
Thanks.
Do you know what the metric tensor is, or should we explain that before explaining what its signature means? – John Rennie Apr 27 '12 at 15:00
@JohnRennie I sort of know what the metric tensor is, but if metric tensor is explained at least briefly, I will appreciate :) – user27515 Apr 27 '12 at 15:03
## 1 Answer
In relativity there is an invariant called the proper time, $\tau$. It's an invariant in the sense that all observers will agree on its value. In special relativity the proper time is defined as:
$$d\tau^2 = ds^2 = c^2dt^2 - dx^2 - dy^2 - dz^2$$ or $$d\tau^2 = -ds^2 = -c^2dt^2 + dx^2 + dy^2 + dz^2$$
You see both sign conventions and I've never been sure which is more generally accepted. Anyhow, you can write the equation for $ds^2$ as a matrix equation using:
$$ds^2 = g_{\alpha\beta}\,dx^\alpha dx^\beta$$

where $dx$ is the vector $(c\,dt, dx, dy, dz)$ and $g$ is the matrix:
$$\left( \begin{matrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix}\right)$$
The matrix is called the metric tensor (or a representation of it) and the signature is the number of positive and negative values on the leading diagonal. In this case it's (1,3) or you often just add together the negative and positive numbers to give, in this case, just 2.
Exactly the same equation is used in general relativity, but the matrix representing the metric tensor is more complicated and generally not diagonal, so you have to diagonalise it to calculate the signature. The Wikipedia article goes into this in more detail.
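As a concrete illustration (my addition, not from the original answer): when all leading principal minors of the matrix are nonzero, Jacobi's criterion lets you read off the signature without computing eigenvalues, because the number of negative eigenvalues equals the number of sign changes in the sequence 1, D1, ..., Dn of leading principal minors. A plain-Python sketch:

```python
def det(m):
    """Determinant by Laplace expansion (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def signature(g):
    """(positive, negative) eigenvalue counts of a symmetric matrix
    via Jacobi's criterion: the negative count is the number of sign
    changes in 1, D1, ..., Dn (leading principal minors), assuming
    all of them are nonzero."""
    n = len(g)
    minors = [1] + [det([row[:k] for row in g[:k]]) for k in range(1, n + 1)]
    neg = sum(1 for a, b in zip(minors, minors[1:]) if a * b < 0)
    return n - neg, neg

minkowski = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
assert signature(minkowski) == (3, 1)         # i.e. the (1,3) split above
assert signature([[1, 2], [2, 1]]) == (1, 1)  # non-diagonal: eigenvalues 3, -1
```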
The reason we're interested in the signature is that we expect spacetime to have one timelike co-ordinate and three spacelike co-ordinates, so we expect the signature to be always (1,3).
You missed a $d$ in $c^2dt^2$. But more importantly: I think it's pretty universal that the sign of proper time matches the sign of coordinate time. $c^2d\tau^2 = c^2dt^2-dx^2-dy^2-dz^2$. The issue with the metric signature is just about whether $c^2d\tau^2 = \pm ds^2$. In one signature, the Minkowski metric tensor has diagonal elements $-1,1,1,1$ and in the other signature it has diagonal elements $1,-1,-1,-1$. The latter is more common in particle physics, the former in GR and fundamental QFT, I think. – David Zaslavsky♦ Apr 27 '12 at 23:12
You say often just add the numbers to signature of 2 but I have never before heard the signature of Minkowski spacetime referred to as 2. I have always seen the (1,3) notation (or -,+,+,+ or +,-,-,-). It doesn't make sense to add the diagonal elements and say the signature is 2. A flat 2 dimensional plane space would also have signature 2 by that method which certainly is not the same as (1,3). – FrankH Mar 19 at 19:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9398062229156494, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/tagged/continued-fractions+dirichlet-series | # Tagged Questions
### How to simplify $\newcommand{\bigk}{\mathop{\vcenter{\hbox{K}}}}\prod_{p\in\mathbb{P}}\left(1+\bigk_{k=1}^{\infty }\frac{f_k(s)}{g_k(s)}\right)^{-1}$
I'd like to simplify $$\newcommand{\bigk}{\mathop{\huge\vcenter{\hbox{K}}}}B(s)=\prod_{p\in\mathbb{P}}\left(1+\bigk_{k=1}^{\infty }\frac{f_{k}(s)}{f_{k}(s)}\right)^{-1}$$ to something of the form ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9261509776115417, "perplexity_flag": "middle"} |
http://mathhelpforum.com/business-math/205113-npv-automobile-dealer.html | # Thread:
1. ## NPV of an Automobile Dealer
See figure attached for the question.
I have concerns regarding the way the question is worded.
Does the company incur Revenue/O&M/Savings at the end of the 1st year (i.e. immediately after the building is constructed) or at the end of the 2nd year?
When I read the question I thought the Revenue/O&M/Savings would all begin to occur at the end of the 2nd year.
Which is correct and why?
Also, attached is my work for the NPV and RORC assuming the Revenue/O&M/Savings occur at the end of the 1st year.
Could someone please clarify? I want to make sure I am able to extract the information from the questions correctly!
$\text{For Reference: }$
$P = \text{Present Worth}$
$F = \text{Future Worth}$
$A = \text{Annuity}$
$\text{Example of using the mnemonic expressions is below: }$
$(\frac{A}{P}, i\%,N) \equiv \text{a constant that, when multiplied by the present worth, gives the equivalent annuity over N periods given a discount rate of i}\%$
$\text{Similarly for the other mnemonic expressions.}$
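The factors in the mnemonic notation are simple closed forms. A quick sketch (the function names are mine; this assumes the usual discrete end-of-period compounding):

```python
def a_over_p(i, n):
    """(A/P, i%, N): capital recovery factor. Multiplying a present
    worth by this gives the equivalent annuity over N periods."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def p_over_f(i, n):
    """(P/F, i%, N): single-payment present worth factor."""
    return 1 / (1 + i) ** n

# Standard interest-table value: (A/P, 10%, 5) = 0.2638
assert abs(a_over_p(0.10, 5) - 0.2638) < 1e-4
# Sanity check: over one period, recovering P costs P*(1+i)
assert abs(a_over_p(0.10, 1) - 1.10) < 1e-10
```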
http://physics.stackexchange.com/questions/tagged/lagrangian-formalism%20energy-conservation | Tagged Questions
Classical Mechanics: A particle move in one dimension under the influence of two springs [closed]
A particle of mass $m$ can move in one dimension under the influence of two springs connected to fixed points a distance $a$ apart (see figure). The springs obey Hooke’s law and have zero unstretched ...
Can a force in an explicitly time dependent classical system be conservative?
If I consider equations of motion derived from the principle of least action for an explicitly time dependent Lagrangian $$\delta S[L[q(\text{t}),q'(\text{t}),{\bf t}]]=0,$$ under what ...
Is there a valid Lagrangian formulation for all classical systems?
Can one use the Lagrangian formalism for all classical systems, i.e. systems with a set of trajectories $\vec{x}_i(t)$ describing paths? On the wikipedia page of Lagrangian mechanics, there is an ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8297730684280396, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/104962/maximum-number-of-generators-of-a-curve-in-mathbbp3 | Maximum number of generators of a curve in $\mathbb{P}^3$
Let $H_{d,g}$ denote the Hilbert scheme of curves of degree $d$ and genus $g$ in $\mathbb{P}^3$ that are local complete intersections. Given a curve $C \in H_{d,g}$, denote by $S(C)$ a minimal set of generators of the ideal sheaf $I(C)$ (minimal in the sense that no proper subset of $S(C)$ generates $I(C)$ as an ideal).
For a fixed integer $e>0$, is there a way of computing the maximum number of elements in $S(C)$ of degree $e$ i.e. if we impose the natural grading (by the degree of polynomial) on $S(C)$ then what is the maximal cardinality of $S(C)_e$ (where $S(C)_e$ denotes the $e$-th graded part of $S(C)$)?
Did you already look at Gruson-Lazarsfeld-Peskine, as suggested to you previously? – Jason Starr Aug 18 at 13:26
@Starr: Which result are you referring to? Is it the one that bounds the highest degree of the polynomial defining a curve? – Naga Venkata Aug 18 at 17:38
"The" minimal set of generators is not well-defined. – Qiaochu Yuan Aug 18 at 21:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8835810422897339, "perplexity_flag": "head"} |
http://physics.stackexchange.com/questions/43164/if-1-screw-can-support-120-lbs-how-much-weight-can-25-screws-support/43179 | # If 1 screw can support 120 lbs, how much weight can 25 screws support?
While this is mainly about my personal home improvement project, I figure this would be a good opportunity for some of you to apply your physics skills to a real-world situation.
My situation is this: I am hanging 500 lbs of drywall (two layers weighing about 250 lbs each) on 5 rows of metal furring channels (hat channels) which are mounted onto RSIC (sound insulation clips) which are screwed into studs with drywall screws. There are 25 total clip assemblies across all 5 rows of channels. This wall is suspended (ie, mounted solely on channels and none of the edges touch the walls, floor, or ceiling). Here's a diagram of a single clip assembly:
Edit 11/1/2012 4:56AM EDT
Below is a diagram of my actual clip assembly array. The wall is 13' 10" long and 8' tall. The vertical black lines represent the studs and are spaced apart at about 16" (it's not exact because I had to add a couple studs and the frame joins up with another frame at one point in the wall), the red dots are the clip assemblies, and the horizontal grey lines are the hat channels that are mounted onto the clips. Ignore the green dots and yellow arrows.
The problem: Since drywall screws are not as strong as wood screws, which I should have used, I am worried that the 25 screw/clip assemblies may not be strong enough to bear the weight of 500 lbs. To test the strength of a single screw, I mounted a single clip into a dummy stud and ended up being able to hang 120 lbs of dumbbells on it. This mock assembly has been up for over a month and shows no signs of being on the verge of snapping. I could probably hang another 30 lbs on it before it gave way.
So, using simple math, it seems I would be able to say, "Well, if one screw can support 120 lbs then 25 screws can support 3,000 lbs!" Of course, I am sure the math is not so simple. I am sure there is a curve there somewhere where adding more screws certainly allows for greater weight capacity, but the weight capacity deteriorates the more screws and more weight you add even if the number of screws and amount of weight was proportionate.
So is it really as easy as using simple math to solve this problem or does it require something more advanced?
BTW, I just took stabs at tags here. I am hoping this is an actual physics question and that it will not get deleted. I've already posted a similar question as this on the SE DIY site, but I think it's not so much related to handiwork as it is to physics.
I assume that your test setup included clip and rail and all, or is that an untested possible point of failure? In any case, I imagine the problem is indeterminate without knowing the full geometry: how are the screws placed in relation to one another and to the sheets of drywall; how many channels on each sheet and in what geometry; and so on. With enough data it becomes a basic statics problem suitable for freshman physics or engineering students to find the screws with the maximum load on them. – dmckee♦ Nov 1 '12 at 0:46
The problem is that it's hard to be sure the model fits reality. If you could ensure that the load perfectly distributed on all the screws then it's just 25*120lb. If the weight is all on one tightened screw which then fails dropping the load onto the next screw and so on - then it can support about 120lbs. Quite a few large buildings have failed on not understanding this! – Martin Beckett Nov 1 '12 at 1:51
Thanks for the comments. Updated my answer with a diagram of my entire clip assembly. – oscilatingcretin Nov 1 '12 at 9:03
So, Martin, you're saying that, assuming the clips are spaced in such a way that the weight is evenly distributed, calculating how much weight these 25 screws can collectively support is as easy as taking the weight a single screw can support and multiplying that by the number of screws in the assembly? – oscilatingcretin Nov 1 '12 at 10:14
FYI while this isn't the type of question we particularly want to encourage on the site, it isn't off topic either. – David Zaslavsky♦ Nov 1 '12 at 17:36
## 1 Answer
As others have pointed out in the comments, it is not really trivial to apply any model to reality, especially since we don’t know much about reality. However, we can make a few educated guesses and estimations and see how well everything fits together.
First, assuming perfect weight distribution, we see that five screws would probably be sufficient to support your wall. And while the assumption is probably wrong, we can be relatively sure that a safety margin of 500% is a good start.
Second, we can look at how much a single screw has to support according to your diagram. An upper boundary for the area of the wall supported by a single screw seems to be four ‘rectangles’, corresponding to about $$\frac{4}{44} \approx 0.09 \hat{=} 45.5\textrm{ lb }\hat{=} \frac{1}{3}\textrm{ screwweight}_{\textrm{max}}\quad.$$ That still looks rather good, doesn’t it?
Third, we could check if there are any screws that, if they are removed, leave another screw with many more ‘neighbouring’ tiles. As far as I can see, the maximum would still be about six (corresponding to approx. $\frac{1}{2}\textrm{ screwweight}_{\textrm{max}}$).
I would hence tend to say that you are fine, but there are many, many problems that could possibly arise (not to mention that I am not a construction engineer and roughly followed http://xkcd.com/793/).
• It appears that you are building some sort of sound-proofing. Not to take into account possible issues with vibrations (and hence faster wear of the screws) seems silly.
• Depending on how and where you fix things to the wall, you might have to deal with ugly resonances, both between the two walls and within the drywall. Without knowing the speed of sound in drywalls, it is difficult to make any estimates here.
• Everything else I didn’t think of.
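For what it's worth, the bare arithmetic behind the estimates above (my own check; it assumes the load really is shared across the screws):

```python
total_weight = 500.0      # lbs of drywall
n_screws = 25
tested_capacity = 120.0   # lbs held by the single-screw mock-up

# Naive equal-share estimate:
per_screw = total_weight / n_screws
assert per_screw == 20.0                   # 500 / 25
assert tested_capacity / per_screw == 6.0  # ~6x margin if the load is even

# The 'worst screw' estimate from the answer: 4 of the 44 tiles
worst_case = 4 / 44 * total_weight
assert abs(worst_case - 45.45) < 0.1       # the ~45.5 lb figure
assert worst_case < tested_capacity / 2    # still comfortably inside
```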
Yes, there are more things to consider like sudden structural settling. I can take a hammer and snap a drywall screw if I whack it really hard whereas a wood screw would just bend. My question assumes total structural stability. Being that I am no physicist, what does your equation/expression/whatever it is equate to? What's it saying in plain, layman's English terms? – oscilatingcretin Nov 1 '12 at 11:37
Oh, I was just relating the area of the drywall (made up of 44 rectangles) to those closest to one particular screw (4). Then the weight of the wall closest to a particular screw is $\frac{4}{44} \times \textrm{ total mass of wall}$. This happens to be $45.5\textrm{ lbs}$, which is equal to $\frac{1}{3}$ of the weight a single screw can support. – Claudius Nov 1 '12 at 13:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9632859826087952, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/108439/reference-needed-for-negative-curves-on-blowup-of-the-projective-plane-at-generic/108445 | ## Reference needed for negative curves on blowup of the projective plane at generic points
Hello,
I usually see the following theorem without seeing any reference or explanation
Theorem: Let $X$ be the blowup of $P^2$ at $r$ generic points. If $r\leq 8$ then there are only finitely many (-1) curves on $X$, while if $r\geq 9$ there are infinitely many of them.
This is a little mysterious to me. I imagine the proof would be finding a family of curves in $P^2$ containing these points which are centers of the blowup, so that the multiplicities of those curves at these points are large enough to make the strict transforms of these curves (-1) curves. However, I don't know what family should be chosen. Can someone give a proof or a reference containing a proof of this fact?
Does Hartshorne Chapter V, Exercise 4.15(e) count? – Will Sawin Sep 30 at 5:35
## 4 Answers
It is a result of Nagata but it is easy to explain (at least to see the difference between $r\le 8$ and $r\ge 9$):
The Picard group of the blow-up is generated by $L$, the pull-back of a general line, and $E_1,...,E_r$ the exceptional curves. Any curve distinct from the $E_i$'s is linearly equivalent to $C=dL-\sum a_i E_i$ for some integers $d,a_1,\dots,a_r$ with $d>0$, which is the degree of the image of the curve in $\mathbb{P}^2$. The integer $a_i=E_i\cdot C$ is the multiplicity of this curve at the $i$-th blown-up point and is therefore non-negative.
If $C$ is a $(-1)$-curve, we have $C^2=-1$ and $C\cdot C+C\cdot K_S=-2$ (adjunction formula) so
$\sum a_i=3d-1$
$\sum (a_i)^2=d^2+1$
By Cauchy-Schwarz we have $(\sum a_i)^2\le r\sum (a_i^2)$, which yields $9d^2-6d+1\le rd^2+r$. For $r\le 8$, we find thus only finitely many possibilities for $d$ ($d\le 6$), hence only finitely many $(-1)$-curves, and for $r\ge 9$ we get infinitely many solutions.
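As a quick computational check (my addition, not in the original answer), one can count the integer solutions of the two numerical conditions degree by degree. For $r=8$ the counts for $d=1,\dots,7$ come out to $28, 56, 56, 56, 28, 8, 0$; together with the $8$ exceptional curves this recovers the familiar $240$ $(-1)$-classes on the blow-up of $8$ general points:

```python
from functools import lru_cache

def count_classes(r, d):
    """Count ordered tuples (a_1, ..., a_r) of non-negative integers
    with sum a_i = 3d - 1 and sum a_i^2 = d^2 + 1, i.e. the numerical
    conditions for a (-1)-class displayed above."""
    @lru_cache(maxsize=None)
    def go(i, s, q):
        if i == r:
            return 1 if s == 0 and q == 0 else 0
        return sum(go(i + 1, s - a, q - a * a)
                   for a in range(min(d, s) + 1) if a * a <= q)
    return go(0, 3 * d - 1, d * d + 1)

per_degree = {d: count_classes(8, d) for d in range(1, 8)}
assert per_degree == {1: 28, 2: 56, 3: 56, 4: 56, 5: 28, 6: 8, 7: 0}
assert sum(per_degree.values()) + 8 == 240   # + the exceptional E_i
```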
It remains to see that the set of infinitely many numeric solutions gives infinitely many curves in the case $r\ge 9$. By a dimension count, we find that $|C|$ is not empty for each numeric solution. The fact that the points are in general position "should" imply that $|C|$ contains in fact an irreducible element. (This last part is the only one where something has to be checked; it seems natural but is not so easy, see the Nagata and Harbourne-Hirschowitz conjectures.)
First consider the case when the $9$ points are the base locus of a pencil of cubics.
Take the family of cubics through these $9$ points. On the blow up they form a basepoint-free linear system that defines an elliptic fibration. The exceptional curves give sections, and you can use the group structure on the elliptic curves to translate these curves. Any translate of any exceptional curve is another $(-1)$-curve and clearly there are infinitely many of them.
Now consider a family of $\mathbb P^2$'s and move the above special $9$ points into general position and consider the blown-up family of surfaces. According to Kodaira, exceptional subvarieties are stable, so all the nearby members of the (blown-up) family have to have infinitely many $(-1)$-curves.
This means that the statement is true in an open neighbourhood of the $9$ points in $\mathbb P^2$.
@Sandor : Unless I'm mistaken, through nine general points, there will be only one cubic (for instance, because the space of cubics is 9-dimensional). The construction you give applies for particular sets of 9 points : the base loci of pencils of cubics. – Olivier Benoist Sep 30 at 6:06
Olivier, you're right, I didn't read the question carefully – Sándor Kovács Sep 30 at 7:03
...but I think the above deformation argument fixes it. – Sándor Kovács Sep 30 at 14:37
Jim: I'm not sure I understand your comment, in particular, the statement that there are only 3 points. The exceptional curves give 9 points on the generic fibre of the fibration. Choosing one of them as the origin, the others generate a subgroup of the group of k(P^1)-rational points on that curve. Unless the points are in very special position (in fact, unless they are the Hesse configuration), the group they generate will be infinite – Artie Prendergast-Smith Oct 1 at 13:26
@Artie: I meant "three-torsion" not "three torsion". They are the nine 3-torsion points and they are closed under multiplication. – Jim Bryan Oct 1 at 21:07
This result is exactly Theorem 4a of Nagata's paper "On rational surfaces, II". Nagata's terminology and notations are a little bit old-fashioned, but it is still very legible !
You already have plenty of references, but let me give you another one.
The standard quadratic transformation is the birational map $\sigma: \mathbf{P}^2 \dashrightarrow \mathbf{P}^2$ defined by
$$\sigma: [x_0, x_1, x_2] \rightarrow [x_1x_2, x_0x_2, x_0x_1].$$
This map fails to be defined at the three coordinate points; in the classical terminology, one says it is based at those points. Blowing up the plane at the three coordinate points, $\sigma$ then lifts to an automorphism of the blowup; moreover, conjugating $\sigma$ by elements of $PGL(3)$, one gets an automorphism of the blowup of $\mathbf{P}^2$ at any triple of (non-collinear) points.
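As a small concrete check (my addition), applying $\sigma$ twice multiplies the coordinates by $x_0x_1x_2$, which is projectively trivial, so $\sigma$ is an involution away from the coordinate triangle:

```python
from math import gcd

def cremona(p):
    """The standard quadratic transformation on integer homogeneous
    coordinates (undefined at the three coordinate points)."""
    x0, x1, x2 = p
    return (x1 * x2, x0 * x2, x0 * x1)

def normalize(p):
    """Divide out the common factor to compare projective points."""
    g = gcd(gcd(abs(p[0]), abs(p[1])), abs(p[2]))
    return tuple(c // g for c in p)

p = (2, 3, 5)
assert cremona(p) == (15, 10, 6)
assert normalize(cremona(cremona(p))) == p   # sigma squared is the identity
```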
The idea is then to compose maps of this kind (based at different triples of points) to produce $(-1)$ curves on the blowup of $\mathbf{P}^2$ whose degree downstairs is arbitrarily large. That is, start with any $(-1)$-curve $C$ you like, for example the proper transform of the line through two of your points. Then apply a sequence of Cremona transformations such that the successive images of $C$ are curves whose degree gets larger at every step.
But how do we know such a sequence exists? This can be translated into a problem about root systems and Weyl groups. The key point turns out to be that for the blowup of $r \leq 8$ points the root system has finite Weyl group, whereas for $r=9$ the group is $W\left(\tilde{E}_8\right)$, which is infinite.
For details about the last paragraph, the source I know is the paper "Weyl Groups and Cremona Transformations", by Dolgachev.
-
This is a nice description, but one has to take care: in general there is no automorphism on the blow-up of $r\ge 9$ pts. The application of the quadratic maps is good, but it could move the points in a bad way. In fact, applying an element of the Weyl group to a $(-1)$-curve gives an element $C$ of the Picard group, which satisfies $C^2=-1$ and $CK=-1$. In most cases, it should yield another $(-1)$-curve, but it could also happen that the divisor is not irreducible. Anyway, for most elements it works, by the result of Nagata. – Jérémy Blanc Sep 30 at 18:06
Hi Jérémy, yes, I glossed over some subtleties in the answer. One should choose the points to be in Cremona general position, meaning that they are in linear general position, and remain so after any sequence of Cremona transformations. Also, as you say, the Cremona transformations are not automorphisms of a fixed blowup. – Artie Prendergast-Smith Sep 30 at 19:19
http://mathematica.stackexchange.com/questions/21484/how-to-solve-this-system-of-equations/21486

# How to solve this system of equations?
I want to solve the following system of equations:
$$\begin{cases} 1+\sqrt{2 x+y+1}=4 (2 x+y)^2+\sqrt{6 x+3 y},\\ (x+1) \sqrt{2 x^2-x+4}+8 x^2+4 x y=4. \end{cases}$$

I tried
````
Reduce[{ 1 + Sqrt[2 x + y + 1] == 4 (2 x + y)^2 + Sqrt[6 x + 3 y],
(x + 1) Sqrt[2 x^2 - x + 4] + 8 x^2 + 4 x y == 4 },
{x, y}, Reals]
````
I also tried with `Solve` and `NSolve`, but the evaluation exhausted my patience. I know the given system has $\left(\frac{1}{2},-\frac{1}{2}\right)$ as its only real solution. How do I tell Mathematica to get that solution?
-
Did you try it in a new session? I get the result instantly – rm -rf♦ Mar 16 at 16:07
@rm-rf I think the problem is only visible in version < 9. – Jens Mar 16 at 16:11
In earlier versions one can do `NSolve[system,vars]` and postprocess to remove the complex-valued solutions. – Daniel Lichtblau Mar 16 at 20:49
## 3 Answers
This is something that differs dramatically between Mathematica version 8 and 9. In 8, it is indeed impossibly slow. In version 9, the result is returned instantly.
If you know there's only one solution you need, then it's often a good idea to not use `Reduce` because it will try to find all solutions and therefore spend a lot of time looking for a proof that it found them all. Instead, just do this:
````
FindInstance[{1 + Sqrt[2 x + y + 1] ==
4 (2 x + y)^2 + Sqrt[6 x + 3 y], (x + 1) Sqrt[2 x^2 - x + 4] +
8 x^2 + 4 x y == 4}, {x, y}, Reals]
(* ==> {{x -> 1/2, y -> -(1/2)}} *)
````
This is usually much faster than `Reduce` or `Solve`.
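As a cross-check outside Mathematica, one can confirm numerically that $(1/2, -1/2)$ satisfies both equations; a small Python sketch using only the standard library:

```python
from math import sqrt, isclose

def residuals(x, y):
    """Left-hand side minus right-hand side for each equation of the system."""
    r1 = 1 + sqrt(2*x + y + 1) - 4*(2*x + y)**2 - sqrt(6*x + 3*y)
    r2 = (x + 1)*sqrt(2*x**2 - x + 4) + 8*x**2 + 4*x*y - 4
    return r1, r2

r1, r2 = residuals(0.5, -0.5)
assert isclose(r1, 0, abs_tol=1e-9) and isclose(r2, 0, abs_tol=1e-9)
```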
-
Confirming that `Mathematica 9` can easily solve this system, unlike `ver. 7 & 8`, I'm going to suggest how to deal with it in earlier versions.

1. There are many complex solutions, so restricting the domain to `Reals` is important for getting the single real solution.

2. Since the system appears to be difficult for `Mathematica 7 & 8`, one should consider a simple transformation of the original variables.

There are terms `2 x + y, 6 x + 3 y, 8 x^2 + 4 x y`, so one can conclude it should be a good idea to introduce a new variable `z == 2x + y`. Now we have:
````
system1 = { 1 + Sqrt[2 x + y + 1] == 4 (2 x + y)^2 + Sqrt[6 x + 3 y],
(x + 1) Sqrt[2 x^2 - x + 4] + 8 x^2 + 4 x y == 4} /. {y -> z - 2 x}//Simplify
````
````
{ 1 + Sqrt[1 + z] == Sqrt[3] Sqrt[z] + 4 z^2,
8 x^2 + (1 + x) Sqrt[4 - x + 2 x^2] + 4 x (-2 x + z) == 4}
````
and this system appears to be much easier to solve:
````
Reduce[system1, {x}, Reals]
````
````
z == 1/2 && x == 1/2
````
We can just as well use `Reduce[system1, {z, x}, Reals]` to get the solutions instantly, unlike `Reduce[system1, {x, z}, Reals]`, i.e. when the specified variables are given in the reversed order.

We can also use `Solve`, eliminating one variable:
````
Solve[system1, {x}, {z}, Reals]
Solve[system1, {z}, {x}, Reals]
````
````
{{x -> 1/2}}
{{z -> 1/2}}
````
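The payoff of the substitution is that the first equation involves only $z$, so even a generic one-dimensional root-finder handles it. A hedged Python sketch (plain bisection, standard library only) on $g(z) = 1 + \sqrt{1+z} - \sqrt{3z} - 4z^2$, which is decreasing for $z > 0$:

```python
from math import sqrt

def g(z):
    # First equation after the substitution z = 2x + y, rewritten as g(z) = 0.
    return 1 + sqrt(1 + z) - sqrt(3 * z) - 4 * z**2

def bisect(f, lo, hi, iters=80):
    """Plain bisection; assumes f(lo) and f(hi) have opposite signs."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

z = bisect(g, 0.1, 1.0)
assert abs(z - 0.5) < 1e-12   # recovers z = 1/2
```

With $z = 1/2$ in hand, the second equation then gives $x = 1/2$, hence $y = z - 2x = -1/2$.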
-
````
{eq1, eq2} = {1 + Sqrt[2 x + y + 1] == 4 (2 x + y)^2 + Sqrt[6 x + 3 y],
(x + 1) Sqrt[2 x^2 - x + 4] + 8 x^2 + 4 x y == 4};
Select[y /. (Join @@ Solve[# == 0] & /@
GroebnerBasis[Subtract @@@ {eq1, eq2}, {x, y}]), # == Re[#] &] // Union
Solve[eq1 /. y -> -1/2, x, Reals]
````
-
http://www.purplemath.com/learning/viewtopic.php?f=14&t=1478&p=4494

# The Purplemath Forums
## Cross-sectional volume question?
### Cross-sectional volume question?
by Thatguy73 on Wed Aug 18, 2010 2:07 am
So I would like to know if I am doing this correctly. Here's the question:
Solid B has a volume of $8\pi$.

It is a pyramid with a square base and a height of 8. What is the length of the side $s$ of the base of B?

The cross-sectional area would be $A(x)=s^2$, and I would need to write $s$ in terms of $x$ to put it in the definite integral to find the volume.

Since the height of the pyramid is 8, I can put its vertex at $(0,0)$ and center the base at $(8,0)$. So the cross-sectional areas need to be summed up on the interval $[0,8]$, and I can find the volume of the pyramid:
$\displaystyle V=8\pi=\int_{0}^8 A(x)\,dx$
$A(x)=s^2$
At the end of the base, where $x=8$, the side $s$ of the base would be $2y$. So something times $x$ must give $2y$:

$kx=2y=s$. Therefore $A(x)=(kx)^2=k^2x^2$
$\displaystyle V=8\pi=\int_{0}^8 k^2x^2\,dx$. Since $k^2$ is a constant:

$8\pi=k^2\int_{0}^8 x^2\,dx$
$8\pi=k^2(8^3/3)$
Solving for k:
$k\approx 0.384$

Since $kx=s$, plug in $x=8$ and $k$:

$s\approx 3.070$
Could someone tell if this method is correct? I just started learning this and it's kind of confusing.
Thanks
Thatguy73
Posts: 1
Joined: Wed Aug 18, 2010 1:16 am
### Re: Cross-sectional volume question?
by Martingale on Wed Aug 18, 2010 3:51 am
Thatguy73 wrote: (question quoted in full above)
Your answer is correct (if you allow for rounding).

The exact answer is $\sqrt{3\pi}$.
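Both the rounded and the exact values are easy to double-check numerically; here is a Python sketch (standard library only) that sums the cross-sectional areas $A(x)=(kx)^2$ with a midpoint Riemann sum:

```python
from math import pi, sqrt, isclose

s = sqrt(3 * pi)   # exact side length claimed above
k = s / 8          # slope factor, since k * 8 = s at the base x = 8

n = 100_000        # number of slices of [0, 8]
dx = 8 / n
volume = sum((k * (i + 0.5) * dx)**2 for i in range(n)) * dx

assert isclose(volume, 8 * pi, rel_tol=1e-6)  # recovers V = 8*pi
assert isclose(s, 3.070, abs_tol=1e-3)        # matches the rounded answer
```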
Martingale
Posts: 363
Joined: Mon Mar 30, 2009 1:30 pm
Location: USA
http://physics.stackexchange.com/questions/26797/why-does-work-equal-force-times-distance/27807

# Why does work equal force times distance?
My 'government-issued' book literally says:
Energy is the capacity to do work and work is the product of net force and the 1-dimensional distance it made a body travel while constantly affecting it.
I didn't want to say it, but what!?
Seriously though; why does work equal $F \cdot d$ ?
Where did the distance part come from?
I always thought of time as the one thing we can only measure (not affect), which justifies why we measure other things in relation to time. But we have much greater control over distance (since it's just a term for a physical dimension we can more or less influence, as opposed to time).
How does distance translate to this here?
Level: US tenth grade equivalent.
-
– Ron Maimon May 5 '12 at 1:14
@RonMaimon Your approach is that kinetic energy is proportional to squared velocity and you calculate the expression for work from there. This is, as you know, quite opposite to standard procedures in elementary books. Did you choose such procedure because it is easier, or do you think one cannot properly show that work is force times distance? – Pygmalion May 5 '12 at 18:18
@Pygmalion: I did both--- there is a non-relativistic Galilean motivation for energy is v^2, but there is also an argument for why "force times distance" is the correct definition of work. You put the objects on ramps, and make a pully system to move other objects. If the force-times-distance matches, you can move the objects with the pullies while doing no work (colloquially and physics-wise, the two notions coincide). – Ron Maimon May 5 '12 at 19:54
@RonMaimon Could you direct me to your ramp-pully system explanation? Thanks. – Pygmalion May 5 '12 at 19:59
@Pygmalion: I put a brief version up, but it's Feynman's, really Archimedes'. – Ron Maimon May 6 '12 at 14:20
## 5 Answers
Fortunately, we've equipped ourselves with rulers in addition to clocks, so it's quite possible to measure time and distance, and quite useful to do both. Let's push something (with a force $F$) so that its velocity changes by $\Delta v$, and figure out how much its energy changes. Say our object is initially moving at speed $v_0$ and has energy
$$E_0 = \frac{1}{2} m v_0^2$$
After the push, it is moving at $v = v_0 + \Delta v$, so its energy will be:
$$E = \frac{1}{2} m (v_0 + \Delta v)^2$$
$$E = \frac{1}{2}m(v_0^2 + 2v_0\Delta v + \Delta v^2)$$
$$E = \frac{1}{2}mv_0^2 + m v_0\Delta v + \frac{1}{2}m\Delta v^2$$
Now with $\Delta v$ small enough that we can ignore the last term with $\Delta v^2$:
$$E = E_0 + mv_0\Delta v$$
or, in terms of the change in energy (dropping the subscript on the velocity):
$$\Delta E = m v \Delta v$$
There are a few different ways to proceed from here; one is to multiply by $\Delta t / \Delta t = 1$ and rewrite as:
$$\Delta E = m \left( \frac{\Delta v}{\Delta t} \right) (\Delta t) \cdot v$$
Now recognizing from Newton's 2nd law that
$$\frac{m \Delta v}{\Delta t} = \frac{\Delta (mv)}{\Delta t} = \frac{\Delta p}{\Delta t} = F$$
we have that
$$\Delta E = F \Delta t \cdot v$$
but $\Delta t \cdot v = \Delta t \left( \frac{\Delta x}{\Delta t} \right) = \Delta x$ is just how far we pushed it, so
$$\Delta E = F \cdot \Delta x$$
Thinking about the problem as time progresses, it is natural to also ask "How fast is the energy changing?" The answer is that we are supplying power to the system at the rate: $$P = \frac{\Delta E}{\Delta t} = F v$$ This may be closer to how you are thinking of it, but as you can see above, if you want to relate the applied force to the change in energy, multiplying the force by the distance over which it is applied gives the correct result.
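The derivation can also be checked by brute force: push a mass with a constant force through many small time steps and compare the change in kinetic energy with force times distance. A Python sketch (the values of m, F and the initial speed below are arbitrary choices, not from the answer):

```python
m, F = 2.0, 3.0        # mass and constant applied force (arbitrary values)
v, x = 1.0, 0.0        # initial speed and position
E0 = 0.5 * m * v**2

dt, steps = 1e-4, 10_000   # push for a total of 1 second
for _ in range(steps):
    v_mid = v + 0.5 * (F / m) * dt   # midpoint speed keeps each step accurate
    x += v_mid * dt
    v += (F / m) * dt

work = F * x                     # force times distance
dE = 0.5 * m * v**2 - E0         # actual change in kinetic energy
assert abs(work - dE) < 1e-6     # the two agree, as derived above
```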
-
Sorry if this is more (or less) mathematics than you were looking for in an answer. Let me know here in the comments if there is anything that is unclear. I hope this helps! – tmac May 4 '12 at 23:37
You should never have to apologize for math ;-) Nice answer. – David Zaslavsky♦ May 4 '12 at 23:46
@DavidZaslavsky Thanks for the feedback. I just wasn't sure how to interpret "Level: US Grade-10 equivalent" – tmac May 4 '12 at 23:54
I commend you for the answer, but the argument is usually the opposite. You start with the definition of the work and from this definition you obtain the kinetic energy. After all, why would kinetic energy be proportional to the square of velocity? – Pygmalion May 5 '12 at 7:53
@M.Na'el I know what you meant, I was just saying that it doesn't help that much in terms of your background because of the large variances in education. Nothing pathetic about it! This is a good fundamental question, hopefully these answers help. – tmac May 5 '12 at 20:23
You should regard this passage as a perhaps ill-motivated definition of what work is for the purpose of your physics course. Let me make some remarks on energy though, since the passage "Energy is the capacity to do work" is misleading at best. There is a law or principle, called conservation of energy, governing all natural phenomena we are aware of. According to this law there is a quantity called energy that does not change during any changes that nature undergoes. It is not tied to anything concrete like pushing boxes around; instead it is an abstract concept.
There are different forms of energy, among them: gravitational energy, kinetic energy, radiant energy, nuclear energy, mass energy, chemical energy, heat energy, elastic energy, electrical energy. Notice that those are just other names though, no one really knows what energy is, there are just different ways of calculating contributions to it.
For a given physical system the different forms of energies can sometimes be given by concrete formulas. But it is important to realize that conservation of energy is independent of that knowledge. As time goes on, the different forms of energy are converted into each other, conservation of energy means that their sum remains constant.
Now in general energy that is measured relative to the location of something else is called potential energy. Examples are the gravitational potential energy or the electric potential energy. How do you change the potential energy of an object? By moving it around. So how does force come into play? Well it turns out that as a general principle:
$$\{ \text{Change in potential energy} \} = (force) \times (\text{distance force acts through})$$
The reason for this formula is rather simple: if $V$ denotes the potential energy, you can compare it at two neighbouring points $P$ and $P + \delta P$; then per definition the infinitesimal (that means that you neglect terms with $\delta P^2$) change of the potential is:
$$\delta V = V(P + \delta P) - V(P) = F \cdot \delta P$$
Now you just have to sum up those contributions. To give you a simple example: Take the potential energy $V(x) = \frac{1}{2} k x^2$, then
$$V(x + \delta x) - V(x) = \frac{1}{2} k (2 x \delta x + \delta x^2) = kx\delta x$$
So the force is $F = kx$, which you might recognize as Hooke's law.
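The difference-quotient computation above is easy to mimic numerically; with hypothetical values for $k$ and $x$, the quotient $(V(x+\delta x)-V(x))/\delta x$ approaches $kx$ as $\delta x$ shrinks. A Python sketch:

```python
k = 2.0                           # spring constant (an arbitrary choice)

def V(x):
    return 0.5 * k * x**2         # the example potential energy

x, dx = 1.5, 1e-7
force = (V(x + dx) - V(x)) / dx   # finite-difference version of the change per unit length
assert abs(force - k * x) < 1e-5  # matches Hooke's law F = k*x
```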
If you want to read a much better description of this you should read chapter 4 in Volume 1 of Feynman's Lectures on Physics. Maybe you can get a copy of them in your local library.
-
+1 for Feynman 1, he does it right, and he is almost the only one who does it right at the know-nothing level. Landau also does it right, but at a higher level. – Ron Maimon May 5 '12 at 20:00
I don't think I'd ever struggled with the idea of a universally constant amount of energy. To me, heat is probably the most basic of all of its faces (since it's one of the bi-bang's prime outputs). It's the representation, though, that troubles me: Why force and distance? I'm getting an idea formed up from all of your answers so you're helping -hopefully- a future scientist! – Mussri May 5 '12 at 20:34
One question: Is $\delta V = \text d V$ ? – Mussri May 5 '12 at 20:37
Well, I am not sure what you mean by $dV$. Indeed in standard mathematical notation, $dV = \partial_x V dx + \partial_y V dy + \partial_z V dz$ and physicists like to write this as $dV = \vec F \cdot d \vec r$. In words: The infinitesimal change in potential energy is the Force times the infinitesimal change in position. If you like that is the definition of a force. That definition might not be compatible with what you learned in your physics course. But it is the right definition for conservative forces. Non conservative forces like friction are a bit trickier. – orbifold May 6 '12 at 1:11
So the answer to why force and distance is: You want to keep track of the change in potential energy. Infitesimally that change is determined by definition by $\vec F \cdot d\vec r$, now as you move along a path, you have to sum up (or integrate) those small contributions to get the total change in potential energy. Only in very simple circumstances the end result will literally be simply force times distance. – orbifold May 6 '12 at 1:19
The problem with the definition of work is that our intuitive idea of work is simply wrong. Imagine, for example, that you hold a 10 kg box 1 m above the ground for an hour. You'd probably feel very tired and you'd think "what a lot of work I've done". But you could easily put the box on a table that is 1 m high. The effect on the box would be just the same. But did the table do any work? Of course not. Since a table at rest does not have energy (capacity to do work), it could not possibly do any work. So you see, there is absolutely no direct correspondence between work and time.

An object therefore must be moved to actually do mechanical work. And even in this case you can move an object and still do no work. Imagine that you are carrying a 10 kg box at a height of 1 m from one side of the room to the other. It actually requires some effort to do that. But you could put the box on a cart 1 m high and just gently and slowly push it across the room. But did the cart do any work? The answer is again negative. Since a cart practically at rest does not have energy (capacity to do work), it could not possibly do any work.

Only the product of force and displacement in the direction of the force is a meaningful notion of work.
-
Why isn't it the product of force times distance to the 2.3 power? – Ron Maimon May 5 '12 at 19:59
@RonMaimon OK, I need lever to explain that. – Pygmalion May 5 '12 at 20:35
@RonMaimon Sleeping over the problem, there is no problem to show that work must be linear in displacement. You simply halve the displacement and get half the work. The problem is to show that work is linear in force. That is why I need a lever. – Pygmalion May 6 '12 at 6:25
Why should half the displacement be half the work? It isn't true in time--- when a constant force is acting, twice the time doesn't make twice the work. You still need a lever to show this, because you need the motion adiabatic, so that equal increments of distance are additive lifting of some weight. You won't make a good argument when you already know the answer, it's best if you don't know anything, that's why old literature is useful, they didn't know anything. By the way, I should have meant (force times distance) to the 2.3 power, any other combination is not rotationally invariant. – Ron Maimon May 6 '12 at 14:04
If you argue that it's "force times distance" because you need to integrate for an inhomogenous force, consider Force times velocity to the 2.3 power times dt, which is (F v^1.3) dx. You need to argue that it isn't this combination, or some other one that gives the invariant change in energy. – Ron Maimon May 6 '12 at 14:18
To add to orbifold's answer, I'll give a quick repeat of Feynman's version of the conservation of energy argument. This is "d'Alembert's principle" or "the principle of virtual work", and it generalizes to define thermodynamic potentials as well, which include entropy quantities inside.
Suppose you have a bunch of masses on the Earth's surface. Suppose you also have some elevators, and pulleys. You are asked to lift some masses and lower other masses, but you are very weak, and you can't lift any of them at all; you can just slide them around (the ground is slippery), put them on elevators, and take them off at different heights.

You can put two equal masses on opposite sides of a pulley-elevator system, and then, so long as you lift a mass up by a height h, and lower an equal mass down by an equal height h, you don't need to do any work (colloquially), you just have to give little nudges to get the thing to stop and start at the appropriate height.
If you want to move an object which is twice as heavy, you can use a force doubling machine, like a lever with one arm twice as long as another. By arranging the heavy mass on the short arm, and the light mass on the long arm, you can move the heavy mass down, and the light mass up twice as much without doing any work.
In both these processes, the total mass-times-height is conserved. If you keep the mass-times-height constant at the beginning and at the end, you can always arrange a pulley system to move objects from the initial arrangement to the final one.
Suppose now that the gravitational field is varying, so that some places, you have a strong "g" and other places a weak "g". This requires balancing the total force on opposite sides of the elevator, not the total mass. So the general condition that you can move things without effort is that if you move an object which feels a force "F" an amount "d" in the direction of the force is acting, you can use this motion plus a pully system to move another object which feels a force "F'" an amount "d'" against the direction of the force.
This means that for any reversible motion with pulleys, levers, and gears
$$\sum_i F_i \cdot d_i = 0$$
This is the condition under which you don't have to do colloquial work to rearrange the objects. One can take the conserved quantity for these motions to be the sum of the force times the distance for each little motion, and it is additive among different objects, and so long as nothing is moving very fast, if you add up the changes in F dot d for all the objects, it must be zero if you did everything reversibly.
This generalizes to a dynamical situation by adding a quantity of motion which is additively conserved along with F dot d, this quantity is the kinetic energy. You can also go backwards, and start with the kinetic energy idea (which can be motivated by collisions), and rederive the F dot d thing. These are two complementary points of view that fit together to give a coherent picture of kinetic and potential energy.
If you have a static force field on a particle which has the property that along some closed cycle the sum of the force times the little displacements is not zero, then you can use this cycle to lift weights.
The proof is simple: arrange a pulley system to lift/lower weights at every point along the cycle in such a way that the F dot d of the weights balances the F dot d of the force. Then take the particle around the loop in the direction where F dot d is net positive, while balancing out the force with the weights. At the end of the day, you lifted some weights and brought the particle back where it started.
This means that a nonconservative force can be used to lift a weight. As you traverse the loop, something must be eaten up out of the nonconservative force field, otherwise it is an inexhaustible source of weight-lifting, and violates the first law of thermodynamics. So eventually, all force fields settle down so that the integral of F dot d is zero along every loop. This is the definition of a conservative force.
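The closing criterion, that the sum of F dot d around every loop vanishes for a conservative force, can be illustrated numerically. In the hypothetical Python sketch below, a gradient field integrates to zero around the unit circle, while the rotational field F = (-y, x) integrates to 2*pi and so could be used to lift weights:

```python
from math import cos, sin, pi

def loop_integral(F, n=10_000):
    """Approximate the integral of F . dl around the unit circle with n segments."""
    total = 0.0
    dt = 2 * pi / n
    for i in range(n):
        t = i * dt
        fx, fy = F(cos(t), sin(t))
        total += (fx * -sin(t) + fy * cos(t)) * dt   # dl = (-sin t, cos t) dt
    return total

gradient_field = lambda x, y: (2 * x, 2 * y)   # gradient of x^2 + y^2: conservative
rotational = lambda x, y: (-y, x)              # a nonconservative field

assert abs(loop_integral(gradient_field)) < 1e-9       # no free weight-lifting here
assert abs(loop_integral(rotational) - 2 * pi) < 1e-9  # this one eats something each loop
```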
-
Isn't that actually special case of d'Alembert's principle? – Pygmalion May 5 '12 at 20:28
@Pygmalion: Yes, I was trying to remember that guy's name while writing this! Thanks. – Ron Maimon May 5 '12 at 20:56
Of course Feynman's explanation assumes a homogeneous gravitational field. Work is more general than that. It seems to me that the only general way to introduce work is by defining a scalar field whose negative gradient gives the force field - that scalar field is actually the energy. Of course, the inverse of the negative gradient is a negative curve integral, and work is a positive curve integral. I guess I'd have lost my students about the time I mentioned a scalar field... – Pygmalion May 6 '12 at 6:33

Actually, @Pygmalion, you lost me at 'homogeneous gravitational field'; I never thought there were non-uniform fields. – Mussri May 6 '12 at 10:16
@RonM, So according to this example, should I see that work is 'how much a force exhausts of an object's potential energy'? Turning it to kinetic or otherwise? – Mussri May 6 '12 at 10:17
It's by moving something over a distance that you put energy into it. You can push a wall with all your strength for hours, but if it doesn't move then it hasn't gained any energy. (Instead your energy will have gone into heating up your body and heating the environment around you)
On the other hand if you carry a weight upstairs, then you are applying a force to it and moving it, and the higher you carry it the more energy the weight has and the more work you could make it do by dropping it.
-
This is not an argument. – Ron Maimon May 5 '12 at 1:15
http://mathoverflow.net/questions/14431/pigeonhole-principle-for-infinite-case/14466

## Pigeonhole Principle for infinite case
Suppose $X_n$ is a finite set for each natural number $n$. Let $Y$ be an infinite subset of $\prod_n X_n$. Do there exist $y$ and $y'$ in $Y$ and an infinite subset $S$ of $\mathbb N$ such that $y_n=y'_n$ for all $n$ in $S$?
-
You should insist that y, y' are distinct, since otherwise y=y' means the answer is trivially yes. – Joel David Hamkins Feb 7 2010 at 3:26
## 3 Answers
What about $X_n=\left\lbrace 1,2,3,...,n\right\rbrace$, and $Y$ consisting of the sequences $\left(1,1,1,...,1,2,3,4,5,6,...\right)$ with $n$ ones for all $n\in\mathbb N$?
The first true thing that comes into my mind when I hear "Pigeonhole Principle for infinite case" are some theorems in infinite Ramsey theory, such as this one.
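One can check this counterexample concretely by truncating the sequences to finitely many coordinates: any two of them agree exactly on an initial segment, hence in only finitely many positions. A small Python sketch (the truncation length N is an arbitrary choice):

```python
def y(m, N):
    """First N coordinates of the sequence with m ones: (1,...,1,2,3,4,...)."""
    return [1 if n <= m else n - m + 1 for n in range(1, N + 1)]

N = 200
for m1 in range(1, 20):
    for m2 in range(m1 + 1, 20):
        agree = [n for n in range(N) if y(m1, N)[n] == y(m2, N)[n]]
        assert agree == list(range(m1))   # agreement only on the first m1 coordinates
```

Note also that the $n$th coordinate $n-m+1$ indeed lies in $X_n=\{1,2,\dots,n\}$.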
-
First, let me improve upon the countable counterexample of Darij Grinberg by giving an uncountable counterexample Y. Indeed, I shall give finite sets Xn and a subset Y of the product ΠXn having size continuum (that is, as large as possible), such that any two distinct y, y' in Y have only finitely many common values.
Let Xn have 2n elements, consisting of the binary sequences of length n. Now, for each infinite binary sequence s, let ys be sequence in the product ΠXn whose nth value is s|n, the length n initial segment of s. Let Y consist of all these ys. Since there are continuum many s, it follows that Y has size continuum.
Note that if s and t are distinct binary sequences, then eventually the initial segments of s and t disagree. Thus, eventually, the values of ys and yt are different. Thus, ys and yt have only finitely many common values. So Y is a very large counterexample, as desired.
A similar argument works still if the Xn grow more slowly in size, as long as liminf|Xn| = infinity. One simply spreads the construction out a bit further, until the size of the Xi is large enough to accommodate the same idea. That is, if the liminf of the sizes of the Xn's is infinite, then one can again make a counterexample set Y of size continuum.
In contrast, in the remaining case, there are no infinite counterexamples. I claim that if infinitely many Xn have size at most k and Y is a subset of ΠXn having k+1 many elements, then there are distinct y,y' in Y having infinitely many common values. To see this, suppose that Y has the property that distinct y, y' in Y have only finitely many common values. In this case, any two y, y' must eventually have different values. So if Y has k+1 many elements, then eventually for sufficiently large n, these k+1 many sequences in Y must be taking on different values in every Xn. But since unboundedly often there are only k possible values in Xn, this is impossible.
In summary, the situation is as follows:
Theorem. Suppose that Xn is finite and nonempty.
• If liminf |Xn| is infinite, then there is Y subset ΠXn of size continuum, such that distinct y, y' in Y have only finitely many values in common.
• Otherwise, infinitely many Xn have size at most k for some k, and in this case, every Y subset ΠXn of size k+1 has distinct y,y' in Y with infinitely many common values.
In particular, if the Xn become increasingly large in size, then there are very bad counterexamples to the question, and if the Xn are infinitely often bounded in size, then there is a very strong positive answer to the question.
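The pigeonhole argument in the bounded case can likewise be checked mechanically; here k = 2 and the three example sequences are arbitrary:

```python
from itertools import combinations

# three sequences in the product of X_n = {0, 1}: k = 2, so |Y| = k + 1 = 3
seqs = [lambda n: 0, lambda n: n % 2, lambda n: 1]
N = 300

def agree_count(i, j):
    return sum(seqs[i](n) == seqs[j](n) for n in range(N))

# at every index two of the three values collide (pigeonhole on {0, 1}),
# so the best-agreeing pair agrees on at least N / 3 of the first N indices
best = max(combinations(range(3), 2), key=lambda pair: agree_count(*pair))
print(best, agree_count(*best))   # the constant-0 and parity sequences agree at all 150 even indices
```

With infinitely many indices available, the same pigeonhole applied to the three pairs forces some single pair to agree infinitely often.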
-
Actually, you don't even need the X_n's to grow. If you have X_n = {0,1} and you let Y be the interval [0,1], and let f: Y -> X_n map y to the nth digit of y in base 2, you get the same phenomenon. (This also points out a problem with what I had previously said.) What you in fact need is a larger set mapping to a smaller set. Since the product of a countable number of finite sets can be uncountable, you don't necessarily have this. – Inna Feb 7 2010 at 5:21
@Inna: What you are saying is not correct. If X_n has only two elements, then my argument shows that in any subset Y of the product, with at least three sequences in it, there will be two with infinitely many common values. After all, if y and y' have only finitely many binary digits in common, then any y'' will have to agree with one of them infinitely often, since there are no other digits available. – Joel David Hamkins Feb 7 2010 at 5:26
I think that what you are asking for is impossible. Given any element of $\prod_n X_n$ the element is uniquely determined by its image in each of the individual $X_n$'s. So if two elements of $Y$ agree on each $X_n$ then they must be the same element.
In language similar to yours, what you probably want for the finite case is "If $X$ and $Y$ are finite sets such that $|X| < |Y|$ and $f:Y\rightarrow X$ is any map then there exists an element $x\in X$ such that $|f^{-1}(x)| > 1$." More generally, given any finite sequence $X_1,\ldots,X_n$ of finite sets and any set $Y$ such that $|Y| > |X_1|\cdot|X_2|\cdots|X_n|$ and any sequence of maps $f_i:Y\rightarrow X_i$, there exists a sequence of elements $x_1,\ldots,x_n$ and two elements $y,y'\in Y$ such that $f_i(y) = f_i(y')$ for every $i$.
The problem with the infinite case is that there are injective but not surjective maps between infinite sets with the same cardinality. However, it is true that given a sequence of finite sets $X_1,X_2,\ldots$ and a set $Y$ with cardinality greater than that of $\prod X_n$, if you have any sequence of maps $f_i:Y\rightarrow X_i$ then there exists an uncountable subset $Z\subseteq Y$ such that for any two elements $z,z'$ of $Z$ you have $f_i(z) = f_i(z')$ for all $i$.
In even more generality, I believe that if you have any set of sets $\{X_\alpha\}$ and any set $Y$ such that the cardinality of $Y$ is larger than the cardinality of $\prod_\alpha X_\alpha$ then you have a similar statement.
-
Source: http://math.stackexchange.com/questions/57638/integrating-over-a-geodesic

# integrating over a geodesic
Suppose $\mathcal{M}$ is a Riemannian manifold with the volume measure $d\mu$ (induced by the Riemannian metric) and $\gamma$ is a geodesic on $\mathcal{M}$. Is it true that $$\int_{\gamma}f\,d\nu\leq\int_{\mathcal{M}}f\,d\mu,$$ and if yes, what replaces $d\nu$ exactly?

Edit:
$$\int_{\gamma}fd\nu\int _{\mathcal{M}}d\mu\leq\int _{\mathcal{M}}fd\mu\int_{\gamma}d\nu,$$
-
## 1 Answer
What is $d\nu$? Do you mean the measure induced by the induced metric on $\gamma$ as a submanifold? In that case the answer is no.
Let $\mathcal{M} = S^1\times S^1$ with coordinates $(x,y)\in [0,2\pi)^2$, with the line element $ds^2 = dx^2 + \epsilon dy^2$. And let $\gamma$ be the geodesic $y = 0$. The induced Riemannian metric on the geodesic is $d\ell^2 = dx^2$.
Let $f = 1$ be the constant function. Then $\int_\gamma f\,d\nu = 2\pi$, while $\int_{\mathcal{M}}f\,d\mu = 4\pi^2\sqrt{\epsilon}$. By choosing $\epsilon$ sufficiently small you invalidate the proposed inequality.
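Plugging in numbers makes the failure explicit. With $ds^2=dx^2+\epsilon\,dy^2$, the volume element is $\sqrt{\det g}\,dx\,dy=\sqrt{\epsilon}\,dx\,dy$, so for small $\epsilon$ the line integral dominates (a trivial check):

```python
import math

def integrals(eps):
    # gamma has length 2*pi; the torus has area 4*pi^2*sqrt(eps)
    line = 2 * math.pi                          # the integral of f over gamma, f = 1
    surface = 4 * math.pi**2 * math.sqrt(eps)   # the integral of f over M, f = 1
    return line, surface

line, surface = integrals(1e-4)
print(line > surface)   # True: the proposed inequality fails
```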
More generally, purely by dimensional considerations you see that it is unreasonable to expect any expressions of the form you wrote to hold, except possibly in the case that $\gamma$ is treated as a (measure zero) subset of $\mathcal{M}$, and $d\nu$ is the measure $d\mu$, and $f$ is non-negative. Then the LHS is trivially zero and the RHS is non-negative.
-
@ Willie Wong I changed the inequality. – timhortons Aug 15 '11 at 19:26
Edit is still false. Let $M$ be the manifold given by the disjoint union of what I wrote in my above example and $\mathbb{S}^2_r$, the standard sphere of radius $r$. Let $f = 0$ on the sphere. Now take $r\to\infty$. The LHS blows up, the RHS remains fixed. – Willie Wong♦ Aug 15 '11 at 19:53
If you want a connected manifold, you can consider the case where $f$ has compact support in a small tubular neighborhood of the geodesic. – Willie Wong♦ Aug 15 '11 at 19:54
Source: http://math.stackexchange.com/questions/222396/prove-a-function-is-continuously-differentiable

# prove a function is continuously differentiable
$f(x,y) =\begin{cases}\arctan(y/x) & x\neq 0\\ \pi/2 & x=0,y>0\\-\pi/2 & x=0,y<0.\end{cases}$
$f$ is defined on $\Bbb R^2\smallsetminus\{(0,0)\}.$
Show that $f$ is continuously differentiable on all of its domain.
Also use the implicit function theorem to prove the above again.
Thanks!
-
You're repeating exactly your question from 3 hours ago. You must be patient and wait until somebody deals with that, and not send over and over the same question. – DonAntonio Oct 28 '12 at 4:38
I have no idea what your inequalities and bounds for $x$ and $y$ represent. Please fix those yourself. – EuYu Nov 1 '12 at 6:17
So...are you dividing by $0$ in there? That's...bad. – Cameron Buie Nov 1 '12 at 6:21
that's a function with different values in different domains – Frank Xu Nov 1 '12 at 6:53
Ah, I see. Please tell me if my interpretation is right. – EuYu Nov 1 '12 at 7:06
## 1 Answer
Maybe rewriting your equation as $$x \tan f = y$$ helps?
Edit:
Given the fact that the first hint did not help. Here, is the second hint: you can rewrite your equation as $$F(x,y,f) = x \tan f - y =0.$$ Can you then use the implicit function theorem to learn something about $\partial_x f$ and $\partial_y f$?
Edit2:
I just saw that you have changed your question, so that the points with $f=\pi/2 + n\pi$ are not excluded any more. In this case you should rewrite your equation as (check the special points at $f=\pi/2 + n\pi$ separately) $$F(x,y,f) = x \sin f - y \cos f =0,$$ and then apply the implicit function theorem.
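Whatever route one takes, the partial derivatives of $\arctan(y/x)$ should come out as the smooth functions $-y/(x^2+y^2)$ and $x/(x^2+y^2)$, which is what makes $f$ continuously differentiable away from $x=0$. A quick numerical spot-check (the sample points are arbitrary):

```python
import math

f = lambda x, y: math.atan(y / x)      # the x != 0 branch of the function

def num_grad(x, y, h=1e-6):            # central finite differences
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

for (x, y) in [(1.0, 2.0), (-1.5, 0.5), (2.0, -3.0), (0.3, 0.0)]:
    gx, gy = num_grad(x, y)
    r2 = x * x + y * y
    # matches the continuous formulas -y/(x^2+y^2) and x/(x^2+y^2)
    assert abs(gx - (-y / r2)) < 1e-4 and abs(gy - x / r2) < 1e-4
```

Since different branches of the angle function differ only by additive constants, the same formulas cover the points with $x=0$ (away from the origin) as well.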
-
doesn't help at all – Frank Xu Nov 1 '12 at 7:05
could you give me more hints? How can I apply the implicit function theorem? Does the implicit function theorem say anything about being continuously differentiable? – Frank Xu Nov 1 '12 at 7:58
never say something is obvious. when you say something is obvious, it could be not obvious – Frank Xu Nov 1 '12 at 17:38
the part you cited as obvious is exactly the hard part – Frank Xu Nov 1 '12 at 17:39
Source: http://physics.stackexchange.com/questions/18725/how-much-effect-does-the-mass-of-a-bicycle-tire-have-on-acceleration

# How much effect does the mass of a bicycle tire have on acceleration?
There are claims often made that, eg, "An ounce of weight at the rims is like adding 7 ounces of frame weight." This is "common knowledge", but a few of us are skeptical, and our crude attempts at working out the math seem to support that skepticism.
So, let's assume a standard 700C bicycle tire, which has an outside radius of about 36 cm, a bike weighing 90 kg with bike and rider, and a tire+tube+rim weighing 950 g. With the simplifying assumption that all the wheel mass is at the outer diameter, how much effect does adding a gram of additional weight at the outer diameter have on acceleration, vs adding a gram of weight to the frame+rider?
-
## 2 Answers
A few simplifying assumptions:
• I'm going to ignore any rotational energy stored in the bike chain, which should be pretty small, and wouldn't change when you change bike tires
• I'm going to use 50 cm for the radius of the bike tire. This is probably a little big, and your bike will likely have a different radius, but it makes my calculations easier, so there. I will include a formula nonetheless.
• I'm going to assume that the rider provides a fixed torque to the wheels. This isn't strictly true, especially when the bike has different gears, but it simplifies our calculations, and, once again, the torque provided won't vary when you change the weight profile of the tire
OK, so now, let's analyze our idealized bicycle. We're going to have the entire mass $m$ of each of the two wheels concentrated at the radius $R$ of the tires. The cyclist and bicycle will have a mass $M$. The cycle moves forward when the cyclist provides a torque $\tau$ to the wheel, which rolls without slipping over the ground, with the no-slip conditions $v=R\omega$ and $a=\alpha R$ requiring a forward frictional force $F_{fr}$ on the bike.
Rotationally, with the tire, we have:
$$\begin{align*} I\alpha &= \tau - F_{fr}R\\ mR^{2} \left(\frac{a}{R}\right)&=\tau-F_{fr}R\\ a&=\frac{\tau}{mR} - \frac{F_{fr}}{m} \end{align*}$$
Which would be great for predicting the acceleration of the bike, if we knew the magnitude of $F_{fr}$, which we don't.
But, we can also look at Newton's second law on the bike, which doesn't care about the torque at all. There, we have (the factor of two comes from having two tires):
$$\begin{align*} (M+2m)a&=2F_{fr}\\ F_{fr}&=\frac{1}{2}(M+2m)a \end{align*}$$
Substituting this into our first equation, we get:
$$\begin{align*} a&=\frac{\tau}{mR}-\frac{1}{m}\frac{(M+2m)a}{2}\\ a\left(1+\frac{M}{2m} +1\right)&=\frac{\tau}{mR}\\ a\left(\frac{4m+M}{2m}\right)&=\frac{\tau}{mR}\\ a&=\frac{2\tau}{R(4m+M)} \end{align*}$$
So, now, let's assume a 75 kg cyclist/cycle combo and a 1 kg wheel, and a 0.5 m radius for our wheel. This gives $a=0.0506 \tau$. Increasing the mass of the cyclist by 1 kg results in the acceleration decreasing to $a=0.0500 \tau$. Increasing the mass of the wheels by 0.5 kg each results in the acceleration decreasing to $a=0.0494\,\tau$, or roughly double the effect of adding that mass to the rider/frame.
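The arithmetic in the preceding paragraph can be reproduced directly; since the torque $\tau$ enters linearly, the sketch below computes the coefficient $a/\tau$ from the derived formula $a=2\tau/(R(4m+M))$:

```python
def accel_per_torque(m_wheel, M_rest, R=0.5):
    # from a = 2*tau / (R*(4*m + M)): the coefficient a / tau
    return 2.0 / (R * (4.0 * m_wheel + M_rest))

base = accel_per_torque(1.0, 75.0)            # 0.0506...
heavier_rider = accel_per_torque(1.0, 76.0)   # 0.0500: 1 kg on the frame/rider
heavier_wheels = accel_per_torque(1.5, 75.0)  # 0.0494: 0.5 kg on each wheel
print(round(base, 4), round(heavier_rider, 4), round(heavier_wheels, 4))

# the same 1 kg costs roughly twice as much acceleration at the rims
ratio = (base - heavier_wheels) / (base - heavier_rider)
print(round(ratio, 2))   # close to 2
```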
-
Using the parameters you provided, and distributing the one gram evenly amongst the tires, you get the same result--putting the weight at the rims will slow the bike by roughly twice as much as you would by adding the weight to the rider. – Jerry Schirmer Dec 24 '11 at 16:40
When you say "roughly double", I gather you're figuring that much mass per wheel, so a total additional mass of 1g added to the wheels is equivalent to adding 4g to the non-rotating mass? And does this take into account the energy required to accelerate the added mass linearly, or does that bump it up to 5:1 (assuming my first assumption is correct)? – Hot Licks Dec 24 '11 at 20:40
Ok, read through that again and I think I understand it -- it looks like you've got a factor of two for the two tires, and you've entered the tire mass both into the angular and linear components, so the final answer is 2x -- a gram of added weight at the radius of the tires is equivalent to 2 grams on the frame (or the rider), with all factors accounted for. – Hot Licks Dec 25 '11 at 15:14
Right. The weight is either added to the rider or distributed evenly to both tires. – Jerry Schirmer Dec 26 '11 at 16:42
And a bit of a thought experiment confirms this: It would take as much force to accelerate a mass on a stationary wheel to a given tangential velocity as it does to accelerate the mass to the same linear velocity. But once accelerated to the tangential velocity it (of course) takes the same amount of force again to accelerate the (otherwise massless) wheel to the same linear velocity. So to accelerate simultaneously to the same tangential and linear velocity takes twice as much force as either alone. – Hot Licks Dec 26 '11 at 19:04
You have to put angular momentum into the spinning wheels.
The kinetic energy of a rotating object is E = I w^2 / 2,

where I is the moment of inertia, which for a thin ring is I = m r^2,

and w is the angular velocity in rad/s.

Essentially this is extra energy, since it has to be generated in addition to the 1/2 m v^2 of the rider+bike.

And since to accelerate you need to increase the angular velocity quickly, you have to put a lot of energy into the angular rotation quickly; and since you are limited in how much power you can provide, this limits the rate of change of 'w'.
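A minimal check of this, using the thin-ring moment of inertia I = m r^2 and the rolling condition v = w r (values arbitrary): the rotational term comes out exactly equal to the translational one, reproducing the factor of two found in the other answer.

```python
import math

m, r, v = 1.0, 0.5, 5.0        # arbitrary rim mass, radius and speed
I = m * r**2                   # moment of inertia of a thin ring (hoop)
w = v / r                      # rolling without slipping
ke_trans = 0.5 * m * v**2
ke_rot = 0.5 * I * w**2        # equals the translational term for a hoop
assert math.isclose(ke_rot, ke_trans)
print(ke_trans + ke_rot)       # 25.0, i.e. m*v**2: rim mass counts double
```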
-
Source: http://unapologetic.wordpress.com/2010/12/03/real-frobenius-reciprocity/?like=1&_wpnonce=81a70c627a

The Unapologetic Mathematician
(Real) Frobenius Reciprocity
Now we come to the real version of Frobenius reciprocity. It takes the form of an adjunction between the functors of induction and restriction:
$\displaystyle\hom_H(V,W\!\!\downarrow^G_H)\cong\hom_G(V\!\!\uparrow_H^G,W)$
where $V$ is an $H$-module and $W$ is a $G$-module.
This is one of those items that everybody (for suitable values of “everybody”) knows to be true, but that nobody seems to have written down. I’ve been beating my head against it for days and finally figured out a way to make it work. Looking back, I’m not entirely certain I’ve ever actually proven it before.
So let’s start on the left with a linear map $f:V\to W$ that intertwines the action of each subgroup element $h\in H\subseteq G$. We want to extend this to a linear map from $V\!\!\uparrow_H^G$ to $W$ that intertwines the actions of all the elements of $G$.
Okay, so we’ve defined $V\!\!\uparrow_H^G=\mathbb{C}[G]\otimes_HV$. But if we choose a transversal $\{t_i\}$ for $H$ — like we did when we set up the induced matrices — then we can break down $\mathbb{C}[G]$ as the direct sum of a bunch of copies of $\mathbb{C}[H]$:
$\displaystyle\mathbb{C}[G]=\bigoplus\limits_{i=1}^nt_i\mathbb{C}[H]$
So then when we take the tensor product we find
$\displaystyle\mathbb{C}[G]\otimes_HV=\left(\bigoplus\limits_{i=1}^nt_i\mathbb{C}[H]\right)\otimes_HV\cong\bigoplus\limits_{i=1}^nt_iV$
So we need to define a map from each of these summands $t_iV$ to $W$. But a vector in $t_iV$ looks like $t_iv$ for some $v\in V$. And thus a $G$-intertwinor $\hat{f}$ extending $f$ must be defined by $\hat{f}(t_iv)=t_i\hat{f}(v)=t_if(v)$.
So, is this really a $G$-intertwinor? After all, we’ve really only used the fact that it commutes with the actions of the transversal elements $t_i$. Any element of the induced representation can be written uniquely as
$\displaystyle v=\sum\limits_{i=1}^nt_iv_i$
for some collection of $v_i\in V$. We need to check that $\hat{f}(gv)=g\hat{f}(v)$.
Now, we know that left-multiplication by $g$ permutes the cosets of $H$. That is, $gt_i=t_{\sigma(i)}h_i$ for some $h_i\in H$. Thus we calculate
$\displaystyle gv=\sum\limits_{i=1}^ngt_iv_i=\sum\limits_{i=1}^nt_{\sigma(i)}h_iv_i$
and so, since $\hat{f}$ commutes with the action of each $h_i\in H$ (because $f$ does) and with each transversal element
$\displaystyle\begin{aligned}\hat{f}(gv)&=\sum\limits_{i=1}^n\hat{f}(t_{\sigma(i)}h_iv_i)\\&=\sum\limits_{i=1}^nt_{\sigma(i)}h_i\hat{f}(v_i)\\&=\sum\limits_{i=1}^ngt_i\hat{f}(v_i)\\&=g\hat{f}\left(\sum\limits_{i=1}^nt_iv_i\right)\\&=g\hat{f}(v)\end{aligned}$
Okay, so we've got a map $f\mapsto\hat{f}$ that takes $H$-module morphisms in $\hom_H(V,W\!\!\downarrow^G_H)$ to $G$-module homomorphisms in $\hom_G(V\!\!\uparrow_H^G,W)$. But is it an isomorphism? Well, we can go from $\hat{f}$ back to $f$ by just looking at what $\hat{f}$ does on the component
$\displaystyle V=1V\subseteq\bigoplus\limits_{i=1}^nt_iV$
If we only consider the actions of elements $h\in H$, they send this component back into itself, and by definition they commute with $\hat{f}$. That is, the restriction of $\hat{f}$ to this component is an $H$-intertwinor, and in fact it's the same as the $f$ we started with.
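On the level of characters, this adjunction is the classical statement $\langle\chi\!\uparrow_H^G,\psi\rangle_G=\langle\chi,\psi\!\downarrow_H^G\rangle_H$. Here is a from-scratch numerical check for $G=S_3$ and $H=\{e,(1\,2)\}$, using the sign character of $H$ and the two-dimensional irreducible character of $S_3$; the example is my own choice, not from the post:

```python
from itertools import permutations

# elements of S3 as tuples p, where p[i] is the image of i
G = list(permutations(range(3)))

def mul(p, q):                      # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

e = (0, 1, 2)
H = [e, (1, 0, 2)]                  # the subgroup {e, (0 1)}, isomorphic to C2

def chi0(g):                        # sign character of H, extended by zero off H
    return {e: 1, (1, 0, 2): -1}.get(g, 0)

def psi(g):                         # character of the 2-dim irreducible of S3
    fixed = sum(1 for i in range(3) if g[i] == i)
    return fixed - 1                # permutation character minus the trivial one

def chi_up(g):                      # induced character: (1/|H|) sum_x chi0(x g x^-1)
    return sum(chi0(mul(mul(x, g), inv(x))) for x in G) / len(H)

def inner(f1, f2, S):               # <f1, f2> over S (these characters are real)
    return sum(f1(g) * f2(g) for g in S) / len(S)

lhs = inner(chi_up, psi, G)         # <chi induced up to G, psi> over G
rhs = inner(chi0, psi, H)           # <chi, psi restricted to H> over H
print(lhs, rhs)                     # both come out to 1.0
```

Both inner products equal 1, as the adjunction (equivalently, Frobenius reciprocity for characters) predicts.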
Source: http://infoscience.epfl.ch/record/141317
Conference paper
# Bit Precision Analysis for Compressed Sensing
• Presented at: Seoul, Korea Republic, 2009
• Published in: Proceedings of the IEEE International Symposium on Information Theory (ISIT), p. 1--5
This paper studies the stability of some reconstruction algorithms for compressed sensing in terms of the bit precision. Considering the fact that practical digital systems deal with discretized signals, we motivate the importance of the total number of accurate bits needed from the measurement outcomes in addition to the number of measurements. It is shown that if one uses a $2k \times n$ Vandermonde matrix with roots on the unit circle as the measurement matrix, $O(\ell + k \log \frac{n}{k})$ bits of precision per measurement are sufficient to reconstruct a $k$-sparse signal $x \in \R^n$ with dynamic range (i.e., the absolute ratio between the largest and the smallest nonzero coefficients) at most $2^\ell$ within $\ell$ bits of precision, hence identifying its correct support. Finally, we obtain an upper bound on the total number of required bits when the measurement matrix satisfies a restricted isometry property, which is in particular the case for random Fourier and Gaussian matrices. For very sparse signals, the upper bound on the number of required bits for Vandermonde matrices is shown to be better than this general upper bound.
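As an illustration of the Vandermonde construction (with arbitrary small parameters, not the paper's), one can check the key algebraic fact that makes $2k$ measurements suffice: any $2k$ columns of such a matrix are linearly independent, so no two distinct $k$-sparse signals produce the same measurements.

```python
import numpy as np

n, k = 16, 3
# 2k x n Vandermonde measurement matrix whose nodes are distinct points
# on the unit circle (illustrative choice here: the n-th roots of unity)
z = np.exp(2j * np.pi * np.arange(n) / n)
Phi = np.vstack([z**j for j in range(2 * k)])        # shape (2k, n)

# any 2k columns form a square Vandermonde matrix with distinct nodes,
# hence invertible; so no nonzero 2k-sparse vector lies in the null space,
# and the 2k measurements y = Phi @ x determine a k-sparse x uniquely
rng = np.random.default_rng(0)
cols = rng.choice(n, size=2 * k, replace=False)
assert np.linalg.matrix_rank(Phi[:, cols]) == 2 * k
print(Phi.shape)   # (6, 16)
```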
Keywords: algoweb_misc
#### Reference
• ALGO-CONF-2009-002
Record created on 2009-09-30, modified on 2012-03-21
Source: http://physics.stackexchange.com/questions/54200/superconducting-gap-temperature

# Superconducting gap, temperature
Tinkham (page 63) states that the temperature dependence of the gap energy of a superconductor $\Delta(T)$ can be calculated using the following integral (the finite-temperature BCS gap equation): $$\frac{1}{N(0)V}=\int_0^{\hbar\omega_c}\frac{\tanh\!\left(\tfrac{1}{2}\beta\sqrt{\xi^2+\Delta^2}\right)}{\sqrt{\xi^2+\Delta^2}}\,d\xi.$$
How can this actually be carried out? I am not sure how to approach this problem or re-arrange the equation for finding $\Delta(T)$ numerically.
-
## 1 Answer
I have not tried it on this specific equation, but in principle you can solve problems like this by a combination of numerical integration and a root-finding algorithm. For a variable $x$ you wish to determine, such that the integral takes a fixed value $v$, the root-finding algorithm will find a solution to the equation

$$v - \operatorname{integral}(x) = 0.$$
The root-finding algorithm will try to match the variable in such a way that the equation is satisfied, while the integral is evaluated numerically. In your example, $\Delta$ corresponds to the variable $x$ while $1/N(0)V$ takes on the role of $v$.
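Assuming the integral in question is the standard BCS gap equation above, the recipe can be sketched with nothing more than midpoint integration and bisection; the cutoff and coupling values below are illustrative choices, not Tinkham's numbers:

```python
import numpy as np

# illustrative parameters: energies measured in kelvin
hw_c = 100.0      # cutoff (Debye energy over k_B)
NV = 0.3          # dimensionless coupling N(0)V

def gap_integral(delta, T, n=20000):
    # integral of tanh(E / 2T) / E over xi in [0, hw_c], E = sqrt(xi^2 + delta^2)
    xi = (np.arange(n) + 0.5) * (hw_c / n)          # midpoint rule
    E = np.sqrt(xi**2 + delta**2)
    return np.sum(np.tanh(E / (2.0 * T)) / E) * (hw_c / n)

def gap(T, lo=1e-6, hi=1e3, iters=80):
    # bisection on g(delta) = 1/(N(0)V) - integral(delta, T); g is increasing
    # in delta, and g(lo) < 0 < g(hi) whenever T is below T_c
    g = lambda d: 1.0 / NV - gap_integral(d, T)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(gap(1.0))   # close to the T = 0 value hw_c / sinh(1/NV), about 7.1 here
print(gap(3.5))   # smaller: the gap shrinks as T approaches T_c
```

This is exactly the structure the answer describes: an outer root-finder whose objective evaluates the integral numerically at each trial value of Δ.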
-
How can that be done in a programming language though? – ElizabethPor Feb 27 at 0:20
I would start with math software like Mathematica, since it contains useful algorithms for numerical integration and root-finding. – Frederic Brünner Feb 27 at 1:19
This has been done in Mathematica. I'll link you a screen shot: i.stack.imgur.com/0Km6p.png. Problem is, when I try to do this in another language (i.e. Python: pastie.org/6347014), I cannot ask Python to declare and find $\Delta$ without getting a `x is not declared error`. – ElizabethPor Feb 27 at 11:31
Source: http://mathhelpforum.com/advanced-algebra/190343-counterexample-function-help.html

# Thread:
1. ## counterexample, function help
Hey, I can't seem to get anywhere with this big question so if someone could work through this question with me so I understand that would be greatly appreciated.
a) If $g \in M_n$ give a counterexample to $g(x+y)=g(x)+g(y)$
b) Prove $g(\frac{1}{a}x_1 +...+ \frac{1}{a}x_a)=\frac{1}{a}g(x_1)+...+\frac{1}{a}g(x_a)$
c) Using part b, prove that if $G \le M_n$ is finite then $G$ fixes at least one element $k \in R^n$
I think for part c that if $|G|=a$, I can then pick some $x \in R^n$ and consider $x_i = h_ix$ for each element $h_1,...,h_a \in G$.
But I can't get any of part a,b or c anyway.
2. ## Re: counterexample, function help
what is $M_n$?
3. ## Re: counterexample, function help
M is the group of all rigid motions of the plane, n is dimension.
4. ## Re: counterexample, function help
do these rigid motions have to be origin-preserving? if not, then we can let g(x) = x + a, where a is any (non-origin) point in R^n.
then g(x+y) = x + y + a, whereas g(x) + g(y) = x + y + 2a.
5. ## Re: counterexample, function help
Originally Posted by Deveno
do these rigid motions have to be origin-preserving? if not, then we can let g(x) = x + a, where a is any (non-origin) point in R^n.
then g(x+y) = x + y + a, whereas g(x) + g(y) = x + y + 2a.
well M is the coarsest classification of orientation-preserving and orientation-reversing motions, and g(x)=x+a is a translation and thus orientation-preserving. So your answer makes perfect sense, thanks.
Any thoughts on part b and c?
6. ## Re: counterexample, function help
I have thought of an approach for part c, but it does not use part b.
Any help here.
Thanks
7. ## Re: counterexample, function help
I have thought of doing it like $g(\frac{1}{a}\dots$

sorry guys, my computer is stuffing up and my working is wrong, any help
8. ## Re: counterexample, function help
If G is finite then every element of G must be a rotation or a reflection. So I get the rough idea behind part c, but I can't get part b.
Any help at all?
Thanks
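Following the idea earlier in the thread, take the orbit $x_i=h_ix$ and average it using part (b); the average is then fixed by every element of $G$. A numeric sketch with an arbitrarily chosen finite group of plane motions (rotations by multiples of $90^\circ$ about a point):

```python
import numpy as np

c = np.array([1.0, 2.0])                 # arbitrarily chosen centre
def rot(theta):                          # rigid motion: rotation by theta about c
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return lambda v: c + R @ (v - c)

G = [rot(i * np.pi / 2) for i in range(4)]   # finite subgroup of M_2, |G| = a = 4

x = np.array([3.0, -1.0])
orbit = [h(x) for h in G]                # the points x_i = h_i x from the thread
k = sum(orbit) / len(G)                  # the average from part (b)

for h in G:                              # part (c): k is fixed by every h in G
    assert np.allclose(h(k), k)
print(k)                                 # the centre of rotation, [1. 2.]
```

The key point: each h permutes the orbit, and rigid motions are affine, so h maps the orbit's average to itself.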
Source: http://www.physicsforums.com/showthread.php?s=78f45efb5c8e28edb4f98af11b81a095&p=4247029

Physics Forums
## Work done by compressing a container of gas
I want to show that the work done by compressing a container of gas with uniform pressure is $$-\int_{V_i}^{V_f} p(V)\,dV,$$ where p(V) is the pressure of the gas as a function of volume. This equation was derived in my text for the special case of a piston, but I wanted a more general derivation.
So I started by writing down the equation for the power transmitted to the gas in the container: $$P=-\oint p \vec{v} \cdot d\vec{A},$$ where the integral is taken over the entire surface of the container and v is the velocity of some point on the container. Assuming p is uniform, we get that
$$P=-p\oint \vec{v} \cdot d\vec{A} = -p\frac{d}{dt} \oint \vec{r} \cdot d\vec{A} = -p\frac{d}{dt} \int \nabla \cdot \vec{r} dV = -3p\frac{dV}{dt}.$$
Integrating this equation with respect to time gives the wrong result by a factor of 3. What have I done wrong?
Not sure I understand the question. If you're compressing the gas, the pressure should be increasing, otherwise you're not really compressing the gas. Also, if you're calculating a force times a velocity, you're solving for power rather than work.
Quote by Ryoko If you're compressing the gas, the pressure should be increasing, otherwise you're not really compressing the gas.
By "uniform pressure" I mean constant throughout the container, not constant in time. As you can see, pressure is a function of volume in the equation.
Quote by Ryoko Also, if you're calculating a force times a velocity, you're solving for power rather than work.
Which I can then integrate to yield the work done.
By "uniform pressure" I mean constant throughout the container, not constant in time. As you can see, pressure is a function of volume in the equation.
Well I am a bit rusty on my calculus, but I'm noticing that your first function has p as a function of volume, but the last function has p taken out as a constant. The problem is that p is still a function of the enclosed volume and can't be taken out as a constant.
You should be careful about this and check your notes. P should be the external pressure, not the pressure of the gas: (external) work is done against this external pressure. P_int may not even be uniform or definable. It is quite possible for ∫P_int dV to exist but not be equal to the work done. In an extreme case, such as expansion into a vacuum, the work done is zero, but ∫P_int dV is not.
Quote by Ryoko Well I am a bit rusty on my calculus, but I'm noticing that your first function has p as function of volume. But the last function has p taken out as a constant. The problem is that p is still a function of the closed volume and can't be taken out as a constant.
I'm taking the integral over the surface of the container at a fixed instant of time, so the volume is fixed when I perform the integral.
Quote by Studiot P should be the external pressure not the pressure of the gas. (external) Work is done against this external pressure. Pint may not even be uniform or definable.
I don't think this is a problem. The external pressure and the interior pressure at the surface of the container will be equal (unless the container has a large acceleration). It's true that "P_int may not even be uniform or definable", but I'm assuming it is uniform for the purposes of this derivation. This is, as I understand it, a good approximation for quasi-static processes.
At any rate, even if you think it's important to replace internal pressure with external pressure, neither the derivation nor the desired result changes, so we're still left with the original problem. IMO I don't think the problem is with the physics. I don't think my math was quite kosher, especially the step where I pulled the time derivative out of the integral.
If the pressures are equal on both sides of the container, no work is being done.
Quote by dEdt I don't think this is a problem. The external pressure and the interior pressure at the surface of the container will be equal (unless the container has a large acceleration). It's true that "P_int may not even be uniform or definable", but I'm assuming it is uniform for the purposes of this derivation. This is, as I understand it, a good approximation for quasi-static processes. At any rate, even if you think it's important to replace internal pressure with external pressure, neither the derivation nor the desired result changes, so we're still left with the original problem. IMO I don't think the problem is with the physics. I don't think my math was quite kosher, especially the step where I pulled the time derivative out of the integral.
With respect, you need to get the physics correct before performing calculations.
Here is a simplified physics argument along the lines of yours, you might like to reproduce using your surface integrals.
Referring to the diagram, consider a volume of fluid with surface area A.
Let it suffer a small expansion to area A' under a uniform external pressure P.
Consider element of area dA of the surface and let its displacement along the normal be dn.
If this expansion is carried out extremely slowly, no energy of motion will be developed, so the only mechanical work performed will be due to the enlargement of the volume.
Thus the work done by the fluid is
$$\delta W = \sum (P\,dA)\,dn = P\sum dA\,dn = P\,dV$$

since $\sum dA\,dn$ is the total increase in volume of the thin shell between A and A'. Integrating $\delta W$ from $V_1$ to $V_2$ gives the total work.
I think if the pressure is constant while compressing, it is not likely to be an adiabatic process, meaning that there would be interaction with surroundings that you also need to consider, right?
Quote by dEdt I want to show that the work done by compressing a container of gas with uniform pressure is $$-\int_{V_i}^{V_F} p(V)dV,$$ where p(V) is the pressure of the gas as a function of volume. This equation was derived in my text for the special case of a piston, but I wanted a more general derivation. So I started by writing down the equation for the power transmitted to the gas in the container: $$P=-\oint p \vec{v} \cdot d\vec{A},$$ where the integral is taken over the entire surface of the container and v is the velocity of some point on the container. Assuming p is uniform, we get that $$P=-p\oint \vec{v} \cdot d\vec{A} = -p\frac{d}{dt} \oint \vec{r} \cdot d\vec{A} = -p\frac{d}{dt} \int \nabla \cdot \vec{r} dV = -3p\frac{dV}{dt}.$$ Integrating this equation with respect to time gives the wrong result by a factor of 3. What have I done wrong?
Your problem was introducing the position vector $\vec{r}$.
$\nabla \cdot \frac{D\vec{r}}{Dt}$ is not equal to $\frac{D(\nabla \cdot\vec{r})}{Dt}$
You already had the result you needed, and didn't realize it. $$P=-\int p \vec{v} \cdot d\vec{A}=-p\int \vec{v} \cdot \vec{n}dA$$
where $\vec{n}$ is an outwardly directed normal to the boundary. $\vec{v} \cdot \vec{n}$ is just the component of boundary velocity normal to the present boundary location (the component tangent to the boundary surface doesn't contribute to volume increase). When this is integrated over the entire boundary surface, it gives you the rate of change of volume contained within the boundary. Thus,
$$\frac{dV}{dt}=\int \vec{v} \cdot \vec{n}dA$$
I think the next thing you want to do is apply the divergence theorem to my last equation, and then combine the result with the continuity equation (differential mass balance equation) to eliminate del dot v, and replace it with the material time derivative of the natural log of density. $$\frac{dV}{dt}=\int \vec{v} \cdot \vec{n}dA$$ Applying the divergence theorem: $$\frac{dV}{dt}=\int (\nabla \cdot \vec{v})dV$$ The continuity equation gives: $$\nabla \cdot \vec{v}=-\frac{1}{\rho}\frac{D\rho}{Dt}$$ The right hand side of this equation represents physically the local fractional rate in volumetric expansion per unit time, following the material parcels. If we substitute this equation into the previous equation, we get: $$\frac{dV}{dt}=\int (-\frac{1}{\rho}\frac{D\rho}{Dt})dV$$ In this equation, the quantity $(-\frac{1}{\rho}\frac{D\rho}{Dt})dV$ represents physically the time rate of increase in volume of the material parcel dV. Chet
I also wanted to mention that the material derivative D/Dt does not commute with the integral over surface area or volume. D/Dt is strictly a local microscopic derivative following a mass parcel. This is another mistake that you made in your derivation.
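To see concretely where the stray factor of 3 comes from, here is a small numerical sketch (my own example, not from the thread) for a uniformly expanding sphere. By the divergence theorem $\oint \vec{r}\cdot d\vec{A} = 3V$, so differentiating that flux in time gives $3\,dV/dt$, whereas the flux of the boundary velocity is $dV/dt$ itself; the two operations the original derivation interchanged genuinely differ.

```python
# Uniformly expanding sphere R(t) = 1 + 0.1 t:
#   flux of boundary velocity: ∮ v·n dA = R'(t) · 4πR²   (equals dV/dt)
#   flux of position vector:   ∮ r·n dA = R · 4πR² = 3V  (divergence theorem)
import math

def R(t):
    return 1.0 + 0.1 * t

def volume(t):
    return 4.0 / 3.0 * math.pi * R(t) ** 3

def flux_v(t, h=1e-6):
    # v·n = dR/dt everywhere on the sphere
    r_dot = (R(t + h) - R(t - h)) / (2.0 * h)
    return r_dot * 4.0 * math.pi * R(t) ** 2

def flux_r(t):
    # r·n = R everywhere on the sphere
    return R(t) * 4.0 * math.pi * R(t) ** 2

t, h = 2.0, 1e-6
dV_dt = (volume(t + h) - volume(t - h)) / (2.0 * h)
d_flux_r_dt = (flux_r(t + h) - flux_r(t - h)) / (2.0 * h)

print(flux_v(t) / dV_dt)      # ≈ 1:  ∮ v·n dA = dV/dt
print(d_flux_r_dt / dV_dt)    # ≈ 3:  d/dt ∮ r·n dA = 3 dV/dt
```

So $\oint \vec{v}\cdot d\vec{A} \ne \frac{d}{dt}\oint \vec{r}\cdot d\vec{A}$: the surface element $d\vec{A}$ itself changes in time, which is exactly the commutation error pointed out above.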
# How do I balance this combustion reaction?
I have butene being burned in a fuel-rich engine and I can't seem to balance this reaction:
$C_{4}H_{8}+a(O_{2}+3.76N_{2}) \rightarrow bCO_{2}+cH_{2}O+dC_{4}H_{8}+e(3.76N_{2})$
When I try write equations for the elements, I get 5 equations and 4 unknowns which can't be solved. Here are my equations:
Carbon: $4=b+4d$
Oxygen: $2a=2b+c$
Nitrogen: $3.76a=3.76e;\quad a=e$
Hydrogen: $8=2c+8d$
Edit: I'm not sure if this makes a different but it says that water in the combustion products is removed before the dry analysis of the exhaust. The DRY analysis has a composition of 14.95% $CO_{2}$, 0.75% $C_{4}H_{8}$, 0% $CO$, 0% $H_{2}$, 0% $O_{2}$, and balance $N_{2}$ (I don't know what that means)
re last sentence: it means you've got a fifth equation for your five unknowns. By the way, at the end of your reaction equation, should the 3(...) actually be e(...)? – EnergyNumbers Sep 24 '12 at 11:05
as EnergyNumbers pointed out DRY analysis gives you a fifth equation: $b/d = 14.95/0.75 = 19.933$. Solving those will give you $a = e \approx 4.997$, $b = c \approx 3.331$ and $d \approx 0.167$. – mythealias Oct 28 '12 at 18:52
## 2 Answers
Balanced equation for the combustion of $\ce{C4H8}$ is $$\ce{C4H8 + 6O2 -> 4CO2 + 4H2O}$$ Nitrogen is included only to indicate that it is present in the medium with some specific ratio to the oxygen amount (well, technically it is included to make the question look complicated).
$\ce{C4H8}$ on the products side is included to indicate that not all of it can be combusted but some escapes. And it really is arbitrary; it depends on the engineering of the combustion chamber etc., which is why you have 4 equations, 5 unknowns: d is free to vary. But you are given the dry analysis to fix its value.
Looking at the equations given, you have
```
b = c = 4(1-d)
a = e = b + c/2 = 6(1-d) = 3b/2
```
And dry analysis says ratio of $\ce{CO2}$:$\ce{C4H8}$ is 14.95:0.75 = 19.93 (b/d) after the combustion. And this is the last equation you need.
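As a sketch (mine, not from the answer), the whole system can be solved numerically by substitution, using the hydrogen balance $8 = 2c + 8d$ (eight hydrogen atoms on each side) together with the dry-analysis ratio:

```python
# Balance C4H8 + a(O2 + 3.76 N2) -> b CO2 + c H2O + d C4H8 + e(3.76 N2)
# C:   b + 4d = 4        ->  b = 4 - 4d
# H:   2c + 8d = 8       ->  c = 4 - 4d   (hence b = c)
# dry: b/d = 14.95/0.75  ->  fixes d
# O:   2a = 2b + c       ->  a = b + c/2
# N:   3.76a = 3.76e     ->  e = a
ratio = 14.95 / 0.75              # CO2 : C4H8 in the dry exhaust
d = 4.0 / (ratio + 4.0)
b = c = 4.0 - 4.0 * d
a = e = b + c / 2.0

print(round(a, 3), round(b, 3), round(c, 3), round(d, 3))  # ≈ 4.997 3.331 3.331 0.167
```

This matches the relations $b = c = 4(1-d)$ and $a = e = 6(1-d)$ from the answer: $6(1-0.167) \approx 5.0$.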
See this video for a complete description of a matrix-based approach. It starts by showing how to set up the matrix. It then takes each step of reducing the matrix and extracting coefficients.
# The Unapologetic Mathematician
## Homology
Today we can define homology before I head up to the Baltimore/DC area for the weekend. Anyone near DC who wants to hear about anafunctors can show up at George Washington University’s topology seminar on Friday.
As a preliminary, we need to know what quotients in an abelian category are. In $\mathbf{Ab}$ we think of an abelian group $G$ and a subgroup $H$ and consider two elements of $G$ to be equivalent if they differ by an element of $H$. In a general abelian category this approach causes problems because we don't have any elements to work with.
Instead, remember that $H$ comes with an “inclusion” arrow $H\rightarrow G$, and that the quotient has a projection arrow $G\rightarrow G/H$. The inclusion arrow is monic, the projection is epic, and an element of the quotient is zero if and only if it comes from an element of $G$ that is actually in $H$. That is, we have a short exact sequence $\mathbf{0}\rightarrow H\rightarrow G\rightarrow G/H\rightarrow\mathbf{0}$. But we know in any abelian category that this short exact sequence means that the projection is the cokernel of the inclusion. So in general if we have a monic $m:A\rightarrow B$ we define $B/A=\mathrm{Cok}(m)$.
Now we define a chain complex in an abelian category $\mathcal{C}$ to be a sequence $\cdots\rightarrow C_{i+1}\rightarrow C_i\rightarrow C_{i-1}\rightarrow\cdots$ with arrows $d_i:C_i\rightarrow C_{i-1}$ so that $d_{i-1}\circ d_i=0$. In particular, an exact sequence is a chain, since the composition of two arrows in the sequence is the zero homomorphism. But a chain complex is not in general exact. Homology will be the tool to measure exactly how the chain complex fails to be exact.
So let's consider a composable pair of arrows $f:A\rightarrow B$ and $g:B\rightarrow C$,
where $g\circ f=0$. We can factor $f$ as $m\circ e$ for an epic $e$ and a monic $m=\mathrm{Im}(f)$. We can also construct the kernel $\mathrm{Ker}(g)$ of $g$. Now $g\circ m\circ e=g\circ f=0=0\circ e$, so $g\circ m=0$ because $e$ is epic. This means that $m$ factors through $\mathrm{Ker}(g)$, and the arrow $\mathrm{Im}(f)\rightarrow\mathrm{Ker}(g)$ must be monic.
Now, if the sequence were exact then $\mathrm{Im}(f)$ would be the same as $\mathrm{Ker}(g)$, and the arrow we just constructed would be an isomorphism. But in general it’s just a monic, and so we can construct the quotient $\mathrm{Ker}(g)/\mathrm{Im}(f)$. When the sequence is exact this quotient is just the trivial object $\mathrm{0}$, so the failure of exactness is measured by this quotient.
In the case of a chain complex we consider the above situation with $f=d_{i+1}$ and $g=d_i$, so they connect through $C_i$. We define $Z_i=\mathrm{Ker}(g)$ and $B_i=\mathrm{Im}(f)$, which are both subobjects of $C_i$. Then the “homology object” $H_i$ is the quotient $H_i=Z_i/B_i$. We can string these together to form a new chain complex $\cdots\rightarrow H_{i+1}\rightarrow H_i\rightarrow H_{i-1}\rightarrow\cdots$ where all the arrows are zero. This makes sense because if we think of the case of abelian groups, $H_i$ consists of equivalence classes of elements of $Z_i$, and when we hit any element of $Z_i$ by $d_i$ we get ${0}$. Thus the residual arrows when we pass from the original chain complex to its homology are all zero morphisms.
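To see the definition in action, here is a tiny example (mine, not from the original post) in $\mathbf{Ab}$: take the chain complex whose only nonzero differential is multiplication by $2$.

```latex
\cdots \to 0 \to \mathbb{Z} \xrightarrow{\times 2} \mathbb{Z} \to 0 \to \cdots
\qquad
H_1 = \ker(\times 2) = 0,
\qquad
H_0 = \mathbb{Z}/2\mathbb{Z}
```

Here $Z_1=\ker(\times 2)=0$ forces $H_1=0$, while $Z_0=\mathbb{Z}$ (the kernel of the zero map) and $B_0=\operatorname{im}(\times 2)=2\mathbb{Z}$ give $H_0=\mathbb{Z}/2\mathbb{Z}$: the complex is exact at every spot except degree zero, and the homology records exactly that failure.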
Posted by John Armstrong | Category theory
## 8 Comments »
1. Yay! Now we’re talking!
Seriously, reading these posts of yours makes me think about whether it’d be reasonably easy to feed this kind of homological algebra into Coq, or even, possibly, automate diagram chasing proofs with proof assistants.
Comment by | October 4, 2007 | Reply
2. [...] a couple weeks ago I defined a chain complex to be a sequence with the property that . The maps are called the “differentials” of [...]
Pingback by | October 16, 2007 | Reply
3. [...] We’ve defined chain complexes in an abelian category, and chain maps between them, to form an -category . Today, we define chain [...]
Pingback by | October 17, 2007 | Reply
4. [...] out the sequence with — the trivial space — in either direction. This is just like a chain complex, except the arrows go backwards! Instead of the indices counting down, they count up. We can deal [...]
Pingback by | July 20, 2011 | Reply
5. [...] Cartan’s formula in hand we can show that the Lie derivative is a chain map . That is, it commutes with the exterior derivative. And indeed, it’s easy to [...]
Pingback by | July 28, 2011 | Reply
6. [...] armed with chains — formal sums — of singular cubes we can use them to come up with a homology theory. Since we will use singular cubes to build it, we call it “cubic singular [...]
Pingback by | August 9, 2011 | Reply
7. [...] homology we’ve constructed is actually a functor. That is, given a smooth map we want a chain map , which then will induce a map on homology: [...]
Pingback by | August 10, 2011 | Reply
8. [...] The algebra of differential forms — together with the exterior derivative — gives us a chain complex. Since pullbacks of differential forms commute with the exterior derivative, they define a chain [...]
Pingback by | December 2, 2011 | Reply
## Necessary Conditions For Implicit Function
The Implicit Function Theorem provides sufficient conditions to determine when a function is defined implicitly by a relation. I would like to know some ways to determine when no such function is defined.
Below is a link to a specific example and conjecture.
http://math.stackexchange.com/questions/46750/how-to-prove-the-implicit-function-theorem-fails
## 4 Answers
In the same spirit as Michael's answer: Over the complex variables, there is a theorem due to W.F. Osgood (published in his "Lehrbuch der Funktionentheorie", 2. Aufl., Bd. 2, Parte 1, Leipzig 1929) about solvability of $w=f(z)$ for a system of holomorphic functions on a neighborhood of a point $a \in \mathbb{C}^n$, which is an isolated point of the set $\{z : f(z)=b:=f(a)\}$. This is nicely discussed in B.V. Shabat's book "Complex analysis" (part II, section 14, item 44, although I am not sure if this is included in the English translation of the book), in terms of resultants and Weierstrass' Preparation Theorem. Shabat also refers to: M. Herve, "Several Complex Variables. Local Theory", Oxford 1963. And there is some information (on the topic of failure of the implicit function theorem) in the book by Aizenberg and Yuzhakov on residue theorems.
A natural approach would be to classify such singular points by the deficiency of the rank of the Jacobian. If the deficiency is one, you can solve for all but one of the variables and reduce the problem to a scalar equation. The rest is then quite straightforward: The equation f(x)=y, with f(0)=0 is solvable for x in a neighborhood of 0 if the leading term in the Taylor expansion of f is odd; it is not always solvable if the leading term is even. If the deficiency in the rank of the Jacobian is two, you end up with a system of two equations, generally quadratic at leading order. Discussing the solvability of such a system is still a manageable task.
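To make the scalar criterion concrete, here is a small sketch (my example, not the answerer's): $f(x)=x^3$ has an odd leading term and $f(x)=y$ is solved near $0$ by $x=\operatorname{sgn}(y)\,|y|^{1/3}$, while $f(x)=x^2$ has an even leading term and $f(x)=y$ has no real solution for $y<0$.

```python
# Odd leading term: x**3 = y has a real solution for every y near 0.
import math

def inv_cubic(y):
    # real cube root, valid for negative y as well
    return math.copysign(abs(y) ** (1.0 / 3.0), y)

for y in (-1e-3, -1e-6, 0.0, 1e-6, 1e-3):
    assert abs(inv_cubic(y) ** 3 - y) <= 1e-12 * max(1.0, abs(y))

# Even leading term: x**2 = y fails for y < 0, since x**2 >= 0 for all real x.
assert all(x * x >= 0.0 for x in (-2.0, -0.5, 0.0, 0.5, 2.0))
```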
In this context, a remarkable phenomenon to consider is also lack of uniqueness, that may be considered an instance of bifurcation.
@Pietro: I'm sorry, I don't quite follow. In the context, what is non-unique? Just to see if I understand correctly, are you referring to the case where the implicit relation is multi-valued (for example the relation from $x$ to $y$ near $(1,0)$ for the function $f(x,y) = x^2 + y^2 = 1$)? Thanks. – Willie Wong Jun 25 2011 at 11:58
Yes: e.g. for $f:\Lambda \times X\to Y$, I mean non-uniqueness, for a given $\lambda\in\Lambda$, of a point $x$ such that $f(\lambda,x)=0$. If this happens in any nbd of some $\lambda_0$, we say $\lambda_0$ is bifurcation point for the equation $f(\lambda,x)=0$. I meant to recall that a relevant case (thus not a pathology but an important phenomenon) where the picture of the IFT does not hold is this, bifurcation. – Pietro Majer Jun 28 2011 at 16:36
Just a comment about the link you posted: G=0 implies $4uv=2y^2-2x^2$ so when you substitute in $F=0$ you simply get $x^2+y^2+u^2+v^2=0$ which can be only satisfied by $x=y=u=v=0$ (if you work in $\mathbb{R}^4$...)
Yeah, that result was mentioned; I guess it was kind of assumed the variables are real. I elaborated a little to say that this means you can't have x or y defined as functions of u and v in an open n-hood of any point (u,v), because the only point that's a candidate is (0,0), and even at this point any variation in u or v will no longer satisfy the system. What I'm curious about is whether there are other ways to make conclusions like that. – LaLone Jun 25 2011 at 2:06
# Small car colliding with large truck
A small car collides with a large truck. Why do both vehicles experience the same magnitude of force? Wouldn't the large vehicle experience less force than the small one?
## 2 Answers
The two vehicles experience a force of the same magnitude due to Newton's third law:
If object $A$ exerts a force $\mathbf F_{AB}$ on object $B$, then object $B$ will exert a force $\mathbf F_{BA}$ on object $A$ and $$\mathbf F_{BA} = -\mathbf F_{AB}$$
However, what you're probably thinking about is that the motion of the car is more drastically affected by the collision. This can be explained by Newton's second law. Let's say the truck has mass $M$ and the car has mass $m$. If the magnitude of the force that both vehicles experience is $F$, then the magnitudes of their respective accelerations are $$a_\mathrm{truck} = \frac{F}{M}, \qquad a_\mathrm{car} = \frac{F}{m}$$ and combining these we get $$\frac{a_\mathrm{truck}}{a_\mathrm{car}} = \frac{m}{M}$$ So if the mass of the car is a lot less than the mass of the truck, then the acceleration of the truck is much smaller than the acceleration of the car, and if you were to watch the collision, the truck would pretty much seem like its motion was unaffected, but the car's motion would change quite a bit.
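Plugging in illustrative numbers (my own; nothing in the question fixes them), with a 1000 kg car, a 20000 kg truck, and a mutual force of magnitude 100 kN:

```python
# Newton's third law: same force magnitude F on both vehicles.
# Newton's second law: accelerations scale as 1/mass.
m_car, M_truck = 1000.0, 20000.0   # kg  (assumed illustrative masses)
F = 100e3                          # N   (assumed illustrative force)

a_car = F / m_car                  # 100 m/s^2
a_truck = F / M_truck              # 5 m/s^2

print(a_car, a_truck, a_truck / a_car)   # 100.0 5.0 0.05  (= m_car / M_truck)
```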
Thanks but how do you know the forces are the same? – user1530249 Feb 16 at 23:31
Does equal but opposite reaction mean equal but opposite force? – user1530249 Feb 16 at 23:33
Yup! See the edit where I wrote down Newton's third law. – joshphysics Feb 16 at 23:49
1. ## Implicit Differentiation of a Trigonometric Equation [Multiple Choice]
If $cos (xy) = x$, then $\frac{dy}{dx}=$
1. (A) $\frac{-csc (xy) - y}{x}$
2. (B) $\frac{csc (xy) - 1}{x}$
3. (C) $-csc (xy)$
4. (D) $\frac{-csc (xy)}{x}$
5. (E) $-csc (xy) - 1$
2. Originally Posted by StarlitxSunshine
If $cos (xy) = x$, then $\frac{dy}{dx}=$
1. (A) $\frac{-csc (xy) - y}{x}$
2. (B) $\frac{csc (xy) - 1}{x}$
3. (C) $-csc (xy)$
4. (D) $\frac{-csc (xy)}{x}$
5. (E) $-csc (xy) - 1$
I give up...Which one is it?
What's the problem, what don't you understand about the question? where are you stuck? Help me help you.
3. Originally Posted by StarlitxSunshine
If $cos (xy) = x$, then $\frac{dy}{dx}=$
1. (A) $\frac{-csc (xy) - y}{x}$
2. (B) $\frac{csc (xy) - 1}{x}$
3. (C) $-csc (xy)$
4. (D) $\frac{-csc (xy)}{x}$
5. (E) $-csc (xy) - 1$
Implicit differentiation gives
$-\sin(xy) (y + x y') = 1$
Solve for y':
$y' = \frac{\frac{-1}{sin(xy)}-y}{x}=\frac{-csc(xy)-y}{x}$
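As a numerical sanity check (mine, not from the thread): on $0 < x < 1$ one explicit branch of $cos(xy) = x$ is $y = \arccos(x)/x$, and a central-difference derivative of that branch matches choice (A):

```python
import math

def y(x):
    # explicit branch of cos(x*y) = x on 0 < x < 1
    return math.acos(x) / x

def dydx_A(x):
    # choice (A): dy/dx = (-csc(x*y) - y) / x
    yy = y(x)
    return (-1.0 / math.sin(x * yy) - yy) / x

x0, h = 0.5, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2.0 * h)
print(abs(numeric - dydx_A(x0)) < 1e-6)  # True
```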
4. Originally Posted by VonNemo19
I give up...Which one is it?
What's the problem, what don't you understand about the question? where are you stuck? Help me help you.
>_< I know how you feel xD
I tried implicit differentiation, but I don't think I'm doing it right. I've never done a question where the sin function has two variables (both x and y) in it, so I'm not really sure how that works. And moreover, I can't seem to get past the first step of the implicit differentiation because I don't know what to do next. :33
5. Originally Posted by StarlitxSunshine
>_< I know how you feel xD
I tried implicit differentiation, but I don't think I'm doing it right. I've never done a question where the sin function has two variables (both x and y) in it, so I'm not really sure how that works. And moreover, I can't seem to get past the first step of the implicit differentiation because I don't know what to do next. :33
"implicit" means that it is "implied" that $y$ is a function of $x$. This means that when $x$ and $y$ are grouped together as a product, quotient, etc., it must be understood that one must employ the necessary and appropriate method of differentiating.
For example, consider the function
$f(x)=y=x^2+x$
Taking the derivative is simple enough
$f'(x)=\frac{dy}{dx}=2x+1$
Well, what if we had simply rewritten it before differentiating, and subtracted $y$ from both sides
$0=x^2+x-y$
No problem, $y$ is still a function of $x$, so the derivative "with respect to x" is
$0=2x+1-\frac{dy}{dx}$
And adding, we are back to where we started
$\frac{dy}{dx}=2x+1$.
In this case, it was very easy to solve for $y$, but sometimes - as in the problem you have provided - solving for $y$ is tedious, and in some cases, impossible. So, we understand $y$ to be an "implied" function of $x$ and we differentiate.
Bye!
## Uniform solutions to Post’s problem for axiomatizable theories
The Second Incompleteness Theorem says that if $T$ is a consistent (computably) axiomatizable theory which extends IΣ1, then $\mathrm{Con}(T)$ is not provable from $T$. By analogy with computability theory, the stronger theory $T + \mathrm{Con}(T)$ can be thought of as the "jump" of $T$. To abuse this analogy, I will use $T'$ to denote the theory $T + \mathrm{Con}(T)$. I will write $T \leq S$ when $S$ proves every axiom of $T$; I will also write $S \equiv T$ (resp. $T < S$) when $T \leq S$ and $S \leq T$ (resp. $S \nleq T$).
It is well-known that if $T$ is consistent there are plenty of axiomatizable theories $S$ such that $T < S < T'$. In the following questions $H$ will denote an operator (like $\mathrm{Con}$) that uses the computable axiomatization of $T$ to produce a sentence $H(T)$. I will write $T^H$ for the theory $T + H(T)$.
1. Is there a computable operator $H(T)$ such that $T < T^H < T'$ for every consistent axiomatizable theory $T$ extending IΣ1? Is there such an operator which moreover satisfies that $T \equiv S$ implies $T^H \equiv S^H$?
2. Is there a computable operator $H(T)$ such that $(T^H)^H \equiv T'$ for every consistent axiomatizable $T$ extending IΣ1? Is there such an operator which moreover satisfies that $T \equiv S$ implies $T^H \equiv S^H$?
Question 1 asks for a uniform solution to the analogue of Post's Problem for axiomatizable theories. Question 2 asks for a uniform "half-jump" operator.
A little-known fact: Q is not strong enough for the second incompleteness theorem as it is usually proved, because Q doesn't prove the Hilbert-Bernays conditions on the Bew predicate. In fact, I read recently that Q does not verify that Bew is closed under modus ponens. I have not read the proof, though. Sigma^0_1 induction is apparently enough for the theory to verify the Hilbert-Bernays conditions, but not Sigma^0_0 induction. I only learned this when I had to actually document the hypotheses required for the incompleteness theorems this winter to help out an undergrad. – Carl Mummert May 30 2010 at 3:01
Thanks Carl! I just replaced Q by ISigma_1 as you suggested. – François G. Dorais♦ May 30 2010 at 3:20
Very nice question, in particular #2. – Halfdan Faber May 30 2010 at 4:48
François, could you clarify what sense of computability you want here? Shall we assume that T is given by finitely many axioms over the base theory? Or do you want us to work with a program enumerating T? – Joel David Hamkins May 30 2010 at 20:53
Joel, I don't think it matters much what type of machine enumeration you use, but if you find it more comfortable you can assume that all enumerations are primitive recursive. Any computable enumeration is equivalent to a primitive recursive one via padding tricks. – François G. Dorais♦ May 30 2010 at 21:02
## 2 Answers
(Note: this has been rewritten to reflect the comments below).
The answer to #1 is basically yes, because the proof that the Lindenbaum algebra above T is atomless is completely constructive.
Start with a (consistent) theory T to which the second incompleteness theorem applies, which means that T + ~Con(T) is also consistent. Then there is a sentence S such that T + ~Con(T) neither proves nor disproves S (using the first incompleteness theorem via Rosser's trick). So T + ~Con(T)$\land$~S is stronger than T + ~Con(T), but is still consistent. This means that T + ~(Con(T)$\lor$S) is consistent, so T + Con(T)$\lor$S is stronger than T.
If T $\vdash$ (Con(T)$\lor$S) $\to$ Con(T) then T $\vdash$ S $\to$ Con(T). But this means T $\vdash$ ~Con(T) $\to$ ~S, which is impossible. This shows that T + (Con(T)$\lor$S) < T + Con(T).
So we can let $T^H$ be T + (Con(T)$\lor$S).
That was blazing fast Carl! Any thoughts on #2? – François G. Dorais♦ May 30 2010 at 3:40
Thanks; I see my blind spot now. I rewrote the answer above. – Carl Mummert May 31 2010 at 3:29
I was afraid I had extra negation symbols; the code started to blur together in my browser, but I thought I had double-checked them. They should be fixed now. – Carl Mummert Jun 1 2010 at 4:41
Great ! – Joel David Hamkins Jun 1 2010 at 5:06
I have deleted my earlier comments, which pointed out a flaw in the original answer, because the issue has been completely addressed in the revised answer. – Joel David Hamkins Jun 2 2010 at 19:58
The premise in your question that the Con operator itself has the desired property and serves as a jump operator is not universally true among the theories you consider. Specifically, you seem to assume that because $\text{Con}(T)$ is not provable in $T$, that $T+\text{Con(T)}$ is consistent. But this is not correct, because perhaps $T$ actually proves $\neg\text{Con}(T)$. One easy instance of this is the theory $T=PA+\neg\text{Con}(PA)$, which is consistent by the 2nd Incompleteness Theorem, but clearly proves $\neg\text{Con}(PA)$ and hence also $\neg\text{Con}(T)$. Thus, as weird as it sounds, $T$ is a consistent theory that proves its own inconsistency. In this case your theory $T'$ is inconsistent and the jump failed. Carl's theory $T^H$ in this case is consistent, but upon inspection you will find that it is equivalent to $T$. So for this theory $T$, your theory $T'$ jumped into inconsistency, and his theory didn't jump at all.
One can similarly replace $PA$ here with any representable theory $T_0$ and arrive at similar counterexamples, densely above any theory.
You can fix the question by considering only the case where $T'$ is consistent, which is surely what you had in mind. In this event, you would only apply the jump when it happens to arrive at a consistent theory. Since this question is not decidable from a presentation of the theory, however, even from a finite axiomatization, it may affect your motivation for considering computable versions of the half-jump, since even the full jump is not computable.
For this reason, and also because there is something a little arbitrary about having the jump only partially defined, perhaps a more robust jump arises from the Rosser sentence---there is no proof of me without a shorter proof of my negation---instead of $\text{Con}(T)$? This would put you back into the universal domain of all representable consistent theories.
-
I wasn't assuming that $T'$ is consistent; the inconsistent theory is the top of the lattice. However, you are right that using Rosser's sentence instead is perfectly justifiable. – François G. Dorais♦ May 30 2010 at 20:51
I don't know if Rosser's sentence gives a theory which is independent of the enumeration of T. Do you happen to know? – François G. Dorais♦ May 30 2010 at 21:30
I doubt it. At the very least, it would seem required for the theory to prove that the enumerations gave the same theory, not merely that this was true. – Joel David Hamkins May 30 2010 at 21:45
Why is it necessary to prove the equivalence internally? – François G. Dorais♦ May 30 2010 at 21:58
Not only can Rosser’s sentence depend on the enumeration, but in fact, it can also depend on the method of diagonalization. That is, there exist proof predicates for which Rosser’s fixed point equation has more than one solution up to provable equivalence. This is an old result of (IIRC) Guaspari and Solovay. (In contrast, Gödel’s sentence, and more generally any fixed point equation using just the provability predicate and Boolean connectives, is unique up to provable equivalence, for a fixed proof predicate.) – Emil Jeřábek Nov 8 at 11:24
http://cms.math.ca/10.4153/CMB-2010-049-7
Canadian Mathematical Society
# Exceptional Covers of Surfaces
http://dx.doi.org/10.4153/CMB-2010-049-7
Canad. Math. Bull. 53 (2010), 385-393
Published: 2010-05-11
Printed: Sep 2010
• Jeffrey D. Achter,
Department of Mathematics, Colorado State University, Fort Collins, CO, USA
## Abstract
Consider a finite morphism $f: X \rightarrow Y$ of smooth, projective varieties over a finite field $\mathbf{F}$. Suppose $X$ is the vanishing locus in $\mathbf{P}^N$ of $r$ forms of degree at most $d$. We show that there is a constant $C$ depending only on $(N,r,d)$ and $\deg(f)$ such that if $|{\mathbf{F}}|>C$, then $f(\mathbf{F}): X(\mathbf{F}) \rightarrow Y(\mathbf{F})$ is injective if and only if it is surjective.
MSC Classifications: 11G25 - Varieties over finite and local fields [See also 14G15, 14G20]
http://mathoverflow.net/questions/10911/english-reference-for-a-result-of-kronecker/11036
## English reference for a result of Kronecker?
Kronecker's paper Zwei Sätze über Gleichungen mit ganzzahligen Coefficienten apparently proves the following result that I'd like to reference:
Let $f$ be a monic polynomial with integer coefficients in $x$. If all roots of $f$ have absolute value at most 1, then $f$ is a product of cyclotomic polynomials and/or a power of $x$ (that is, all nonzero roots are roots of unity).
However, I don't have access to this article, and even if I did my 19th century German skills are lacking; does anyone know a reference in English I could check for details of the proof?
-
As an aside, a great resource for finding old German papers is the GDZ website gdz.sub.uni-goettingen.de At the site you can search for and download whatever paper you happen to be interested in (for example, Kronecker's paper is there). – Ben Linowitz Jan 6 2010 at 16:22
And 'old German papers' includes (all, I think, of) Inventiones, for example. – Mariano Suárez-Alvarez Feb 1 2010 at 1:57
"Lectures on the theory of algebraic numbers" by Erich Hecke, Section 34, Lemma (a) p. 108, books.google.com/… Sorry for being so late! – Pierre-Yves Gaillard Sep 17 2010 at 5:25
## 6 Answers
If all the Galois conjugates of an algebraic integer $\alpha$ have absolute value at most 1, then the norm of this algebraic integer is a rational integer with absolute value at most 1. Hence either the algebraic integer is 0, or its norm is $\pm1$, and in the latter case all the Galois conjugates of $\alpha$ must have absolute value equal to 1. Now it's a well-known fact that the only algebraic integers all of whose conjugates have absolute value 1 are the roots of unity [Proof: bounds on the absolute values of the conjugates give bounds on the coefficients of the min polys, and so there are only finitely many possible min polys for $\alpha^n$, $n=1,2,3,\ldots$ (as the degrees are bounded too), and hence $\alpha^n=\alpha^m$ for some $m>n>0$], so there is a complete proof for you.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
I don't know a reference, but here is a quick proof: Let the roots of the polynomial be $\alpha_1$, $\alpha_2$, ..., $\alpha_r$. Let
$$f_n(x) = \prod_{i=1}^r (x- \alpha_i^n).$$
All the coefficients of $f_n$ are rational, because they are symmetric functions of the $\alpha$'s, and are algebraic integers, because the $\alpha$'s are, so they are integers. Also, since $|\alpha_i| \leq 1$, the coefficient of $x^k$ in $f_n$ is at most $\binom{r}{k}$.
Combining the above observations, the coefficients of the $f_n$ are integers in a range which is bounded independent of $n$. So, in the infinite sequence $f_i$, only finitely many polynomials occur. In particular, there is some $k$ and $\ell$, with $\ell>0$, such that $f_{2^k} = f_{2^{k + \ell}}$. So raising to the $2^{\ell}$ power permutes the list $(\alpha_1^{2^{k}}, \ldots, \alpha_r^{2^k})$. For some positive $m$, raising to the $2^{\ell}$ power $m$ times will be the trivial permutation. In other words,
$$\alpha_i^{2^k} = \alpha_i^{2^{k+\ell m}}.$$
Every root of the above equation is $0$ or a root of unity.
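As a small illustration of the finiteness step (this snippet is not part of the original answer), take $f = x^2 - x + 1$, whose roots are primitive 6th roots of unity; the sequence $f_n$ then cycles through only four integer polynomials:

```python
import cmath

# Illustration: for f = x^2 - x + 1, whose roots are primitive 6th roots
# of unity, the polynomials f_n(x) = prod_i (x - a_i^n) run through only
# finitely many integer polynomials, as the finiteness step predicts.
r = cmath.exp(1j * cmath.pi / 3)          # a root of x^2 - x + 1
roots = (r, r.conjugate())
seen = []
for n in range(1, 13):
    a, b = roots[0] ** n, roots[1] ** n
    # f_n(x) = (x - a)(x - b) = x^2 - (a + b) x + a b
    seen.append((1, round((-(a + b)).real), round((a * b).real)))
print(sorted(set(seen)))
```

Only the four polynomials $x^2 \pm x + 1$ and $x^2 \pm 2x + 1$ occur, so indeed $f_{2^k} = f_{2^{k+\ell}}$ for some $k, \ell$ as the proof requires.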
-
Snap ! – Kevin Buzzard Jan 6 2010 at 13:18
Ummm, I didn't mean that as a put down. There are plenty of circumstances where a reference is more useful than a proof. If GTaylor is writing a paper, he certainly wants to cite a published result, not an internet posting. – David Speyer Jan 6 2010 at 13:23
There are definitely circumstances where a reference is more useful than a proof. But here (a) the proof is short and (b) the OP makes it clear that they want to read the details, which surely justifies typing up a proof. – Kevin Buzzard Jan 6 2010 at 13:27
Thanks for the proofs both of you; I'm referencing the result in my thesis so yes, I need something to cite, but for something that admits quick proofs like this I feel I should definitely know how it works rather than just appeal to a source. – Gray Taylor Jan 6 2010 at 14:14
@Kevin: Many PhD committees (in the US at least) have the sentiment that the thesis should be the one mathematical document the candidate writes in which all details are spelled out. It's not necessarily that a lack of detail suggests that the candidate does not know how to prove the result (although that certainly could be the case, and readers of the thesis who are outside the subject area can find it hard to tell). In this case, I would argue that more value is added in just including a proof: an extra half page is nothing on the scale of most theses. – Pete L. Clark Feb 1 2010 at 9:14
Bombieri and Gubler's recent book "Heights in Diophantine Geometry" has a proof of this in chapter 1.
-
Another nice reference (with a short proof) is
G. Greiter, A simple proof for a theorem of Kronecker, Amer. Math. Monthly 85 (1978), no. 9, 756–757.
The proof in this paper is related to the proofs given above by Kevin and David, but is a bit more elementary.
-
Here is a good reference! http://www2.ucy.ac.cy/~damianou/kronecker.pdf
-
When I was a kid the standard reference for this result was Polya and Szego, Problems and Theorems in Analysis, Volume 2. It's question 200 in Part 8.
-
http://mathoverflow.net/questions/43642/kernel-of-gz-p2-z-gz-pz-is-the-lie-algebra-of-g-over-z-pz
## Kernel of G(Z/p^2 Z) -> G(Z/pZ) is the Lie algebra of G over Z/pZ?
Let $G$ be an affine algebraic group defined over $\mathbf Z$. The kernel of the natural homomorphism $G(\mathbf Z/p^2\mathbf Z)\to G(\mathbf Z/p\mathbf Z)$, if abelian, is a group which comes along with the conjugation action of $G(\mathbf Z/p\mathbf Z)$.
In the case where $G$ is a classical group, this kernel is isomorphic (as a set with $G(\mathbf Z/p\mathbf Z)$-action) to the Lie algebra $\mathfrak g(\mathbf Z/p\mathbf Z)$ of $G(\mathbf Z/p\mathbf Z)$ (which comes with the adjoint action). It seems that this should be the case in general.
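As a concrete sanity check (not part of the original question), here is a brute-force verification for $G = GL_2$ and $p = 2$: kernel elements of the reduction map have the form $I + pA$ modulo $p^2$, and multiplying two of them simply adds the matrices $A$, so the kernel is the additive group $\mathfrak{gl}_2(\mathbf Z/p\mathbf Z)$:

```python
from itertools import product

# Brute-force check for G = GL_2, p = 2: kernel elements of
# GL_2(Z/p^2) -> GL_2(Z/p) have the form I + p*A, and
# (I + pA)(I + pB) = I + p(A + B) mod p^2, so multiplication in the
# kernel is addition in gl_2(Z/pZ).
p = 2

def mat_mul(X, Y, mod):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % mod
                       for j in range(2)) for i in range(2))

def one_plus_pA(A, mod):
    # the matrix I + p*A, reduced mod `mod`
    return tuple(tuple(((1 if i == j else 0) + p * A[i][j]) % mod
                       for j in range(2)) for i in range(2))

for A_flat, B_flat in product(product(range(p), repeat=4), repeat=2):
    A = (A_flat[:2], A_flat[2:])
    B = (B_flat[:2], B_flat[2:])
    S = tuple(tuple((A[i][j] + B[i][j]) % p for j in range(2)) for i in range(2))
    assert mat_mul(one_plus_pA(A, p**2), one_plus_pA(B, p**2), p**2) == one_plus_pA(S, p**2)
print("kernel of GL_2(Z/4) -> GL_2(Z/2) multiplies additively, as expected")
```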
Does anyone know of a reference for this kind of thing?
-
## 1 Answer
Take a look at Waterhouse's book - Introduction to affine group schemes. I think Theorem 12.2 is what you're looking for.
-
Thanks. That's exactly what I was looking for. – Amritanshu Prasad Oct 26 2010 at 8:43
http://nrich.maths.org/1947
### Just Rolling Round
P is a point on the circumference of a circle radius r which rolls, without slipping, inside a circle of radius 2r. What is the locus of P?
### Pericut
Two semicircle sit on the diameter of a semicircle centre O of twice their radius. Lines through O divide the perimeter into two parts. What can you say about the lengths of these two parts?
### Giant Holly Leaf
Find the perimeter and area of a holly leaf that will not lie flat (it has negative curvature with 'circles' having circumference greater than 2πr).
# Bound to Be
##### Stage: 4 Challenge Level:
$ABCD$ is a square of side 1 unit. Arcs of circles with centres at $A, B, C, D$ are drawn in. Prove that the area of the central region bounded by the four arcs is: $(1 + \pi/3 - \sqrt{3})$ square units.
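The area can be checked numerically (this check is not part of the original problem page; it assumes the standard reading in which each arc has radius equal to the side of the square, so the central region is the set of points within distance 1 of all four vertices, with area $1 + \pi/3 - \sqrt{3} \approx 0.315$):

```python
import math

def central_region_area(n=1000):
    # Midpoint-grid integration over the unit square [0,1]^2.
    # A point is in the central region iff it lies inside all four
    # unit circles centred at the corners.
    inside = 0
    for i in range(n):
        x = (i + 0.5) / n
        for j in range(n):
            y = (j + 0.5) / n
            if (x*x + y*y <= 1 and (x-1)**2 + y*y <= 1
                    and x*x + (y-1)**2 <= 1 and (x-1)**2 + (y-1)**2 <= 1):
                inside += 1
    return inside / (n * n)

print(central_region_area())            # ≈ 0.315
print(1 + math.pi/3 - math.sqrt(3))     # 0.31514674...
```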
http://physics.stackexchange.com/questions/28251/schematic-design-of-the-apparatus-photoelectric-effect?answertab=oldest
# Schematic design of the apparatus (photoelectric effect)
I think I understand the basic idea of photoelectric effect but there are two things in the schematic diagram of the apparatus for the investigation of the photoelectric effect which I do not understand.
Firstly, the emitting electrode is connected to a positive terminal of a battery. This implies there is an electron deficiency there, so where are the 'loose' electrons coming from? I understand there might be some to start with, but with the intense light shone on the emitter shouldn't the emitter run out of those loosely bound electrons quite soon?
Second problem is what happens at the collecting electrode. It is connected to the negative terminal of the battery to create the retarding potential difference and I understand as the photoelectrons travel across the gap they experience a repulsive force. Now the book says that those with NOT enough kinetic energy will be stopped and no current will be registered. I am however interested in details as to what happens to those who do make it through. I suppose they would not make it all the way up to a galvanometer, so how exactly is the current produced? Do the photoelectrons 'push' the free electrons in the metal by repulsive force, just like it happens in a regular wire which is connected between the terminals of the battery? What eventually happens to those free electrons which are located near the negative terminal - do they get pushed 'inside' the battery - but this looks as if the battery is connected with the wrong polarity? Also I do not understand why the photoelectrons could push the free electrons in the metal - there are many more free electrons and their combined repulsive force should be much greater, shouldn't it?
This I do not understand.
-
## 2 Answers
There are electrons on both plates, collector and emitter. It is true that there are fewer (negatively charged) electrons than positively charged atoms on the emitter plate, which makes it positively charged, and vice versa on the collector plate, but there are electrons on both plates.
In fact, the total surplus charge on both plates exactly matches the expression for the capacitance of the capacitor
$$Q = C U = n q,$$
where $C$ is the capacitance of the plates' arrangement (a constant), $U$ is the voltage of the battery (a constant), $n$ is the excess or deficiency number of electrons and $q$ is the charge of one electron (a constant). So in order for the situation to be in equilibrium you need an excess of $n$ electrons on the collector plate and a deficiency of $n$ electrons on the emitter plate.
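To get a feel for the numbers in $Q = CU = nq$, here is a tiny illustration; the capacitance and voltage below are made-up values, not taken from the answer:

```python
# Rough illustration of Q = C*U = n*q for a hypothetical plate pair.
C = 10e-12      # assumed capacitance: 10 pF (made-up value)
U = 1.5         # assumed battery voltage: 1.5 V (made-up value)
q = 1.602e-19   # elementary charge in coulombs
Q = C * U       # surplus charge on each plate
n = Q / q       # number of excess / deficient electrons
print(f"Q = {Q:.2e} C  ->  n = {n:.2e} electrons")   # ≈ 9.4e7 electrons
```

Even a tiny capacitor holds an enormous number of surplus electrons compared to the handful of photoelectrons crossing the gap per unit time.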
Once some electrons make it to the negatively charged collector plate, there are too many negative charges on the collector plate and too few on the positive emitter plate for the capacitance expression above to be satisfied. Therefore electrons travel from the collector plate to the emitter plate through the battery to re-establish equilibrium. You get a back-current which is actually charging the battery!
-
The simplest answer is that the diagram is wired backwards; electrons will flow to the positive terminal. One could argue that, wired as it is, the intent is to provide a potential barrier to the electron flow so only those with an excess of energy will get to the other terminal and pump energy into the battery. However, the excess energy of the photon typically goes into heating the substrate and does not go into the kinetic energy of the electron.
-
This is simply incorrect. The bias in the diagram is exactly the one you want and need to measure the maximum kinetic energy of the liberated electrons. Further, it is a good thing that the number of conduction electrons on the anode is reduced because you want to interact only with bound (as opposed to free) electrons; otherwise you don't have a fixed potential barrier to overcome which allows you to detect the quantum nature of the energy transfer. – dmckee♦ Mar 9 at 22:32
http://mathoverflow.net/revisions/23842/list
I also had a quick look (maybe a little less quick), and although I very much like the other answer, which illustrates that it may be difficult to fix, I may have found a more specific error, which may be more helpful as an answer to the question.
Firstly, I am a little confused as to what constitutes a stratification. I see two possibilities:
1) The one which is actually defined, which allows the following stratification: $S_1=S_2=D^2\times [0,1]$ and they are glued along a closed disc in the interior of $D^2\times \{1\}$ of $S_1$ and the same disc in the interior of $D^2 \times \{0\}$ in $S_2$.
2) The one which I think is implied at some points: $S_i$ and $S_{i+1}$ may only be identified such that $U(S_i) \cup L(S_{i+1})$ is in fact a sub-surface in the 3-manifold.
I will describe my problems related to both definitions:
In the proof of prop 5.8 parts (2-3-4) he attaches "3-cell"s (I would write 3-disc as to avoid confusion with CW complex attachments of cells, or attach both a 2-cell and a 3-cell) $W$, and extends the stratification.
If we work under definition 2) above then this seems generally impossible because you would often also have to attach it at the top of $S_{i+1}$ to get the extra surface assumption in 2).
If we work under definition 1) above then this doesn't even make the new $F_{i+1}$ a surface in the simple example described above.
http://mathhelpforum.com/number-theory/41538-powers-modulo-p-primitive-root-question.html
1. powers modulo p and primitive root question
Let p be a prime number.
What is the value of 1 + 2 + 3 +......+(p-1) (mod p)
How would I begin to work this problem?
2. Originally Posted by duggaboy
Let p be a prime number.
What is the value of 1 + 2 + 3 +......+(p-1) (mod p)
How would I begin to work this problem?
Note that $(p-k) \equiv -k \pmod p$ for $k = 1,2,3,\dots$
$1 + 2 + 3 + \cdots + (p-1) \equiv \bigl(1 + (p-1)\bigr) + \bigl(2 + (p-2)\bigr) + \cdots + \Bigl(\tfrac{p-1}{2} + \tfrac{p+1}{2}\Bigr) \equiv p + p + \cdots + p \equiv 0 \pmod p$
3. ok is the idea here that p + p +...+p (mod p) = 0 because the modulo and the prime number are the same, so it's always equal to zero? I'm a little confused why we want it to equal 0....
Thank you so very much : )
4. Originally Posted by duggaboy
ok is the idea here that p + p +...+p (mod p) = 0 because the modulo and the prime number are the same, so it's always equal to zero? I'm a little confused why we want it to equal 0....
Thank you so very much : )
Ya you are right!
p mod p = 0 is the reason
5. Originally Posted by duggaboy
Let p be a prime number.
What is the value of 1 + 2 + 3 +......+(p-1) (mod p)
How would I begin to work this problem?
There is another way: $1+2+...+(p-1) = p\cdot \tfrac{p-1}{2}$, if $p$ is odd.
And so this is a multiple of $p$ thus it reduces to $0$ mod $p$.
If $p$ is even then it reduces to $1$.
Try a harder problem: find $1^k+2^k+...+(p-1)^k (\bmod p)$ where $k$ is a positive integer.
6. ok....so if k = 2 then it would be....
1^2 + 2^2 + 3^2 +....+(p-1)^2
(p-1)(p+1)/2p ??
Or would you have to use another approach?? Ultimately you still want 0 mod p right?
7. Originally Posted by duggaboy
ok....so if k = 2 then it would be....
1^2 + 2^2 + 3^2 +....+(p-1)^2
(p-1)(p+1)/2p ??
Or would you have to use another approach?? Ultimately you still want 0 mod p right?
I am not exactly sure what you are doing.
Here is a hint: $p$ has a primitive root $a$, and so $\{ 1,2,...,p-1\} = \{ a,a^2,...,a^{p-1} \}$ (mod $p$).
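Following the hint, a brute-force computation (not part of the original thread) makes the pattern for the harder problem visible: the sum is $\equiv -1 \pmod p$ exactly when $p-1$ divides $k$, and $0$ otherwise:

```python
# Brute-force check of the pattern for S_k = 1^k + 2^k + ... + (p-1)^k (mod p).
# Empirically S_k ≡ -1 (mod p) when (p-1) divides k, and 0 otherwise --
# which is what the primitive-root hint leads to.
def power_sum_mod(p, k):
    return sum(pow(a, k, p) for a in range(1, p)) % p

for p in (5, 7, 11, 13):
    for k in range(1, 3 * p):
        expected = p - 1 if k % (p - 1) == 0 else 0
        assert power_sum_mod(p, k) == expected, (p, k)
print("pattern: -1 mod p when (p-1) | k, else 0")
```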
8. $\phi(n) = (p_1 - 1)\,p_1^{n_1-1}(p_2 - 1)\,p_2^{n_2-1}\cdots(p_k - 1)\,p_k^{n_k-1}$
$\phi(p) = p-1$.
Something like this? to get 0 mod p
how would you handle this one? I am very interested in finding out
http://mathhelpforum.com/differential-geometry/171597-how-do-you-use-mean-value-theorem-prove.html
1. ## How do you Use the Mean Value Theorem to prove this
Use the Mean Value Theorem to prove that if $p > 1$ then $(1+x)^p > 1+px$ for
$x \in (-1, 0) \cup (0, \infty)$.
2. Originally Posted by maximus101
Use the Mean Value Theorem to prove that if $p > 1$ then $(1+x)^p > 1+px$ for
$x \in (-1, 0) \cup (0, \infty)$.
Dear maximus101,
Using Taylor's theorem you could write,
$(1+x)^p=1+px+\tfrac{1}{2}p(p-1)(1+c)^{p-2}x^2~\text{where}~0<c<x~or~x<c<0$------(A)
$p>1\Rightarrow{p(p-1)>0}$-----(1)
If $0<c<x~then~1<1+c$-----(2)
If $x<c<0~then~{1+x<1+c<1}$----(3)
$x>-1\Rightarrow{1+x>0}$------(4)
By, (3) and (4); $If~x<c<0\Rightarrow{0<1+c<1}$----(5)
By (2) and (5); for both cases, $0<c<x~and~x<c<0\Rightarrow{0<1+c}$------(6)
Therefore, by (1) and (6); $\tfrac{1}{2}p(p-1)(1+c)^{p-2}x^2>0~if~x\neq{0}~and~x>-1$-----(7)
By (A);
$(1+x)^p-1-px=\tfrac{1}{2}p(p-1)(1+c)^{p-2}x^2>0~if~x\neq{0}~and~x>-1$
$(1+x)^p>1+px~if~x>-1~and~x\neq{0}$
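As a numerical sanity check of the inequality just proved (this snippet is not from the original thread; it deliberately keeps $p$ away from $1$ and $|x|$ away from $0$ to stay clear of floating-point rounding):

```python
import random

# Spot-check of the inequality (1+x)^p > 1+px for p > 1 and
# x in (-1, 0) ∪ (0, ∞).  A sanity check by sampling, not a proof.
random.seed(0)
for _ in range(10_000):
    p = random.uniform(1.1, 5.0)
    x = random.uniform(-0.999, 5.0)
    if abs(x) < 1e-3:
        continue                      # x = 0 gives equality and is excluded
    assert (1 + x) ** p > 1 + p * x, (p, x)
print("inequality holds on all samples")
```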
3. Hey, thank you for this it was very helpful could you explain how I could use the mean value theorem to prove it?
4. Originally Posted by maximus101
Hey, thank you for this it was very helpful could you explain how I could use the mean value theorem to prove it?
There are several mean value theorems: Rolle's mean value theorem, Cauchy's mean value theorem and Taylor's mean value theorem. Please refer to Mean value theorem - Wikipedia, the free encyclopedia. I have used Taylor's mean value theorem in the above answer.
5. Originally Posted by Sudharaka
There are several mean value theorems: Rolle's mean value theorem, Cauchy's mean value theorem and Taylor's mean value theorem. Please refer to Mean value theorem - Wikipedia, the free encyclopedia. I have used Taylor's mean value theorem in the above answer.
Hey, sorry for my mistake, I meant the first one showing
$f'(c) = \frac{f(b)-f(a)}{b-a}$
kind thanks
6. Originally Posted by maximus101
Hey, sorry for my mistake, I meant the first one showing
$f'(c) = \frac{f(b)-f(a)}{b-a}$
kind thanks
That is also known as Lagrange's mean value theorem. It is used to prove Taylor's theorem; therefore it has been used in the answer.
7. ok I worked it out thank you
http://physics.stackexchange.com/questions/21973/factorization-of-fermionic-scattering-integral-in-2d-momentum-rep
# Factorization of fermionic scattering integral in 2d momentum rep
The scattering integrals for fermions involve both momentum ($k$) and energy ($k^2$) conservation and a nonlinear phase-space factor of a distribution function $f(k)$:
$$\begin{multline}I(k) = \sum_{k_1, k_2, k_3} \delta(k^2+k_1^2+k_2^2+k_3^2) \delta(k+k_1+k_2+k_3)\times \\ \Bigl[f(k)f(k_1)\bigl(1-f(k_2)\bigr)\bigl(1-f(k_3)\bigr) - f(k_2)f(k_3)\bigl(1-f(k)\bigr)\bigl(1-f(k_1)\bigr)\Bigr]\end{multline}$$
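One structural property of this bracket (an aside, not from the original question) is useful for testing any numerical scheme: it vanishes identically for the equilibrium Fermi-Dirac distribution whenever energy is conserved — detailed balance. A quick check, writing conservation as $\epsilon + \epsilon_1 = \epsilon_2 + \epsilon_3$ with an arbitrary assumed inverse temperature $\beta$:

```python
import math
import random

# Detailed-balance check: for f(e) = 1/(exp(beta*e) + 1) the bracket
#   f f1 (1 - f2)(1 - f3) - f2 f3 (1 - f)(1 - f1)
# vanishes whenever e + e1 = e2 + e3.
def f(e, beta=1.7):   # beta is an arbitrary assumed inverse temperature
    return 1.0 / (math.exp(beta * e) + 1.0)

random.seed(1)
for _ in range(1000):
    e0, e1, e2 = (random.uniform(-3.0, 3.0) for _ in range(3))
    e3 = e0 + e1 - e2                       # enforce energy conservation
    gain = f(e0) * f(e1) * (1 - f(e2)) * (1 - f(e3))
    loss = f(e2) * f(e3) * (1 - f(e0)) * (1 - f(e1))
    assert abs(gain - loss) < 1e-12
print("bracket vanishes in equilibrium")
```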
In energy space, energy conservation is linear and powerful factorizations are possible to compute the integrals fast. In momentum space, the nonlinear energy conservation constraint complicates factorizations.
Trivially, in 2 dim one can work in polar coordinates separating off the radial part. But has anybody seen a more efficient reduction of numerical complexity (e.g. mapping to FFT,...)
Thanks for any hint!
-
http://math.stackexchange.com/questions/255504/find-the-equations-for-the-hyperplanes-of-symmetries-of-the-n-cube?answertab=votes
Find the equations for the hyperplanes of symmetries of the $n$-cube
Given the $n$-cube in $\mathbb{E}^n$ with vertices $(\pm1,\dotsc,\pm1)$, I'd like to find the equations for the hyperplanes in $\mathbb{E}^n$ that are the mirrors of reflection symmetries of the cube.
Essentially, given an $n$-dimensional cube, there are $n$ hyperplanes such that reflection through these planes generates the entire automorphism group of the $n$-cube. They correspond to a reflection through a vertex, an edge, a face, and so on throughout the dimensions. For example, the $1$-cube is just the line segment $[-1,1]$. Then the hyperplane is just the point $x=0$. When $n=2$, we just have a square and the two hyperplanes are the lines $y=x$ and $x=0$.
For the $3$-cube, I calculated the automorphisms that generate the automorphism group explicitly and then used those to determine the hyperplane equations. I found that the three planes are $x=0, y=x,$ and $z=y$.
For any more dimensions, I wasn't able to work it out explicitly and so all I have is a conjecture based on the previous work. The same equations are used as we move up in dimension, just adding a single new equation each time in the newest variable. My conjecture is that given variables $(x_1,\dotsc,x_n)$, then the equations of the hyperplanes of reflection symmetries will be $$\begin{align*} x_1 &= 0 \\ x_2&= x_1\\ &\vdots \\x_n &= x_{n-1} \end{align*}$$ But I'm not sure how to go about proving that this is the case. Does anyone have any suggestions?
-
Surely $x=z$ must be a valid hyperplane in the $3$-cube case? In general I'd expect to see all $x_i=x_j$ (for $i\ne j$), seeing as the cube is invariant under re-ordering the coordinates. – Matt Pressland Dec 10 '12 at 14:43
I'm looking for the three that generate every other hyperplane. – chris Dec 10 '12 at 14:49
Oh, sorry, you did say that. (Although if I'm being pedantic, you shouldn't really say "the", there are other choices of generators). – Matt Pressland Dec 10 '12 at 15:01
1 Answer
These are the mirrors of the hyperoctahedral group. There are $n^2$ mirrors in total divided into two orbits: $n(n-1)$ mirrors of the form $x_j \pm x_k = 0$ for $1 \leq j < k \leq n$ and an additional $n$ mirrors of the form $x_k = 0$.
Indeed, the full group is generated by $n$ reflections: $n-1$ reflections in the mirrors $x_k - x_{k+1} = 0$ for $k < n$ together with the reflection in $x_n = 0$.
-
Perfect. I didn't realize there was a name for cubes and cross-polytopes together, but that makes sense because they're dual. Thanks. – chris Dec 10 '12 at 15:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9356324076652527, "perplexity_flag": "head"} |
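The claim in the answer above — that the $n$ listed reflections generate the full hyperoctahedral group, of order $2^n n!$ — is easy to check by brute force for small $n$. Below is a short Python sketch (my addition, not part of the thread) that encodes each reflection as a signed permutation and closes the generating set under composition:

```python
from math import factorial

def generators(n):
    """The n mirrors from the answer: x_k = x_{k+1} for k < n, and x_n = 0.
    A signed permutation is a tuple of (source index, sign) pairs."""
    gens = []
    for k in range(n - 1):
        perm = list(range(n))
        perm[k], perm[k + 1] = perm[k + 1], perm[k]
        gens.append(tuple((perm[i], 1) for i in range(n)))
    gens.append(tuple((i, -1 if i == n - 1 else 1) for i in range(n)))
    return gens

def compose(g, h):
    # (g o h)(x)_i = sign_g(i) * sign_h(p_g(i)) * x_{p_h(p_g(i))}
    return tuple((h[p][0], s * h[p][1]) for p, s in g)

def closure(gens):
    """All words in the generators (each is an involution, so this is the group)."""
    group, frontier = set(gens), set(gens)
    while frontier:
        frontier = {compose(g, h) for g in frontier for h in gens} - group
        group |= frontier
    return group

for n in (2, 3, 4):
    print(n, len(closure(generators(n))), 2 ** n * factorial(n))
    # each line: n, order of the generated group, 2^n * n!
```

For each $n$ the two counts agree, confirming that the $n$ mirrors suffice to generate the whole symmetry group of the cube.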
http://topologicalmusings.wordpress.com/2007/12/18/section-3-unordered-pairs/ | Todd and Vishal’s blog
Topological Musings
# Section 3 – Unordered Pairs
December 18, 2007 in Exposition, Math Topics, Naive Set Theory | Tags: Axiom of pairing, empty set, naive set theory, Paul Halmos, singleton
After postulating a couple of important axioms in the previous two sections, we now arrive at a couple of important results.
1. There exists an empty set. (In fact, there exists exactly one!)
2. The empty set is a subset of every set.
Indeed, to prove the first result, suppose $A$ is some set. Then, the set $\{ x \in A: x \not= x \}$ is clearly an empty set, i.e. it doesn’t contain any elements. To “picture” this, imagine an empty box with nothing inside it. In fact, we can apply the axiom of specification to $A$ with any universally false sentence to create an empty set. The empty set is denoted by $\emptyset$. The axiom of extension, on the other hand, guarantees there can be only one empty set.
Now, how do we argue that $\emptyset \subset A$, for any arbitrary set $A$? Well, the reasoning is an indirect one, and for most beginners, it doesn’t seem like a complete one. There is something in the argument that doesn’t feel quite “right!” However, there is nothing “incomplete” about the argument, and here it is anyway.
Suppose, for the sake of contradiction, that the empty set, $\emptyset$, is not a subset of $A.$ Then, there exists an element in $\emptyset$ that doesn’t belong to $A.$ But, the empty set is empty, and hence, no such element exists! This means our initial hypothesis is false. Hence, we conclude (maybe, still somewhat reluctantly) $\emptyset \subset A$.
Now, the set theory we have developed thus far isn’t a very rich one; after all, we have only shown that there is just one set and that it is empty! Can we do better? Can we come up with an axiom that can help us construct new sets? Well, it turns out, there is one.
Axiom of pairing: If $a$ and $b$ are two sets, then there exists a set $A$ such that $a \in A$ and $b \in A$.
The above axiom guarantees that if there are two sets, $a$ and $b$, then there exists another one, $A$, that contains both of these. However, $A$ may contain elements other than $a$ and $b$. So, can we guarantee there is a set that contains exactly $a$ and $b$ and nothing else? Indeed, we can. We just apply the axiom of specification to $A$ with the sentence “$x = a$ or $x = b$.” Thus, the set $\{ x \in A: x = a \mbox{ or } x = b\}$ is the required one.
The above construction of a particular set illustrates one important fact: all the remaining principles of set construction are pseudo-special cases of the axiom of specification. Indeed, if it were given that there exists a set containing some particular elements, then the existence of a set containing exactly those elements (and nothing else) would follow as a special case of the axiom of specification.
Now, observe if $a$ is a set, then the axiom of pairing implies the existence of the set $\{ a, a \}$, which is the same as the set $\{ a \}$ and is called a singleton of $a$. Also, note that $\emptyset$ and $\{ \emptyset \}$ are different sets; the first has no elements at all, whereas the second has exactly one element, viz. the empty set. In fact, there is a minimalist (inductive) way of constructing the set of natural numbers, $N$, (due to von Neumann) using the axiom of infinity as follows.
$0 = \emptyset, 1 = \{ 0 \}, 2 = \{ 0, 1 \}, \ldots$
But, more on this later.
## 1 comment
April 3, 2011 at 9:32 pm
Knut Flatland, Oslo, Norway
You write in the blog about unordered pairs:
“Suppose, for the sake of contradiction, the emptyset, Ø, is not a subset of A. Then, there exists an element in Ø that doesn’t belong to A. But, the empty set is empty, and hence, no such element exists! This means our initial hypothesis is false. Hence, we conclude (maybe, still somewhat reluctantly) Ø is a subset of A.”
I found it difficult to follow the reasoning by contradiction above, and when I looked it up in Paul Halmos’ book, the reasoning there was simpler and made more sense:
“The empty set is a subset of every set, or in other words Ø is a subset of A for every A. To establish this, we might argue as follows. It is to be proved that every element in Ø belongs to A; since there are no elements in Ø, the condition is automatically fulfilled. The reasoning is correct, but perhaps unsatisfying.” He then goes on and talks about “vacuous” conditions and the advice, when proving anything about empty sets, to prove that it cannot be false.
http://mathhelpforum.com/pre-calculus/175272-complex-no.html | # Thread:
1. ## complex no
If $z_1 = 10+6i$ and $z_2 = 4+6i$, and $z$ is any complex number such that the argument of $(z-z_1)/(z-z_2)$ is $\pi/4$, prove that $|z-7-9i| = 3\sqrt{2}$.
Can you solve it graphically?
2. Originally Posted by prasum
If $z_1 = 10+6i$ and $z_2 = 4+6i$, and $z$ is any complex number such that the argument of $(z-z_1)/(z-z_2)$ is $\pi/4$, prove that $|z-7-9i| = 3\sqrt{2}$.
Can you solve it graphically?
Graphically, the argument of the quotient $(z-z_1)/(z-z_2)$ is the angle between the lines $z\to z_1$ and $z\to z_2$. By the theorem about angles in the same segment (or rather its converse), that says that z is on a circle through $z_1$ and $z_2$. Another theorem (the one about the angle at the centre C being twice the angle at the circumference) says that the lines $z_1\to C$ and $z_2\to C$ must be at right angles. It's easy to deduce from that (see the diagram) that C must be at the point 7+9i and that the radius must be $3\sqrt2$. So z lies on the circle $|z-(7+9i)| = 3\sqrt2.$
Attached Thumbnails
Copyright © 2005-2013 Math Help Forum. All rights reserved. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9185511469841003, "perplexity_flag": "middle"} |
http://blog.divinetworks.com/tag/hayim-shaul/ | # How much data is needed to describe a message?
Posted on December 12, 2011 by
An old joke tells of a man sending a telegram to his brother inviting him to his son’s wedding. Initially he writes this message:
Dear brother,
You are invited to my son’s wedding, two weeks from now.
Looking forward to meeting you.
After learning the price of each word he reasons that Dear brother is redundant since his brother is going to be getting the message by hand, and he already knows how much he loves him. The words you are invited are also redundant, since it is obvious that once there is a wedding his brother is invited, similarly Looking forward to meeting you can be deleted. The words my son’s are also redundant because it wouldn’t make sense to report another man’s wedding, and so the man ended up with the shorter telegram:
wedding in two weeks
which exactly captures the information he wanted to transfer to his brother.
Given a message, in information theory we try to assess how many bits are needed to encode this message, or in other words, given an encoded message, how many bits are redundant and can be deleted. To do that we first need to quantify the amount of data encoded in a message of $n$ bits. In information theory this is referred to as entropy. The higher the entropy the more data is encapsulated in those bits (therefore fewer bits are redundant). Given a message $M = (m_1, m_2, ..., m_n)$ of $n$ symbols over an alphabet $\Sigma = \{\sigma_1, \sigma_2, \ldots, \sigma_s\}$ of $s$ letters, the entropy is given by this formula:
$Entropy(M) = - \sum_{i=1}^s Pr(\sigma_i)\log Pr(\sigma_i)$,
where $Pr(\sigma_i)$ is the probability of an arbitrary letter in $M$ to be $\sigma_i$. In the simple case, $Pr(\sigma_i)=\frac{number\;of\;times\;\sigma_i\;appears\;in\;M}{n}$.
As can be easily seen, the higher the entropy is, the more random the message seems (i.e., there are fewer patterns in it); on the other hand, the message aaaaaaaaaa….aa has entropy 0 (remember that $\lim_{x \rightarrow 0} x\log x = 0$). For compression, we want to re-encode a message (with fewer bits) such that its entropy increases. In encryption (the other end of the spectrum), we want to re-encode a message (with the same number of bits) such that its entropy increases.
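To make the formula concrete, here is a short Python sketch (my addition, not part of the original post) that computes the empirical entropy of a message from its symbol frequencies:

```python
from collections import Counter
from math import log2

def entropy(message):
    """Empirical entropy (bits per symbol): -sum over symbols of p_i * log2(p_i)."""
    n = len(message)
    counts = Counter(message)
    return sum(-(c / n) * log2(c / n) for c in counts.values()) + 0.0  # +0.0 normalises -0.0

print(entropy("aaaaaaaaaa"))  # 0.0 -- a single repeated symbol carries no information
print(entropy("abababab"))    # 1.0 -- two equally likely symbols: one bit each
print(entropy("aabbccdd"))    # 2.0 -- four equally likely symbols: two bits each
```

Doubling the number of equally likely symbols adds exactly one bit of entropy per symbol, matching the formula above.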
The joke we opened with goes on, as the man decides to delete the word wedding (because what other reason is there to be sending a telegram in the first place), and the words in two weeks (which is the appropriate time prior a wedding to be sending invitations), and so he returns home without sending any telegram at all.
Posted in Technology | |
http://mathoverflow.net/revisions/19172/list | ## Return to Answer
3 meant to put in a line, not a break
This is a simple computation using the asymptotic formula for $\log_{10}(n!)$. Computing with $\ln$ instead (just divide the results by $\ln(10)$), Maple gives $$\ln{n!} \sim \left(\ln(n)-1\right)n+\ln(\sqrt{2\pi})+\frac{\ln(n)}{2}+\frac{1}{12}n^{-1}-\frac{1}{360}n^{-3}+\frac{1}{1260}n^{-5}-\frac{1}{1680}n^{-7}+O(n^{-9})$$ for the expanded version of (the logarithm of) Stirling's formula. So as long as 1 is larger than the remainder after taking the first 3 terms of the above formula, the formula is quite good. Only for a few $n$ could you run into problems.
I wouldn't be surprised if this question got closed too - it was just too easy to answer using any CAS.
Edit: since I now understand the question better, and Noam Elkies reported a new result in his search, I figured I would try to add one more term to the approximation and see what I get. More specifically, use
log10((n/exp(1))^n*sqrt(2*Pi*n)*exp(1/12/n))
(in Maple notation) instead of the original formula. For $n=6561101970383$, this approximation gives exactly the same digits as displayed in Noam's answer for the exact answer.
In other words, I would conjecture that using this particular approximation, whatever counter-examples there might be would be so large that we may never be able to exhibit them. Call it an exact approximation for ultra-finitists if you will.
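As a quick empirical check (my addition, not part of the original answer), here is a Python sketch of the proposed approximation — the series truncated after the $1/(12n)$ term — compared against the exact digit count of $n!$:

```python
from math import factorial, floor, log, pi

def digits_stirling(n):
    """Digits of n! predicted by Stirling's series truncated after the 1/(12n) term."""
    log10_fact = ((log(n) - 1) * n + 0.5 * log(2 * pi * n) + 1 / (12 * n)) / log(10)
    return floor(log10_fact) + 1

def digits_exact(n):
    """Exact digit count via Python's arbitrary-precision factorial."""
    return len(str(factorial(n)))

print(all(digits_stirling(n) == digits_exact(n) for n in range(2, 1000)))  # True
```

The truncation error is far smaller than the typical distance of $\log_{10}(n!)$ from an integer, which is why no disagreement shows up in this range.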
http://theartofmodelling.wordpress.com/ | # 2013 AARMS Mathematical Biology Workshop
Posted on February 17, 2013 by
We are pleased to announce the 2013 AARMS Mathematical Biology Workshop to be held at Memorial University of Newfoundland, July 27-29, 2013 in St John’s, Newfoundland. Registration closes on May 17, 2013 and abstracts should be submitted by June 30, 2013.
Plenary speakers:
Edward Allen, Texas Tech University
Linda Allen, Texas Tech University
Steve Cantrell, University of Miami
Odo Diekmann, Utrecht University
Simon Levin, Princeton University
Mark Lewis, University of Alberta
Philip Maini, Oxford University
For complete details please visit the conference website*:
http://www.math.mun.ca/~ahurford/aarms/
*please note that there was a service outage for math.mun.ca on Monday Feb 18, but the link should work now.
Photo credit: Michelle Wille Photography
Posted in MPE2013 |
# Christmas gifts from Just Simple Enough!
Posted on December 21, 2012 by
And, no, not the gift of homework. The gifts of song and movie!
Song. An original song written by a graduate student about graduate student-supervisor meetings. It’s a catchy tune! Click Here.
Movie. This is a movie that I took in 2009 while attending a Mathematical Biology Summer School in Botswana. Click here. (Unfortunately, I encountered some technical difficulties uploading this to YouTube, but it’s still watchable, albeit in micro-mini).
Happy holidays everyone!
Posted in Fun, General, Math Bio is for everyone |
# Mathematical biology – by way of example
Posted on November 13, 2012 by
Mathematical biology takes many different forms depending on the practitioner. I take mine with one math and two biologys (the so-called “little m, big B”), but others like it stronger (“big M, little b”). Under my worldview, mechanistic models are a tool to analyze biological data; a tool that infuses our knowledge of the relevant biological processes into the analytical framework. That might sound very pie-in-the-sky, and so I’ve made up an example to illustrate what I mean. This example has been constructed so that it doesn’t require any advanced knowledge: if you know how to add and multiply – that’s all you’ll need to answer these questions.
In the example below, the relevant biological processes are described in the section what we know already. You will need to use logical thinking to relate the what we know already section to the data reported in the DATASHEET so that you can answer the questions.
If you have ever wondered ‘what is Theoretical Biology?’ this example helps to answer that question too. Specifically, the required steps to do modelling, as inspired by this example, would be: 1) to write down the information that goes in the what we know already section (you’d refer to these as the model assumptions); 2) to devise a scheme to relate what we know already with the biological quantities of interest (this is the model derivation step); and 3) to report the results of your analysis (model analysis and interpretation).
As you work through this example, think about the types of questions that you are able to answer and how fulfilling it is that careful thinking has enabled us to draw some valuable conclusions. Understand too, that a criticism of mathematical modelling is that, in reality, everything might not happen quite as perfectly as we describe it to happen in the what we know already section. These sentiments capture the good and the bad of mathematical modelling. Mathematical models enable new and exciting insights, but our excitement is tempered because these insights are only possible owing to the assumptions that have been made, and while we do our best to make sure these assumptions are good, we know that these assumptions can never be perfect.
If this sounds like fun, then have a go at the example below. If you want to email me your answers, I can email you back to let you know how you did (see here for my email address).
—————————————-
INFLUENZA X
A new and unknown disease, Influenza X, has swept through a small town (popn. 100). Your task is to describe the characteristics of the disease. Health officials want to know:
1. How many days are citizens infected before they recover?
2. What fraction of infected citizens died from the disease?
3. What is the rate of becoming infected?
What we know already
During the epidemic, citizens can be classified into one of these four groups:
• Susceptible
• Infected
• Recovered, or
• Dead
As is shown in the diagram:
• Only Susceptible citizens can be Infected.
• Infected citizens either Die or Recover.
• Citizens must have been Infected before they can Recover.
• Only Infected citizens die from the disease.
• Once they have Recovered, citizens cannot be re-infected.
• All Infected citizens take the same number of days to Die or Recover.
• During the epidemic no one enters or leaves the city. No babies are born; no one dies of anything other than Influenza X.
During the epidemic all that was recorded was the number of citizens who were Susceptible, Infected or Recovered on each day and the number of people who had Died up until that point. This information is summarized in the DATASHEET provided at the end of this post. This information is also presented graphically below and you’ll get a better understanding of the data by considering how the graphs and the DATASHEET are related (Question 4).
QUESTIONS
1. Fill in the missing values on the DATASHEET (below).
2. How many days are citizens infected before they recover?
3. What fraction of infected citizens died from the disease?
4. Label the axes on the graphs.
5. The transmission rate of Influenza X is 0.008 (the units have deliberately been omitted). Consider the graphs above and describe how this rate was estimated?
6. How is the unknown quantity from the DATASHEET calculated?
DATASHEET
Some definitions
• If a patient is infected on Day 1 and recovers on Day 4 that patient is infected for 3 days (i.e., Day 1-3 inclusive).
• Infected (cumulative) on Day T means the total number of citizens who have been infected any time from Day 1 to Day T (inclusive). Citizens who subsequently Died or Recovered are included in this number.
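The scheme in what we know already is straightforward to translate into a toy simulation. The sketch below (my addition, not part of the exercise) uses the transmission rate 0.008 mentioned in Question 5; the 3-day infectious period and 10% case fatality are purely illustrative stand-ins for the answers you should extract from the DATASHEET:

```python
def simulate(days=30, pop=100, beta=0.008, duration=3, cfr=0.10):
    """Discrete-day S/I/R/D bookkeeping for the Influenza X scheme.
    beta is the transmission rate from the post; duration (days spent infected)
    and cfr (fraction of infecteds who die) are illustrative assumptions."""
    S, R, D = pop - 1.0, 0.0, 0.0
    cohorts = [1.0] + [0.0] * (duration - 1)  # cohorts[k] = number infected k days ago
    history = []
    for day in range(1, days + 1):
        history.append((day, S, sum(cohorts), R, D))
        new_inf = min(S, beta * S * sum(cohorts))  # mass-action incidence
        leaving = cohorts[-1]                      # cohort finishing `duration` days
        R += (1 - cfr) * leaving                   # recovered citizens stay recovered
        D += cfr * leaving                         # only infecteds die of the disease
        cohorts = [new_inf] + cohorts[:-1]
        S -= new_inf
    return history

for day, S, I, R, D in simulate()[::6]:
    print(f"day {day:2d}: S={S:6.1f}  I={I:5.1f}  R={R:5.1f}  D={D:4.1f}")
```

Since no one enters or leaves the city, $S + I + R + D = 100$ on every day, exactly the bookkeeping the DATASHEET asks you to exploit.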
Related reading
For some of my older posts describing Mathematical Biology, you can start here.
# How to make mathematical models (even at home)
Posted on October 9, 2012 by
As a WordPress blogger, I get a handy list of search terms that have led people to my blog. A particularly memorable search term that showed up on my feed was ‘how to make mathematical models at home’. What I liked about this query was that it suggests mathematical modelling as a recreational hobby: at home, in one’s spare time; just for fun. This speaks to an under-appreciated quality of mathematical modelling – that it’s really quite accessible once the core principles have been mastered.
To get started, I would suggest any of the following textbooks*:
• A Biologist’s Guide to Mathematical Modeling by Sally Otto and Troy Day
• Modeling Biological Systems: Principles and Applications by James Haefner
• A Course in Mathematical Biology by Gerda de Vries, Thomas Hillen, Mark Lewis, Birgitt Schonfisch and Johannes Muller
• Dynamic Models in Biology by Stephen P. Ellner and John Guckenheimer
Now, I know, you want to make your own mathematical model, not just read about other people’s mathematical models in a textbook. To start down this road, I think you should pay attention to two things:
• How to make a diagram that represents your understanding of how the quantities you want to model change and interact, and;
• Developing a basic knowledge of the classic models in ecology, evolution and epidemiology, including developing an understanding of what these models assume.
This would correspond to reading Chapters 2 and 3 of A Biologist’s Guide to Mathematical Modeling.
A good way to start towards developing your own model would be to identify the ‘classic model’ which is closest to the particular problem you want to look at. If you’re interested in predator-prey interactions, this would be the Lotka-Volterra model, or if you’re asking a question about disease spread, then you need to read about Kermack and McKendrick and the SIR model. Whatever your question, it should fall within one of the basic types of biological interactions, and the corresponding classic model is then the starting point for developing your mathematical model. From there, the next step is to think about how the classic model you’ve chosen should be made more complicated (but not too complicated!) so that your extended model best captures the nuances of your particular question.
Remember that the classic model usually represents the most simple model that will be appropriate, and only in rare circumstances, might you be able to justify using a more simple model. For example, if the level of predation or disease spread for your population of interest is very low, then you might be able to use a model for single species population growth (exponential/logistic/Ricker) instead of the Lotka-Volterra or SIR models, however, if predation and disease spread are negligible, then it arguably wasn’t appropriate to call your problem ‘predator-prey’ or ‘disease spread’ in the first place. Almost by definition, it’s usually not possible to go much simpler than the dynamics represented by the appropriate classic model.
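As a concrete instance of one such classic model, here is a minimal sketch (my addition; every parameter value is invented for illustration) of the Lotka–Volterra predator–prey equations, integrated with a crude forward-Euler step:

```python
def lotka_volterra(x0, y0, r=1.0, a=0.1, e=0.075, m=0.5, dt=0.001, steps=40000):
    """Classic Lotka-Volterra model:
        dx/dt = r*x - a*x*y   (prey)
        dy/dt = e*x*y - m*y   (predators)
    Crude forward-Euler integration -- fine for a sketch, not for serious work.
    All parameter values here are illustrative."""
    x, y = x0, y0
    samples = []
    for k in range(steps + 1):
        if k % 4000 == 0:
            samples.append((k * dt, x, y))
        x, y = x + (r * x - a * x * y) * dt, y + (e * x * y - m * y) * dt
    return samples

for t, prey, pred in lotka_volterra(10.0, 5.0):
    print(f"t={t:5.1f}  prey={prey:6.2f}  predators={pred:6.2f}")
```

The printed trajectory rises and falls out of phase — the familiar predator–prey cycle — and swapping in the SIR equations as the starting point for a disease question is an equally small exercise.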
That should get you started. You can do this at the university library. You can do this for a project for a class. And, yes, you can even do this at home!
Footnotes:
*For someone with a background in mathematics some excellent textbooks are:
• Mathematical Models in Biology by Leah Edelstein-Keshet
• Mathematical Models in Population Biology and Epidemiology by Fred Brauer and Carlos Castillo-Chavez
• Mathematical Biology by J. D. Murray Part I and Part II
but while the above textbooks will give you a better understanding of how to perform model analysis, the ‘For Biologist’s’ textbooks listed in this post are still the recommended reading to learn about model derivation and interpretation.
# Testing mass-action
Posted on September 27, 2012 by
UPDATE: I wrote this, discussing that I don’t really know the justification for the law of mass action; however, comments from Martin and Helen suggest that a derivation is possible using moment closure/mean field methods. I recently found this article:
Use, misuse and extensions of “ideal gas” models of animal encounter. JM Hutchinson, PM Waser. 2007. Biological Reviews. 82:335-359.
I haven’t had time to read it yet, but from the title it certainly sounds like it answers some of my questions.
——————–
Yesterday, I came across this paper from PNAS: Parameter-free model discrimination criterion based on steady-state coplanarity by Heather A. Harrington, Kenneth L. Ho, Thomas Thorne and Michael P.H. Strumpf.
The paper outlines a method for testing the mass-action assumption of a model without non-linear fitting or parameter estimation. Instead, the method constructs a transformation of the model variables so that all the steady-state solutions lie on a common plane irrespective of the parameter values. The method then describes how to test if empirical data satisfies this relationship so as to reject (or fail to reject) the mass-action assumption. Sounds awesome!
One of the reasons I like this contribution is that I’ve always found mass-action to be a bit confusing, and consequently, I think developing simple methods to test the validity of this assumption is a step in the right direction. Thinking about how to properly represent interacting types of individuals in a model is hard because there are lots of different factors at play (see below). For me, mass-action has always seemed a bit like a magic rabbit from out of the hat; just multiply the variables; don’t sweat the details of how the lion stalks its prey; just sit back and enjoy the show.
Figure 1. c x (1 Lion x 1 Eland) = 1 predation event per unit time where c is a constant.
Before getting too far along, let’s state the law:
Defn. Let $x_1$ be the density of species 1, let $x_2$ be the density of species 2, and let $f$ be the number of interactions that occur between individuals of the different species per unit time. Then, the law of mass-action states that $f \propto x_1 \times x_2$.
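One way to build intuition for this definition is a direct simulation: scatter the two species uniformly over the unit square, count close cross-species pairs, and check that the count scales with the product of the densities. A quick sketch (mine, not from the paper under discussion):

```python
import random

def encounters(n1, n2, radius=0.02, trials=100, seed=1):
    """Mean number of cross-species pairs closer than `radius` when n1 and n2
    individuals are scattered uniformly (and independently) on the unit square."""
    rng = random.Random(seed)
    r2 = radius * radius
    total = 0
    for _ in range(trials):
        a = [(rng.random(), rng.random()) for _ in range(n1)]
        b = [(rng.random(), rng.random()) for _ in range(n2)]
        total += sum((ax - bx) ** 2 + (ay - by) ** 2 < r2
                     for ax, ay in a for bx, by in b)
    return total / trials

base = encounters(50, 50)
ratio1 = encounters(100, 50) / base   # double species 1
ratio2 = encounters(50, 100) / base   # double species 2
print(round(ratio1, 1), round(ratio2, 1))  # both near 2, as mass-action predicts
```

Note that this only tests the uniformity requirement, not any particular movement rule — which is exactly the point made below: any movement whose net result keeps the densities uniform will do.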
In understanding models, I find it much more straightforward to explain processes that just involve one type of individual – be it the logistic growth of a species residing on one patch of a metapopulation, or the constant per capita maturation rates of juveniles to adulthood. It’s much harder for me to think about interactions: infectious individuals that contact susceptibles, who then become infected, and predators that catch prey, and then eat them. Because in reality:
Person A walks around, sneezes, then touches the door handle that person B later touches; Person C and D sit next to each other on the train, breathing the same air.
There are lots of different transmission routes, but to make progress on understanding mass-action, you want to think about what happens on average, where the average is taken across all the different transmission routes. In reality, also consider that:
Person A was getting a coffee; Person B was going to a meeting; and Persons C and D were going to work.
You want to think about averaging over all of a person’s daily activities, and as such, all the people in the population might be thought of as being uniformly distributed across the entire domain. Then, the number of susceptibles in the population that find themselves in the same little $\Delta x$ as an infectious person is probably $\beta S(t) \times I(t)$.
Part of it is, I don’t think I understand how I am supposed to conceptualize the movement of individuals in such a population. Individuals are going to move around, but at every point in time the density of the S’s and the I’s still needs to be uniform. Let’s call this the uniformity requirement. I’ve always heard that a corollary of the assumption of mass-action was an assumption that individuals move randomly. I can believe that this type of movement rule might be sufficient to satisfy the uniformity requirement, however, I can’t really believe that people move randomly, or for that matter, that lions and gazelles do either. I think I’d be more willing to understand the uniformity requirement as being met by any kind of movement where the net result of all the movements of the S’s, and of the I’s, results in no net change in the density of S(t) and I(t) over the domain.
That’s why I find mass-action a bit confusing. With that as a lead in:
How do you interpret the mass-action assumption? Do you have a simple and satisfying way of thinking about it?
________________________________
Related reading
This paper is relevant since the authors derive a mechanistic movement model and determine the corresponding functional response:
How linear features alter predator movement and the functional response by Hannah McKenzie, Evelyn Merrill, Raymond Spiteri and Mark Lewis.
# Q1. Define independent parameterization
Posted on September 3, 2012 by
Mechanistic and phenomenological models
Mechanistic models describe the processes that relate variables to each other, attempting to explain why particular relationships emerge, rather than solely how the variables are related, as a phenomenological model would. Colleagues will ask me ‘is this a mechanistic model’ and then provide an example. Often, I decide that the model in question is mechanistic, even though the authors of these types of models may rarely emphasize this. Otto & Day (2008) wrote that mechanistic and phenomenological are relative model categorizations – suggesting that it is only productive to discuss whether one model is more or less mechanistic than another – and I’ve always thought of this as a nice way of looking at it. This has also led me to think that nearly any model, on some level, can be considered mechanistic.
But, of course, not all models are mechanistic. Here’s the definition that I am going to work from (derived from the Ecological Detective, see here):
Mechanistic models have parameters with biological interpretations, such that these parameters can be estimated with data of a different type than the data of interest
For example, if we are interested in a question that can be answered by knowing how the size of a population changes over time, then our data of interest is number versus time. A phenomenological model could be parameterized with data describing number versus time taken at a different location. On the other hand, a mechanistic model could be parameterized with data on the number of births versus time, and the number of deaths versus time; and so it’s a different type of data, and this is only possible because the parameters have biological interpretations by virtue of the model being mechanistic.
The essence of a mechanistic model is that it should explain why; however, to do so, it is necessary to give biological interpretations to the parameters. This then gives rise to a test of whether a model is mechanistic or not: if it is possible to describe a different type of data that could be used to parameterize the model, then we can designate the model as mechanistic.
Validation
In mathematical modelling we can test our model structure and parameterization by assessing the model agreement with empirical observations. The most convincing models are parameterized and formulated completely independently of the validation data. It is possible to validate both mechanistic and phenomenological models. Example 1 is a description of a series of three experiments that I believe would be sufficient to validate the logistic growth model.
Example 1. The model is $\frac{d N}{d t} = r N \left(1-\frac{N}{K}\right)$, which has the solution $N(t) = f(t, r, K, N_0)$, where $N_0 = N(0)$ is the initial condition.
Experiment 1 (Parameterization I):
1. Put 6 mice in a cage, 3 males and 3 females and of varied, representative ages. (This is a sexually reproducing species. I want a low density but not so few that I am worried about inbreeding depression). A fixed amount of food is put in the cage every day.
2. Every time the mice produce offspring, remove the offspring and put them somewhere else (i.e., keep the number of mice constant at 6 throughout Experiment 1).
3. Have the experiment run for a while, record the total time, No. of offspring and No. of the original 6 mice that died.
Experiment 2 (Parameterization II):
4. Put too many mice in the cage, but the same amount of food every day, as for Experiment 1. Let the population decline to a constant number. This is K.
5. r is calculated from the results of Experiment 1 and K as (No. births – No. deaths)/(total time) = 6 r (1-6/K).
Experiment 3 (Validation):
6. Put 6 mice in the cage and the same amount of food as before. This time keep the offspring in the cage and produce the time series N(t) by recording the number of mice in the cage each day. Compare the empirical observations for N(t) with the now fully parameterized equation for f(t,r,K,N(0)).
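The arithmetic behind Experiments 1–3 can be sketched as follows (the counts below are invented placeholders, not real data):

```python
import math

# Invented placeholder counts standing in for Experiments 1 and 2:
births, deaths, total_time = 30, 3, 90.0  # Experiment 1, with N held constant at 6
K = 60.0                                  # Experiment 2: the equilibrium population

# Step 5: (No. births - No. deaths)/(total time) = 6*r*(1 - 6/K); solve for r.
r = ((births - deaths) / total_time) / (6 * (1 - 6 / K))

# Experiment 3 (validation): the logistic solution N(t) = f(t, r, K, N0).
def logistic(t, r, K, n0):
    return K / (1 + (K / n0 - 1) * math.exp(-r * t))

print(r)                           # intrinsic growth rate implied by Experiments 1-2
print(logistic(200.0, r, K, 6.0))  # the predicted trajectory approaches K
```

The validation step would then compare this fully parameterized curve against the daily counts recorded in Experiment 3.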
The Question. Defining that scheme for model parameterization and validation was done to provide context for the following question:
• When scientists talk about independent model parameterization and validation – what exactly does that mean? How independent is independent enough? How is independent defined in this context?
If I was asked this, I would say that the parameterization and the validation data should be different. In the logistic growth model example (above), the validation data is taken for different densities and under a different experimental set-up. However, consider this second example.
Example 2. Another way to parameterize and validate a model is to use the same data, but to use only part of the information. As an example consider the parameterization of r (the net reproductive rate) for the equation,
$\frac{\partial u}{\partial t} = D\frac{\partial^2 u}{\partial x^2} + r u$ (eqn 1)
The solution to Equation (1) is u(x,t), a population density that describes how the population changes in space and time; another result is that the radius of the species range increases at a rate c=$\sqrt{4rD}$. To validate the model, I will estimate c from species range maps (see Figure 1). To estimate r, I will use data on the change in population density taken from a core area (this approach is suggested in Shigesada and Kawasaki (1997): Biological invasions, pp. 36-41. See also Figure 1). To estimate D, I will use data on wolf dispersal taken from satellite collars.
Returning to the question. But, is this data, describing the density of wolves in the core area, independent of the species range maps used for validation? The species range maps, at any point in time, provide information on both the number of individuals and where these individuals are. The table that I used for the model parameterization is recovered from the species range maps by ignoring the spatial component (see Figure 1).
Figure 1. The location of wolves at time 0 (red), time 1 (blue) and time 2 (green). The circles are used to estimate, c, the rate of expansion of the radius of the wolves’ home range at t=0,1,2. The population size at t=0,1,2 is provided in the table. The core area is shown as the dashed line. Densities are calculated by dividing the number of wolves by the size of the core area. The reproductive rate is calculated as the slope of a regression on the density of wolves at time t versus the density at time t-1. For this example, the above table will only yield two data points, (3,5) and (5,9).
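Following the caption's recipe, the two table points give the regression slope directly, and with a diffusion coefficient in hand the spreading speed follows. (The value of D below is a made-up placeholder, since the post does not give one.)

```python
import math

# Density pairs (density at t-1, density at t) from the table in Figure 1.
pairs = [(3.0, 5.0), (5.0, 9.0)]

# Least-squares slope of density(t) against density(t-1); with two points
# this is just the rise over the run.
xbar = sum(x for x, _ in pairs) / len(pairs)
ybar = sum(y for _, y in pairs) / len(pairs)
slope = (sum((x - xbar) * (y - ybar) for x, y in pairs)
         / sum((x - xbar) ** 2 for x, _ in pairs))

D = 1.0                       # placeholder diffusion coefficient from dispersal data
c = math.sqrt(4 * slope * D)  # predicted rate of expansion of the range radius

print(slope)  # 2.0
print(c)      # sqrt(8), about 2.83
```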
While the data for the parameterization of r, and the validation by estimating c, seems quite related, the procedure outlined in Example 2 is still a strong test of Equation (1). Equation (1) makes some very strong assumptions, the strongest of which, in my opinion, is that the dispersal distance and the reproductive success of an individual are unrelated. If the assumptions of equation (1) don’t hold then there is no guarantee that the model predictions will bear any resemblance to the validation data. Furthermore, the construction of the table makes use of the biological definition of r, in contrast to a fully phenomenological approach to parameterization which would fit the equation u(x,t) to the data on the locations of the wolves to estimate r and D, and would then prohibit validation for this same data set.
So, what are the requirements for independent model parameterization and validation? Are the expectations different for mechanistic versus phenomenological models?
Posted in Great models
# Blogging for MPE 2013
Posted on August 30, 2012 by
On the Mathematics of Planet Earth (MPE) webpage there is a call for bloggers. This is a great initiative and one that I would love to see really take off. There are already some good mathematical biology-related blogs out there, and the MPE initiative is likely to bring more attention to blogging around the topic of mathematical biology.
Here at Memorial University of Newfoundland, as part of MPE, we are proud to be hosting the AARMS Summer School on Dynamical Systems and Mathematical Biology. This summer school consists of 4 courses over 4 weeks from July 15 to August 9, 2013. These courses can often be transferred for credit at the student’s home institution and will be taught by leading experts in each of the focus areas. The city of St John’s offers a vibrant downtown, urban parks and walkways, and stunning coastlines. More information to follow.
Posted in MPE2013
http://mathoverflow.net/questions/19895?sort=votes

## Universal group?
I can construct a finitely presented group $G$ with the following property (which I use to construct something else).
Given a finitely presented group $\Gamma$, there is a subgroup $G'\le G$ of finite index such that $$\Gamma=G'/\langle\mathrm{Tor}\, G'\rangle ,$$ where $\mathrm{Tor}\, G'\subset G'$ is the set of all elements of finite order.
I propose to call such a group $G$ universal.
Questions:
• Was it already constructed?
• Does it already have a name? Is there any closely related terminology?
P.S.
• The group which I construct is in fact hyperbolic.
• My construction is simple, but it takes 2--3 pages. Let me know if you see a short way to do it.
• Here, the term "universal group" was used in very similar context (thanks to D. Panov for the reference).
• Thanks to all your comments, we call them "all-inclusive" actions now.
I advise against the word "universal", without more context at least. Call it Anton-universal or the Petrunin-Swiss-Army Group, or some useful modification of some synonym for "universal". Gerhard "Ask Me About System Design" Paseman, 2010.03.30 – Gerhard Paseman Mar 30 2010 at 23:48
Well, Swiss-Army Group is a nice name. But why not universal? --- after quick search I did not see that term "universal group" is used... – Anton Petrunin Mar 31 2010 at 0:34
The closest condition I've heard of is "SQ-universal": en.wikipedia.org/wiki/SQ_universal_group Your group satisfies a very strong form of "SQ-universal in the class of finitely presented groups". – Agol Mar 31 2010 at 1:33
This property seems far too specific to be called simply "universal". I'd go with something like "TQ-universal". If you want to know whether someone else has done this, I'd try looking at the work of Olshanskii and his students. – HW Mar 31 2010 at 1:41
Anton, by a "universal finitely presented group" one usually means a finitely presented group that contains each finitely presented group as a subgroup. Such groups can be constructed via Higman's embedding theorem. If $Q$ is such a group, it is possible to cook up a hyperbolic group $G$ such that $Q$ is a quotient of $G$, and the kernel is normally generated by elements of finite order. This is of course not the same as what you do. – Igor Belegradek Mar 31 2010 at 3:07
## 1 Answer
I was asked to write an answer in order to move the question to answered status.
Thank you all for your comments; they were helpful for me and Dima.
I think if you accept your own answer, not only will members not think less of you, the question will not reappear because the MathOverflow user will not care anymore about your answer not being voted up. Gerhard "Ask Me About System Design" Paseman, 2011.10.08 – Gerhard Paseman Oct 9 2011 at 4:49
http://mathoverflow.net/questions/76791/quanitative-de-moivrelaplace-theorem-reference-request/76799

## Quantitative de Moivre–Laplace theorem (reference request)
The classical de Moivre-Laplace theorem states that we can approximate the normal distribution by discrete binomial distribution:
$${n \choose k} p^k q^{n-k} \simeq \frac{1}{\sqrt{2 \pi npq}}e^{-(k-np)^2 / (2npq)}.$$
My question is: are there more precise, quantitative versions of this theorem in the literature? Are there good estimates how to measure the error? I am unfortunately not familiar with the subject but need a result of this type.
Of course there is always the option of going through existing proofs and checking the details, and turning them from "soft" to "hard", but I suspect this has to be already done. And maybe this is not optimal, maybe there are good accessible ways.
Can someone point me a good reference in this direction?
Note that $p+q=1.$ – Will Jagy Sep 29 2011 at 20:07
And positive. Maybe I was writing the question a bit quickly assuming the result is too well-known... – András Bátkai Sep 29 2011 at 20:17
The way I was taught it, the really good approximation is by taking the integral of an appropriate normal PDF from $k - (1/2)$ to $k + (1/2).$ I guess you are saying something a bit different. – Will Jagy Sep 29 2011 at 20:17
Yes, you are doing something different. In beginning statistics classes, the idea is that the normal CDF is available in printed tables and in software (essentially the error function erf). So you take the fixed distribution with the same mean and variance as your binomial distribution. The integer parameter $k$ does not appear in the description of which normal distribution. – Will Jagy Sep 29 2011 at 20:22
Littlewood wrote a famous paper about very accurate estimates for the tails of the distribution, which might not be what you want (but it might). I haven't read it and can't access it, but you could start from this paper: On Littlewood's Estimate for the Binomial Distribution, B.D. McKay Adv. Appl. Prob. 21, 475-478 (1989) and related papers. – Zen Harper Sep 30 2011 at 3:54
## 3 Answers
Firstly, I think by "qualitative" you mean "quantitative". Secondly, while there is a huge literature on the quantitative versions of the central limit theorem, the canonical results can be found in Feller's Vol 2. For the center of the distribution there is the Berry-Esseen theorem, for the tails there is the large deviations theory, the introduction to which is also covered by Feller.
EDIT If you really care about the specific approximation of the binomial by the normal (or vice versa) you are just talking about the higher terms in the Stirling approximation to the factorial (and hence to the binomial coefficients). You can read all about it in, eg, Graham/Knuth/Patashnik's Concrete Math.
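As a quick numerical illustration of this answer (my own addition, not part of it): comparing the exact binomial probability at the center with the local normal value shows the relative error shrinking like $1/n$ there, consistent with the next Stirling term.

```python
import math

def binom_pmf(n, k, p):
    # Exact binomial probability via math.comb (Python 3.8+).
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def local_normal(n, k, p):
    # The de Moivre-Laplace local approximation from the question.
    q = 1 - p
    return math.exp(-(k - n*p)**2 / (2*n*p*q)) / math.sqrt(2*math.pi*n*p*q)

def center_rel_error(n, p=0.5):
    k = round(n * p)
    exact = binom_pmf(n, k, p)
    return abs(local_normal(n, k, p) - exact) / exact

print(center_rel_error(100))  # about 0.0025
print(center_rel_error(400))  # roughly 4x smaller, i.e. O(1/n) at the center
```

Away from the center the relative error degrades, which is why the tail estimates (large deviations, Littlewood-type bounds) are a separate story.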
Thank you. Unfortunately, applying these results you only get a $1/\sqrt{n}$ term, if I interpret it correctly. I was hoping that the special structure of the problem could lead to sharper results. (I edited the typo, thank you...) – András Bátkai Sep 29 2011 at 20:40
But I will have a closer look at Feller. – András Bátkai Sep 29 2011 at 20:41
See the edit for enhanced answer... – Igor Rivin Sep 29 2011 at 20:53
Thanks for the edited version! Yes, This is what I am after somehow... – András Bátkai Sep 29 2011 at 20:56
You just want a local limit theorem for a sum of i.i.d. Bernoulli random variables. A standard reference (not just for Bernoulli r.v.!) is "Sums of Independent Random Variables" by Petrov, in particular Chapter VII, §3.
Thanks, that is a very nice summary of the results you can also get from the wikipedia page of the normal distribution. I was hoping there is something deeper. – András Bátkai Sep 29 2011 at 20:03
also stats stackexchange – psd Sep 29 2011 at 20:21
http://scicomp.stackexchange.com/questions/3025/factor-a-non-symmetric-matrix-into-the-product-of-a-sparse-symmetric-matrix-and

# Factor a non-symmetric matrix into the product of a sparse symmetric matrix and a diagonal matrix plus a low rank correction
I have a non-symmetric matrix, where the non-symmetry only appears at a subset of points. This arises due to the particular manner in which boundary conditions are applied in a Cartesian grid method. I'm looking for a way to factor the matrix into the product of a symmetric sparse matrix and a diagonal matrix with a low rank correction so I can symmetrize the system as suggested here:
How far is a non-symmetric discretization of an elliptic operator from the continuous operator itself?
## 1 Answer
There is no way you really mean product, since the product of a low rank matrix with anything is also low rank. Given non-symmetric $A$, perhaps you are looking for
$$S = \frac 1 2 (A + A^T)$$ $$N = \frac 1 2 (A - A^T).$$
If $A = S + N$ is symmetric in most of the domain, then the non-symmetric part $N$ will be low rank and sparse.
Of course a sum is much less useful than a product if you were interested in solving this system. In particular, if $N$ is nonzero at boundaries, the rank is not small enough for specialized tricks to apply. What was really intended in the question you reference is to reformulate your problem, not to brute-force decompose or symmetrize.
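A small pure-Python sketch of the split above (my own illustration, not part of the answer): when $A$ fails to be symmetric in only one row, the skew part $N$ is nonzero only in that row and column, i.e. it has the form $e_1 v^T - v e_1^T$ and hence rank at most 2.

```python
n = 5
# A symmetric base matrix (deterministic, illustrative values).
A = [[float(min(i, j) + 1) for j in range(n)] for i in range(n)]
# Break symmetry in row 1 only, mimicking boundary rows in a Cartesian grid method.
for j in range(n):
    A[1][j] += 0.5 * (j + 1)

S = [[0.5 * (A[i][j] + A[j][i]) for j in range(n)] for i in range(n)]  # symmetric part
N = [[0.5 * (A[i][j] - A[j][i]) for j in range(n)] for i in range(n)]  # skew part

# N lives entirely in row 1 and column 1, so its rank is at most 2.
nonzero = [(i, j) for i in range(n) for j in range(n) if abs(N[i][j]) > 1e-12]
print(all(i == 1 or j == 1 for i, j in nonzero))  # True
print(all(abs(A[i][j] - S[i][j] - N[i][j]) < 1e-12
          for i in range(n) for j in range(n)))   # True: the split is exact
```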
I suppose I should have said an identity matrix plus a low rank correction. – John Mousel Aug 9 '12 at 14:29
@JohnMousel Please fix the title and body of your question. – Jed Brown Aug 9 '12 at 17:19
http://rjlipton.wordpress.com/2011/08/05/give-me-a-lever/

## a personal view of the theory of computation
The role of levers in finding mathematical proofs
Archimedes of Syracuse (c. 287 BC to c. 212 BC) was a Greek mathematician, whose insights led him to many inventions and practical ideas, and to work in physics and astronomy.
Today I wish to talk about the mathematical equivalent of the lever.
Archimedes invented many things, but not the lever, which may have been invented by Archytas of Tarentum. Archimedes is famous for the quote:
Give me a place to stand on, and I will move the Earth.
Of course what he meant was: in principle with a large enough lever and a place to stand the strength needed to move even something as heavy as the Earth would be possible. This is pretty impressive given that the Earth’s mass is ${5.9722 \times 10^{24}}$ kg. His standing place would have needed to be somewhere beyond the Andromeda Galaxy—see here for some simple estimates—but his mathematical proof of possibility needed no galactic figures.
Levers
I was just recently in Ann Arbor at the Coding, Complexity and Sparsity Workshop, which was organized by Anna Gilbert, Martin Strauss, Atri Rudra, Hung Ngo, Ely Porat, and Muthu Muthukrishnan. It was a wonderful experience, and I am planning shortly to discuss some of the great talks that were given there. The workshop website should soon have the talks available so that you can at least see the slides, although there is nothing like being in the room listening to a good talk.
I realized during the workshop that several of the talks—not all—had essentially used a lever to solve their particular problem. In this sense a lever is some trick, insight, or idea that allows one to make a start on solving a problem. It is not the full solution; it is not an essential idea. There may be ways to solve the problem without the lever, but the lever does allow us to paraphrase Archimedes: Give me a place to start, and I will prove the theorem.
One thing that I realized too was that papers and even talks often gloss over any lever. For some reason the writers and speakers do not think it is worth making a big deal about it. One possible reason is that to them, who are likely experts in the area, the lever seems so simple—why state it explicitly? Another reason is that the lever often is a pretty simple idea, so why make a big deal out of it? Often other parts of the proof are much harder and more technical.
An example of a trivial lever is a transformation that is used often in analysis. Suppose one needs to prove something about a function ${f(x)}$. Replace the function by
$\displaystyle g(x) = f(x) -f(0).$
Often this change does not affect the theorem being proved, but now ${g(0) = 0}$, which could dramatically reduce the number of cases that are needed later in the proof. It is a lever in my sense.
Archimedes’ Lever
It seems that Archimedes himself actually used the lever as a proof lever in my sense. Or rather he used the lever to convince himself something was true, and then generated a proof by other means.
Archimedes is known for estimating or computing areas and volumes by the older method of exhaustion, which sandwiches the object being analyzed between simpler shapes, computes their areas or volumes, and uses them for upper and lower bounds or to demonstrate convergence. For example, by wrapping a sphere of radius ${1}$ in progressively tighter polygonal bounds, one can prove the value ${(4/3)\pi}$ for the volume. However, it seems this was not his idea of first resort.
According to scholarship summarized here, Archimedes instead considered a cone of height ${2}$ and base radius ${2}$. He found that for each ${x}$, ${0 \leq x \leq 2}$, he could hang a slice of the cone and a slice of the sphere whose areas added up to ${2\pi x}$ at distance ${1}$ on one side of the lever. These would balance a circle of radius ${\sqrt{2}}$ hung at distance ${x}$ on the other side of the lever. The circles formed a cylinder of height ${2}$ whose center of gravity was at distance ${1}$ on the other side. Since all the mass on the near side was at distance ${1}$, the two masses and hence volumes had to be equal. Since Archimedes knew the cylinder had volume ${4\pi}$ and the cone had volume ${8\pi/3}$, he obtained volume ${4\pi - 8\pi/3 = 4\pi/3}$ for the sphere.
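The balance is easy to check numerically (a sketch of my own; the slice formulas below just transcribe the construction, with the sphere centered at 1 and the cone's apex at the fulcrum):

```python
import math

def sphere_slice(x):
    # Unit sphere centered at x = 1: slice radius^2 = 1 - (x - 1)^2 = 2x - x^2.
    return math.pi * (2 * x - x ** 2)

def cone_slice(x):
    # Cone with apex at 0, height 2, base radius 2: slice radius x at distance x.
    return math.pi * x ** 2

dx = 1e-4
mids = [(i + 0.5) * dx for i in range(int(2 / dx))]

# Moments about the fulcrum: sphere and cone slices all hang at distance 1;
# each cylinder slice (area 2*pi, radius sqrt(2)) hangs at its own distance x.
near = sum((sphere_slice(x) + cone_slice(x)) * dx for x in mids) * 1.0
far = sum(2 * math.pi * x * dx for x in mids)

sphere_volume = sum(sphere_slice(x) * dx for x in mids)
print(abs(near - far))  # ~0: the lever balances
print(sphere_volume)    # ~4*pi/3 = 4.18879...
```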
Archimedes was so enchanted by this that he had a cylinder and a sphere placed on his gravestone. We believe he felt that his infinitesimal slices presaged a new way of calculating—which Newton and Leibniz turned into the calculus 1,800 years later. Perhaps he even perceived the singular difference between his foliations of the sphere and cone on one side, and the cylinder on the other side.
Two Other Simple Examples
Maximum-likelihood estimation often involves setting parameters to maximize the probability of a series of independent events. Since the events are independent, this is just the product
$\displaystyle p = \prod_i p_i$
of the events. With many events, ${p}$ may be a very small number, so that numerical accuracy becomes a problem, and differentiating this formula to find a maximum may also consume much effort. However, since the logarithm is a continuous strictly increasing function in the range of these probabilities, it is equivalent to maximize the log of this product. If one wishes to keep all quantities non-negative, one can instead minimize
$\displaystyle \sum_i \log(1/p_i).$
This preserves numerical accuracy and is easier to differentiate.
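For instance (a toy sketch with invented data): estimating a Bernoulli success probability from 10 trials with 7 successes by minimizing the sum of negative log-probabilities lands on the sample mean, exactly as calculus predicts.

```python
import math

observations = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1]  # 7 successes in 10 trials

def neg_log_likelihood(p):
    # Sum of log(1/p_i): each success contributes log(1/p), each failure log(1/(1-p)).
    return sum(math.log(1 / p) if x else math.log(1 / (1 - p)) for x in observations)

# A coarse grid search; the function is convex, with its minimum at 7/10.
grid = [i / 100 for i in range(1, 100)]
p_hat = min(grid, key=neg_log_likelihood)
print(p_hat)  # 0.7
```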
The paper “How Powerful are Random Strings” by Eric Allender, Luke Friedman, and William Gasarch, which we featured here, also starts with a lever. Instead of using the random-string set directly as an oracle, they create a different oracle out of the overgraph ${\{(x,y): y \geq f(x)\}}$ of a related entropy function ${f}$. They show that the two oracles are equivalent for the complexity reductions used in their main theorem, but find the overgraph-oracle easier to analyze. As usual see their paper for details.
A Nontrivial Example
I would like to give a beautiful example of a lever from one of the talks at the workshop. David Woodruff spoke on “${(1+\epsilon)}$–Approximate Sparse Recovery,” which is based on joint work with Eric Price, and will appear this fall at FOCS 2011.
The problem is to discover a ${k}$-sparse vector that approximates a given signal. Here ${k}$-sparse means that the vector has at most ${k}$ non-zero coordinates. This is a major problem in the area of approximation and compressed sensing. I will not try to even begin to survey this huge body of work.
David at the beginning of his talk said: let’s consider just the special case of ${k=1}$. This was his lever. He did not make any fuss about the lever, and he proceeded to use this lever in both upper bound and lower bound results. He did point out that the ability to solve the case where ${k=1}$, where there is one large signal, is easily justified. One can take the input vector
$\displaystyle x_{1},x_{2}, \dots, x_{n}$
and use random sampling to divide the signal up into ${k}$ pieces. Likely each piece will contain at most one big signal—that is, ${k=1}$ will hold within each piece. Then one could use the ${k=1}$ algorithm on each piece to find all the ${k}$ large signals. There is a bit more needed to make this reduction work, but the idea is fairly simple. A great lever.
Once David uses this lever, all his calculations after that are much simpler. There is a fair bit of estimation and employment of inequalities needed to make the ${(1+\epsilon)}$ recovery algorithm work correctly. These calculations are much easier to follow, and probably were easier to discover in the first place, by restricting the situation to ${k=1}$.
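Here is a toy version of that reduction (my own illustration, not from the talk, with a deterministic hash of index mod B standing in for random sampling): split the coordinates into buckets so each bucket likely sees at most one heavy coordinate, then run the k = 1 step, taking the largest entry, inside each bucket.

```python
# Signal: 3 large spikes on a small noise floor (synthetic data).
n, B = 100, 10                     # signal length, number of buckets
spikes = {5: 10.0, 14: -8.0, 23: 9.0}
x = [spikes.get(i, 0.01 * ((i * 7) % 5 - 2)) for i in range(n)]  # deterministic "noise"

# Hash each coordinate to a bucket; here i mod B, for reproducibility.
buckets = {b: [] for b in range(B)}
for i in range(n):
    buckets[i % B].append(i)

# k = 1 recovery inside each bucket: keep the single largest-magnitude entry.
recovered = {}
for b, idxs in buckets.items():
    i_star = max(idxs, key=lambda i: abs(x[i]))
    recovered[i_star] = x[i_star]

# Keep the k = 3 largest of the per-bucket winners.
top3 = sorted(recovered, key=lambda i: -abs(recovered[i]))[:3]
print(sorted(top3))  # [5, 14, 23]: the spikes, since here they hash to distinct buckets
```

With a real random hash one argues that collisions among heavy coordinates are unlikely, which is the extra work the reduction needs.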
This Lever Extends
David, in his talk, also outlined several lower bound theorems. Each was for a different version of the recovery problem, since there were several parameter regimes and models.
Lower bounds are hard to prove in general—we have so few of them. But in this area of signal recovery the general paradigm is possible since the lower bounds are essentially counting how many measurements are needed to solve a recovery problem. This sounds like information theory or communication complexity theory, and it is. David shows how to use known—often very deep—results from both areas to prove that too few measurements would lead to a violation.
Again the lever comes to the rescue. The proof for the ${k=1}$ case is often not too difficult—that is not to say easy. Then to prove the general case one must be careful. Lower bounds on computing one object can change when we are trying to compute several objects. But the lever helps point out that this obstacle must be addressed. David uses some communication product theorems—again some are quite powerful and deep—to prove his lower bounds.
Open Problems
What are some of your favorite levers? Is this notion helpful to make explicit?
from → History, People, Proofs

33 Comments
1. August 5, 2011 11:57 pm
How would a ‘lever’ differ from ‘equivalence’?
• rjlipton *
August 6, 2011 8:07 am
Bhupinder Singh Anand,
A lever is, in this sense, a device that makes progress on a problem. It need not be an equivalence. This is seen in the lower direction where the lever is used by David but is not exactly the same problem.
2. Allen Knutson
August 6, 2011 12:22 am
“Archimedes’ Theorem” is that if you place a sphere inside a cylinder of the same radius and height, and consider the horizontal inward projection of the cylinder onto the sphere, that is area-preserving. In particular the easy area of the cylinder (2 r * 2 pi r) equals the area of the sphere. Or, if you want to draw a map using this, you’ll get the areas right. (But you won’t turn great circles, which people want to navigate along, into straight lines on the map, so it’s not so popular.)
I suspect that’s why the sphere and cylinder are on his gravestone, not for the volumetric calculation.
• August 6, 2011 3:19 am
According to Plutarch, it is as we wrote:
Πολλῶν δὲ καὶ καλῶν εὑρετὴς γεγονὼς λέγεται τῶν φίλων δεηθῆναι καὶ τῶν συγγενῶν ὅπως αὐτοῦ μετὰ τὴν τελευτὴν ἐπιστήσωσι τῷ τάφῳ τὸν περιλαμβάνοντα τὴν σφαῖραν ἐντὸς κύλινδρον, ἐπιγράψαντες τὸν λόγον τῆς ὑπεριχῆς τοῦ περιέχοντος στερεοῦ πρὸς τὸ περιεχόμενον.
Wikipedia says the same here, but of course we consulted a primary source .
• August 11, 2011 10:37 pm
Funny—now (five days later) the Google Translate of the Greek seems to have changed from what I remember, and it’s not so clear. I wish I’d saved it, as I sure thought it showed enough words to prove my point. Anyway, here is the link to the translation I used:
——-
Plutarch (AD 45-120), Parallel Lives: Marcellus
“And although he made many excellent discoveries, he is said to have asked his kinsmen and friends to place over the grave where he should be buried a cylinder enclosing a sphere, with an inscription giving the proportion by which the containing solid exceeds the contained.”
• August 23, 2011 10:16 am
“… inventor of many and good [things] , it is said he pleaded his friends and his relatives that after the ceremony they would mount on the grave that would contain him the sphere inside the cylinder, inscribing the ratio of supremacy of the containing solid over the contained.”
I tried to keep the words as close to the original as possible. I studied ancient Greek in high school; it is mandatory in Greece. This scripture is of the Hellenistic period and quite similar to modern Greek, apart from some weird grammar. Golden-era ancient Greek is a lot harder to translate.
3. Alan Kay
August 6, 2011 7:10 am
Hi Dick
There is a very nice “lever” used by Newton to show why a particle inside a hollow sphere out in space will not feel the gravity of the sphere. This one works with junior high school kids. But for years I’ve been asking physicists for a simple proof of “the biggie”, which is that the attraction of two non enclosed bodies acts as though the force is being exerted from the center of the bodies.
Anyone know of any levers for this one?
Cheers,
Alan
• Jarred
August 6, 2011 7:46 am
@Alan Kay:
• Alan Kay
August 6, 2011 8:56 am
I’m supposed to wade through a 98 minute oral tradition lecture for your answer?
Most readers are more than 5 times more efficient than this!
And isn't the purpose of asking a specific question to solicit a specific answer first, before moving on to larger contexts?
Let me try again here. I'm looking for a lever that works for junior high school students and have already polled quite a few physicists for this (including Leon Lederman). If you know of such an approach, then please just write it down in a few sentences.
Best wishes,
Alan
• August 6, 2011 7:11 pm
SpongeBob’s “Proof Without Words” is a start (some additional hand-waving will be required).
• James
August 6, 2011 12:51 pm
Obviously, the force is not the same. Let A be a sphere centered about some point c, and let x be a point mass outside of A. Suppose A is orbiting around x. Then if we start crushing A into its center c, x will start feeling more gravitational force.
Despite this, the center of mass, c, of A will *move* as if it were a point mass of constant mass, orbiting around x. This is because the acceleration of c doesn’t depend on how sparse A is.
To prove this, I’ll kind of cheat, but not badly. We’ll pretend that A is actually two points, a1 and a2, of equal mass. Then c lies halfway between a1 and a2.
It’s easy to check that the acceleration of c is just the average of the accelerations of a1 and a2. (If a1, a2 were different masses, it would be the weighted average).
OK, I have to go now, but work through the calculations. It should take you 10 minutes. You'll see the acceleration you get for c, as a center of mass, is equal to the acceleration you would get for c', a point mass.
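Both halves of James's point can be seen in a few lines: for two equal masses, the acceleration of the midpoint c is exactly the average of the two accelerations (Newton's second law for the center of mass), yet that average differs from what a point mass located at c would feel. The positions and units below are made up for illustration:

```python
import math

G_M = 1.0  # gravitational parameter of the external point mass at the origin (made-up units)

def accel(p):
    """Acceleration of a test particle at p toward a unit-GM point mass at the origin."""
    r = math.hypot(p[0], p[1])
    return (-G_M * p[0] / r**3, -G_M * p[1] / r**3)

# Two equal point masses standing in for the body A, and their midpoint c.
a1 = (2.0, 0.5)
a2 = (2.0, -0.5)
c = ((a1[0] + a2[0]) / 2, (a1[1] + a2[1]) / 2)

acc1, acc2 = accel(a1), accel(a2)
avg = ((acc1[0] + acc2[0]) / 2, (acc1[1] + acc2[1]) / 2)  # acceleration of c (Newton's 2nd law)
pt = accel(c)                                             # what a point mass at c would feel

print("average of the two accelerations:", avg)
print("point-mass acceleration at c:    ", pt)  # different magnitude: the forces are not the same
```

The spread-out pair is pulled less strongly than a point mass at c would be, which matches James's remark that crushing A toward its center increases the force.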
4. mathchick
August 6, 2011 8:02 am
GAUSS LAW FOR GRAVITY
PLUS intuitive (fluid flow) statement of the divergence theorem.
Apply to a sphere.
Thus, gravity attraction of uniform sphere is same as point mass.
• Alan Kay
August 6, 2011 8:49 am
For junior high students?
Cheers,
Alan
5. mathchick
August 6, 2011 8:04 am
Similarly, for any homogeneous body, or apply to a cube, and approximate the body as a limit
of cube unions.
• Alan Kay
August 6, 2011 8:58 am
Ditto
• August 6, 2011 1:05 pm
+1
6. August 6, 2011 1:13 pm
Of course, what counts as trivial is always dependent on what level of abstraction you’re looking at at any time. Personally, I’m tickled by the switch of the two-population mean alternative hypothesis μ1 > μ2 to μ1-μ2 > 0, which makes the sampling distribution of the point estimate x’1-x’2 a sum of observations, and thus just another normal variable (in the limit).
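The point can be illustrated by simulation. Under assumed population parameters (my choices below), the replicated point estimate x'1 - x'2 behaves like a single normal variable with mean mu1 - mu2 and variance sigma1^2/n1 + sigma2^2/n2:

```python
import random
import statistics

random.seed(7)

# Hypothetical population parameters and sample sizes (illustrative choices).
mu1, sigma1, n1 = 5.0, 2.0, 40
mu2, sigma2, n2 = 3.0, 1.5, 50

def one_diff():
    """One replicate of the point estimate x'1 - x'2."""
    x1 = [random.gauss(mu1, sigma1) for _ in range(n1)]
    x2 = [random.gauss(mu2, sigma2) for _ in range(n2)]
    return statistics.fmean(x1) - statistics.fmean(x2)

diffs = [one_diff() for _ in range(10_000)]
emp_mean = statistics.fmean(diffs)
emp_var = statistics.variance(diffs)
theory_var = sigma1**2 / n1 + sigma2**2 / n2  # variance of a difference of independent means

print(emp_mean, "vs", mu1 - mu2)   # close to 2.0
print(emp_var, "vs", theory_var)   # close to 0.145
```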
7. Cristopher Moore
August 6, 2011 2:13 pm
Probably the biggest (and most widely effective) lever in physics is that the simplest possible answer is the right one. Of course, while this is very well borne out by experience—even Einstein's equations are the simplest possible way to couple mass and energy to the curvature of space-time—this is ultimately a religious belief about the nature of reality.
On the other hand, in mathematics the simplest possible answer is _often_ right, and the probability seems to go up when we ask beautiful, natural questions. For instance, many natural combinatorial problems have bijections between them. It’s easy to invent ugly problems that are unrelated to everything else, but the beautiful problems are often related to each other. Indeed, this is a good way to tell whether the problem you’re working on is beautiful or not.
8. Alan Kay
August 6, 2011 7:36 pm
Re: the comment of August 6, 2011 7:11 pm:
The Sponge Bob example starts with a point source and is one way to guess “inverse square” if there is reason to think that this is the way various kinds of radiation propagate.
But it is a different matter to deal with something that is not a point source and is close. There are reasonable symmetry arguments to support the idea that the composite attraction of the many particles is along the line of the center, but it is not easy to make a simple argument that the composite also acts as though all the mass is at the center.
By the way note that light radiation near a large glowing body does not act as though it comes from a point source (e.g. Sun and Earth). This is one of the problems with Mathchick’s “intuitive” explanation.
• August 8, 2011 8:33 am
Alan, the simplest elementary argument I can think of for point-source equivalence is to combine the “Sponge Bob” geometric argument with a “telescoping series” argument (see Wikipedia article of the same name).
That is, replace a single shell of mass $m$ with $n+1$ concentric shells having alternating mass $+m, -m, +m, \ldots, -m, +m$. Obtain the same net force by telescoping the series from the left and from the right, thus proving that the force exerted by a point mass (telescoping from the left) is equal to the force exerted by a shell having the same mass (telescoping from the right).
While non-rigorous, this approach at least unites a fundamental geometric insight ("Sponge Bob") with a fundamental combinatorial insight (a telescoping series). Needless to say, these are two fabulous levers.
To deal with young formalists who insist on rigor … make sure the school library has a paperback volume of Spivak's slender masterpiece Calculus on Manifolds. That's sure to keep them busy for a while!
• Alan Kay
August 8, 2011 9:53 am
Hi John
We are talking about 8th graders here, so I think Calculus on Manifolds is for a later time.
The "telescoping series" argument is an interesting one, but it doesn't seem to be particularly intuitive ("-m" ?).
I have been waiting to see if anyone would suggest “experimental math” using a computer. For example, a very simple vector sum argument establishes that the line of the force will be through the center of the sphere.
What we want is the magnitude of the force along this line. But just as you can get a nice estimate for pi by sampling a square with a circle in it — 8th graders find this interesting and fun — you can also get a pretty good estimate of the resultant force by sampling a sphere and just adding in each force vector on the located particle.
This is not a proof but it is “highly suggestive”. And could lead to a more exhaustive non-random sampling of the sphere that suggests it could be made up of differential “particles” (as mathchick suggested).
Cheers,
Alan
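Alan's "experimental math" suggestion is easy to sketch: sample the shell uniformly, sum the force vectors on an outside particle, and compare with the point-mass prediction GM/D^2. This is a hedged sketch (sample size, units, and names are mine):

```python
import math
import random

random.seed(1)

def sample_on_sphere(R):
    """Uniform random point on a sphere of radius R (normalize a Gaussian 3-vector)."""
    while True:
        v = (random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
        norm = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
        if norm > 1e-12:
            return (R * v[0] / norm, R * v[1] / norm, R * v[2] / norm)

def net_force(R, D, n=100_000):
    """Net force on a unit test mass at (D, 0, 0) from a unit-mass shell of radius R (G = 1)."""
    fx = fy = fz = 0.0
    m = 1.0 / n  # each sample point carries an equal share of the shell's mass
    for _ in range(n):
        x, y, z = sample_on_sphere(R)
        dx, dy, dz = x - D, y, z
        r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        fx += m * dx / r3
        fy += m * dy / r3
        fz += m * dz / r3
    return fx, fy, fz

R, D = 1.0, 3.0
fx, fy, fz = net_force(R, D)
print("sampled force:        ", (fx, fy, fz))
print("point-mass prediction:", (-1.0 / D**2, 0.0, 0.0))  # as if all mass sat at the center
```

As Alan says, this is not a proof, but it is "highly suggestive": the transverse components cancel and the radial component matches the point-mass value to within sampling noise.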
• August 8, 2011 1:08 pm
Alan, it's true that Calculus on Manifolds is pretty tough sledding for kids … and yet, at pretty much any age it can be inspiring to read the Prefaces and Forewords to pretty much any mathematical textbook. This reason alone is sufficient justification to include a few advanced math books in any school library … a good recent example (IMHO) is Bill Thurston's Foreword to Mircea Pitici's anthology The Best Writing on Mathematics 2010.
9. Jim Blair
August 7, 2011 9:12 am
I am not a big fan of “Laws of Form” by G. Spencer-Brown, but “A Note on the Mathematical Approach” that appears at the beginning of the book is one of the more remarkable attempts to leverage a simple idea into a grand scheme of things:
“The theme of this book is that a universe comes into being when a space is severed or taken apart. The skin of a living organism cuts off an outside from an inside. So does the circumference of a circle in a plane. By tracing the way we represent such a severance, we can begin to reconstruct, ….., the basic forms underlying linguistic, mathematical, physical and biological science, …..”
“Laws of Form” was written over 40 years ago when the idea of “multiverses” would have been dismissed as “Gee Whiz Physics”.
If the Theory of Multiverses takes off, the historians might have to promote him from “eccentric character” to “eccentric prophet”.
• Alan Kay
August 7, 2011 9:33 am
Computer folks will recognize that "Laws of Form" is essentially "Sheffer Stroke Logic" (which was actually identified earlier in the 19th century by Charles Peirce). It lives on today as NAND and NOR logic, either of which is sufficient to construct everything else. (To me) the most interesting thing in Brown's exposition is the notation, which does help paper and pencil evaluation of complex expressions. However, the logical paradoxes from this kind of logic merely "generate time" in the computer world, and are much easier to understand in that form.
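The universality Alan mentions is a five-line check. This illustrative sketch builds NOT, AND, and OR from NAND alone and verifies them against the built-in operators:

```python
def nand(a, b):
    """The Sheffer stroke: true unless both inputs are true."""
    return not (a and b)

# The other connectives, built from NAND alone.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

# Check against Python's built-in operators over every truth assignment.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NOT, AND, OR all recovered from NAND alone")
```

The same construction works with NOR (swap the roles of the two duals), which is why either gate suffices for "60s digital design".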
10. Richard
August 7, 2011 8:10 pm
http://www.tricki.org
11. August 8, 2011 6:30 am
The history of many branches of mathematics can be read as the sequential discovery of increasingly powerful levers. Consider for example the following nine levers of dynamics:
Lever 1: First law Conservation of energy constrains individual trajectories (Newton).
Lever 2: Second law Non-decreasing entropy further constrains ensembles of trajectories (Clausius).
Lever 3: Extremal action Symmetry is associated to further conservation laws (Noether).
Lever 4: Path integral relations Causal separability further constrains Hilbert dynamics; coordinate-free notations become essential (Faddeev-Popov).
Lever 5: Geometric dynamics Extends the above constraints to non-Lagrangian dynamics on non-Hilbert state-spaces (Cartan, Mac Lane, Ashtekar and Schilling).
Lever 6: KAM dynamics Spectral theorems generalized to integrable systems (Kolmogorov, Arnold, Moser).
Lever 7: Quantum operations In essence, the integration of measurement theory with path integrals (e.g., Mikhail Mensky’s book).
Lever 8: Dynamic compression Noisy/observed/evolved systems require exponentially fewer dynamical dimensions (Choi, Kraus, Carmichael & many more; see also Kohn-Sham).
Lever 9: Hyperpolarization engines The 21st century’s natural generalization of 20th century low-temperature physics and 19th century heat engines.
These levers are knit together by an evolving mathematical toolset that increasingly emphasizes considerations of universality and naturality. Every STEM generation fondly imagines itself near the end of this quest for dynamical universality and naturality … and yet nature keeps unveiling surprising new dynamical “levers” … which is good news for everyone (young researchers particularly).
What dynamical levers are missing from this list? What new dynamical levers may be coming in the 21st century? If you got 'em, please post 'em.
12. Anonymous
August 8, 2011 9:46 am
A reliable way to produce good approaches to research problems (and levers?), in my opinion, is simply to question assumptions. The problem is in knowing what assumptions one is making to begin with. Many examples come to mind of how one can be blind to one's assumptions; one example is the "connect all the dots of a 3 x 3 grid using 4 lines and without lifting your pencil off the page" problem (which I assume most everyone knows). Here, the assumption people often make without realizing it is that the lines should stay inside the grid. Of course they are not required to, and being aware of this is one key to one approach to solving the problem. (If you want another example, look up http://www.physicsforums.com/showthread.php?t=365381 and the heading "Murray Gell-Mann: Quark and the Jaguar: A Moment of Illumination".)
Once one becomes aware of the assumptions one is making, and questions them, and finds some of them to be erroneous, one can then use this as a basis for where to go next. To outsiders who have long believed in the assumptions themselves, the fruits of such work can seem like “magic”, though they are anything but.
13. Alan Kay
August 8, 2011 1:21 pm
Hi John
I like your notion that the prefaces and forewords of advanced math books can be inspiring to younger kids — in fact they were for me in the 40s and 50s (and very likely turned me towards a degree in pure math).
Still, I was lucky to have a few adults around who could answer questions, and were willing to.
Leon Lederman likes to say "If you read something that you don't understand, keep on reading and revisiting it and pretty soon it will be *hauntingly familiar*."
Cheers,
Alan
14. Jim Blair
August 9, 2011 9:50 am
I am curious about Alan Kay’s comment that “Laws of Form” is essentially “Sheffer Stroke Logic”.
Where would I find that intersection of ideas?
• Alan Kay
August 9, 2011 10:41 am
My recollection (from 40 plus years ago) is that he referenced Sheffer early, and Peirce a little later. I just dug the book out from the stacks in my library, and this is the case.
I first encountered this book because it was one of many that were listed in the Whole Earth Catalog. It was slim (which was nice), and obscure (which was annoying — especially when most of the obscurity was in his choice of terms and phrasing). This was odd, because his notation for “a crossing” was a pretty nice invention.
I think that the obscure way it is written has led to its attraction in some quarters — it seems more philosophically interesting than it is.
If one has done some “60s digital design” where thinking directly in NAND or NOR logic becomes second nature, then the deep correspondences become evident. The first published account of this kind of logic was done by Sheffer in 1913, and later it was discovered that Peirce had a section on these universal operators in the early 1880s in his unpublished writings.
Having waded through Russell & Whitehead as a teenager in the 50s, this was another example where the “computer version” of the ideas (and especially the paradoxes) was so much clearer and simpler.
Cheers,
Alan
15. Jim Blair
August 9, 2011 2:40 pm
In Appendix I of "Laws of Form", Spencer-Brown offers proofs of Sheffer's postulates, which he claims are the first such proofs.
My impression of Brown’s view of Sheffer: “Been there, done that, let’s move on.”
If Brown were reinventing the wheel, I am guessing Bertrand Russell would have noticed, having worked with both Sheffer and Brown.
Instead, the following quote is attributed to Russell:
"In this book Mr. Spencer-Brown has succeeded in doing what is very rare indeed. He has revealed a new calculus of great power and simplicity. I congratulate him."
## Does every bipartite graph with 512 edges have an induced subgraph with 256 edges?

(Source: http://mathoverflow.net/questions/111684?sort=oldest)
Suppose we have a (simple) bipartite graph with $2^k$ edges. Is it true that there is a subset of the vertices such that their induced subgraph has exactly $2^{k-1}$ edges?
I know that the answer is no for general graphs, since you can take a $K_6$ plus a disjoint edge. I also know that if we don't require the number of edges to be a power of 2, the answer is again no as shown by a $K_{5,9}$ plus a disjoint edge. I suspect that the answer to my question is also no.
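Both counterexamples mentioned in the question are small enough to confirm by brute force over all vertex subsets (a sketch; the vertex numbering is mine):

```python
from itertools import combinations

def induced_edge_counts(n, edges):
    """Edge counts achievable by induced subgraphs: brute force over all 2^n vertex subsets.
    Vertices are 0..n-1; edges is a list of (u, v) pairs."""
    counts = set()
    for mask in range(1 << n):
        counts.add(sum(1 for u, v in edges if (mask >> u) & 1 and (mask >> v) & 1))
    return counts

# K_6 plus a disjoint edge: 15 + 1 = 16 edges; the target 8 is unreachable.
k6_plus = list(combinations(range(6), 2)) + [(6, 7)]
counts_k6 = induced_edge_counts(8, k6_plus)
print(8 in counts_k6)    # False

# K_{5,9} plus a disjoint edge: 45 + 1 = 46 edges; the target 23 is unreachable.
k59_plus = [(i, 5 + j) for i in range(5) for j in range(9)] + [(14, 15)]
counts_k59 = induced_edge_counts(16, k59_plus)
print(23 in counts_k59)  # False
```

For $K_{5,9}$ plus an edge the counts are of the form $cd$ or $cd+1$ with $c \le 5$, $d \le 9$, and neither 23 nor 22 factors that way, which is what the brute force confirms.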
Is there some reason you would believe this to be true? – Igor Rivin Nov 6 at 22:20
It's hard to make a counterexample like $K_{a,b}$ plus a few edges, since these would contain an induced complete bipartite graph with parts of size $2^{\lfloor \log_2 a \rfloor},2^{\lfloor \log_2 b\rfloor}$. – Douglas Zare Nov 7 at 2:51
Trivial comment: It does work for trees with an even number of edges. Just successively remove leaves until half the edges remain. – Tony Huynh Nov 7 at 11:48
Is 512 / 256 just chosen randomly to make the question more memorable? Or do you know stuff about graphs with 16, 32, 64 etc edges. – gordon-royle Nov 8 at 6:43
Just randomly to keep the question tex-free and simple. – domotorp Nov 8 at 10:57
## 5 Answers
It's possible I messed up in my calculations, so by all means check it to be sure, but I think the following is an example of what you seek:
Start with a copy of $K_{5,103}$ with vertices $v_1\ldots v_5,w_1\ldots w_{103}$ and remove the edges $v_iw_i$ for $1\leq i\leq 5$. Add two copies of $K_2$ to make a total of 512 edges. Then unless I messed up somewhere in my calculation, no induced subgraph can have exactly 256 edges in it.
There were a number of other near misses I found (similar initial setups which had exactly 1 way (up to relabelling of vertices) to remove vertices leaving exactly 256 edges), so it shouldn't be hard to modify this construction to find an example if it turns out the above example fails after all.
Remove 50 of the degree 5 vertices, one of the degree 4 vertices, and all of the degree 1 vertices. Looks tweakable though. Gerhard "Back To The Bipartite Farm" Paseman, 2012.11.06 – Gerhard Paseman Nov 7 at 4:33
If you are going to tweak it, you have to avoid an extra edge plus K5,51; similarly K4,64 and 4 edges plus Kp,252/p will also be problems. Gerhard "Looks Like Counterexamples Completely Obstructed" Paseman, 2012.11.06 – Gerhard Paseman Nov 7 at 4:44
Nice try but Gerhard's first comment shows how to cut it. – domotorp Nov 7 at 7:41
Here is another near miss that might be salvaged. Take K13,29, and take another point and make it degree 5 into the (previously) complete graph while keeping the result bipartite. This also fails, but it seems to fail in not many ways. There might still be a counterexample that is not far from a complete graph. Gerhard "Ask Me About System Design" Paseman, 2012.11.07 – Gerhard Paseman Nov 7 at 16:54
Interesting... somehow I didn't notice that $K_{4,64}$ sitting in there. I need to go look at the loops I ran to figure out why they didn't detect it. – ARupinski Nov 7 at 22:30
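The cut in Gerhard Paseman's first comment (remove 50 of the degree-5 vertices, one degree-4 vertex, and all degree-1 vertices) can be checked mechanically. In this sketch the labels are mine: w_0..w_4 are the degree-4 vertices of the construction, and the x's are the two extra disjoint edges:

```python
# ARupinski's graph: K_{5,103} minus the matching v_i w_i (i < 5), plus two disjoint edges.
edges = set()
for i in range(5):
    for j in range(103):
        if i != j:
            edges.add((('v', i), ('w', j)))
edges.add((('x', 0), ('x', 1)))
edges.add((('x', 2), ('x', 3)))
print(len(edges))  # 512

# Gerhard's cut: drop 50 of the degree-5 w's, one degree-4 w, and the degree-1 vertices.
removed = {('w', j) for j in range(5, 55)} | {('w', 0)} | {('x', k) for k in range(4)}
remaining = [e for e in edges if e[0] not in removed and e[1] not in removed]
print(len(remaining))  # 256 -- an induced subgraph with exactly half the edges
```

The count works out as 512 - 50*5 - 4 - 2 = 256, so the proposed counterexample indeed fails.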
Domotorp and ARupinski likely know this already, but I thought I would record this as an initial foray into cornering a counterexample by a process of elimination. I will not bother with the general case, but focus on the specification of 256 out of 512. Let G be the collection of all bipartite graphs with 256 edges.
I will consider bipartite graphs only, and my concern is with the size of the smaller vertex set and how many edges can come from it. Certainly any node with degree at least 256 will contain an induced subgraph from G. Further any two nodes in the small set with combined degree of 256 or greater will also contain a subgraph from G. There is likely a better characterization than the following: any three vertices with combined degree at least 383 and any four vertices with combined degree at least 510 will produce a subgraph from G. (Note I am focusing on small independent vertex sets.)
Of course we can ignore vertices of degree 0. If we can characterize nicely the graphs with, say, an independent set of 3 vertices and a large number of edges (but fewer than 383) which do not have a subgraph from G, we might be able to use this to classify such graphs with larger independent sets, working our way up to 23 vertices, the rough square root of 512.
EDIT 2012.11.11 Unfortunately the analysis below is not quite right. One can find a subgraph of $K_{4,96}$ with $3*96=288$ edges which contains no induced subgraph from G. It turns out that if there are enough edges and the degrees of the larger set are anything but a multiset of 3's with at most one 2, then the conclusion holds and indeed $267$ edges are enough. I am confident that this line of investigation will produce something useful, but the treatment below is not enough. In particular, I am now unsure there is no counterexample which is not a subgraph of, say, $K_{7,n}$ for some $n$. END EDIT 2012.11.11
EDIT 2012.11.09 This problem is not exactly one about submultisets of integers and number theory, but taking that slant cuts a wide swath in the forest of bipartite graphs on 512 edges.
The major reason for needing 382 edges coming from 3 independent points while requiring fewer than 270 edges coming from 4 independent points can be viewed as purely number-theoretic: given a=3 and b=127, there are no integers c and d such that $0 \leq c \leq a$ and $0 \leq d \leq b$ and $cd=256$. So $K_{3,127}$ is a graph of 381 edges which has no induced subgraph belonging to G. However, number theory can be used to show 382 edges from 3 points suffice, as we can either remove one of the three points and work with the remaining 2, or we look at the one point with degree 128 and note it has enough neighbors of the right degree that we just need to remove neighbors of degree 3 (or smaller degree if we run out) to achieve an induced subgraph from G.
That 4 points requires a lot fewer edges results from just needing enough residue classes mod 4 to take care of any problems: either there is a $K_{4,64}$ subgraph hidden, or there are at least enough vertices of degree 1,2, and 3 to adjust the sum mod 4. As a result, it is clear that $252+ 4*3$ is enough edges to find a subgraph from G, so let's be generous and say a combined degree of 280 suffices for 4 vertices.
We can now leverage that estimate and say that for 5 (and 6 and 7) vertices that 350 (and 420 and 490) edges respectively between them are enough to find a subgraph from G, either by removing neighbors of the 5 vertices, or by removing the vertex among the five with smallest degree, reducing it to a previous case.
Since 8 divides 256, we need either find a subgraph of the form $K_{8,32}$ or enough vertices of smaller degrees to finish the job. Rough estimates give 304 as a sufficient combined degree, which we can now leverage to say that no counterexamples on 512 edges from 13 vertices will be found.
Likely we can extend it by analyzing the case of 12 vertices further, but I will save that for later. I now suspect that domotorp will not get his counterexample for bipartite graphs with $2^n$ edges for $n \lt 10$. END EDIT 2012.11.09
Gerhard "Inching His Way Toward Bounty" Paseman, 2012.11.08
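The number-theoretic observation about $K_{3,127}$ is a one-liner to confirm, since induced subgraphs of $K_{a,b}$ are exactly the $K_{c,d}$ with $c \le a$ and $d \le b$:

```python
# K_{3,127} has 381 edges; it avoids an induced 256-edge subgraph iff no c*d equals 256
# with c <= 3 and d <= 127.
hits = [(c, d) for c in range(3 + 1) for d in range(127 + 1) if c * d == 256]
print(hits)  # [] -- no induced subgraph of K_{3,127} has 256 edges
```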
It looks like $K_{3,127}$ does not contain an induced subgraph from G, but it seems I can replace 383 by 382 in the above, and the number 510 can probably be substantially lowered to below 300. Gerhard "I'm Surprised Too. Go Figure" Paseman, 2012.11.09 – Gerhard Paseman Nov 9 at 19:36
I think I see a way to show that any counterexample with 512 edges needs to have at least 14 vertices in the smaller set . It may be a quicker path to show the answer is actually yes. Gerhard "Race You To A Proof" Paseman, 2012.11.09 – Gerhard Paseman Nov 9 at 19:59
Here is where I could use some help: I think $K_{12,31}$ has no subgraph from G and thus we need 372 + 4*11 edges from 12 vertices. If so then we can push 13 to 14. Gerhard "Is It Just My Imagination?" Paseman, 2012.11.10 – Gerhard Paseman Nov 10 at 21:58
Actually, I can argue more comfortably: either $K_{14,32}$ is a subgraph or else we have 31 vertices of degree at most 14 and 4 more vertices to make up the lack, so 486 edges from 14 vertices is sufficient to find a graph from G. Gerhard "Ask Me About System Design" Paseman, 2012.11.10 – Gerhard Paseman Nov 10 at 22:11
Hi Gerhard, I cannot follow everything (like if there is no K_{14,32} subgraph, then why is it guaranteed that we have 31 vertices of degree at most 14?) but I do hope your approach will lead to a counterexample. – domotorp Nov 11 at 7:15
Not an answer, but I like this question, so I decided to document a dark alley I went down which should be avoided (or more optimistically tweaked).
Begin Dark Alley.
Initially, I tried to get a counterexample out of purely number-theoretic considerations. That is, let $2^{2k-1}-1$ be a Mersenne prime. Let $G$ be the graph consisting of the complete bipartite graph $K_{2^k+1, 2^k-1}$ together with an isolated edge $e$. Note that $G$ contains exactly $2^{2k}$ edges. Let $H$ be an induced subgraph of $G$ with exactly $2^{2k-1}$ edges. Note that if $e \in H$, then $K_{2^k+1, 2^k-1}$ contains an induced subgraph with exactly $2^{2k-1}-1$ edges. But this is impossible since $2^{2k-1}-1$ is prime. Unfortunately, $K_{2^k+1, 2^k-1}$ itself has an induced subgraph with $2^{2k-1}$ edges, by taking a subset of size $2^{k}$ from the left side and one of size $2^{k-1}$ from the right side. This is yet another instance of Douglas Zare's point: don't try to make a counterexample which is a complete bipartite graph plus a few edges.
End Dark Alley.
You are right, dear domotorp.
Suppose the second largest eigenvalue of the bipartite graph $G$ is one, i.e., $\lambda_2=1$. In this case, $G$ belongs to the family of bipartite graphs characterized in the paper below (there are infinitely many such graphs). With some calculation, we can see that your conjecture is true for this type of graph. Also, we can say more if we use some other spectral techniques.
The Paper Reference:
Petrovic M., On graphs with exactly one eigenvalue less than -1, J. Combin. Theory Ser. B 52 (1991), 102-112.
Can you please expand on your answer? I can't see how to prove the conjecture for the class of bipartite graphs with $\lambda_2=1$. – Tony Huynh Nov 12 at 12:13
Dear Tony, the calculation is long for each type of those graphs. But, I checked two infinite classes and they were true. I will try to write up my calculations for those cases (it is just a computation), but I do not have free time these days. You can see the paper and directly see the result. I added the reference; I hope it can be helpful. – Shahrooz Nov 12 at 20:48
If they are long, it might not be important to work out the details at this moment, as probably a counterexample exists. But if you have time, I am more than happy to read it. – domotorp Nov 12 at 21:28
Even though this is not a complete answer, there are enough elements in this posting that someone may be able to use them to show domotorp where not to look for a counterexample. I also want to separate it from the marginally useful clutter in my other post. Recall that I am focused on showing every bipartite graph H with exactly 512 edges has a subset of vertices which yields G, our target of an induced subgraph of H with exactly 256 edges.
A useful fact: if n is a prime power, if M is a multiset of positive proper divisors of n with sum equal to n, then there is a submultiset M' of M with sum kn/p, where k and p are positive integers, k is less than p, and p is the prime dividing n. A corollary of this fact is that any positive number of independent vertices whose degree sum is at least 256 and whose neighbors have degrees which are precisely powers of 2 will have an induced subgraph G on precisely that set of independent vertices. Edit: so that the corollary reads correctly, assume a subgraph H of K_a,b with degree sum of the a vertices at least 256 and the degrees of the b vertices are appropriate powers of 2. Then G subgraph of K_a,b' exists as an induced subgraph of H. End Edit.
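The "useful fact" can be brute-forced for small prime powers. This is an illustrative sketch (the enumeration strategy is mine, and only small n are feasible this way):

```python
def check(n, p):
    """Brute-force the 'useful fact' for the prime power n with prime p: every multiset of
    proper divisors of n summing to n should have a submultiset summing to some k*n/p, k < p.
    Returns a counterexample multiset, or None if the fact holds."""
    divisors = [d for d in range(1, n) if n % d == 0]
    targets = {k * n // p for k in range(1, p)}

    def multisets(rem, allowed):
        # all non-decreasing lists of allowed divisors summing to rem
        if rem == 0:
            yield []
            return
        for i, d in enumerate(allowed):
            if d <= rem:
                for rest in multisets(rem - d, allowed[i:]):
                    yield [d] + rest

    for M in multisets(n, divisors):
        sums = {0}
        for d in M:                      # all submultiset sums of M
            sums |= {s + d for s in sums}
        if not (sums & targets):
            return M
    return None

print(check(16, 2))  # None -- the fact holds for n = 16
```

The same call with (27, 3) checks the fact for a power of 3.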
From the corollary we get that domotorp won't find any counterexample graphs H which are subgraphs of K_a,b for a=1 or a=2. Further, for a=3 or 4, there won't be any counterexamples because at least two of the independent vertices of H will have degree sum at least 256. However, I want to refine the case of a=4 a bit.
Let J be a subgraph of K_4,b with number of edges n = 256 + 3k for some nonnegative integer k. Then J also has an induced subgraph G: remove the b vertices of degree 3 and whatever else is needed to achieve the target number of edges. If J has a wealth of degrees, remove vertices of degrees 1,2, or 4 until n=256+3k as above. Otherwise J has less than 264 edges or else the b vertices all have degree 3, with at most one exception which must be of degree 2. Now from the four independent vertices, remove from J that vertex which has smallest degree. The result will either have less than 260 edges, or will have a b vertex of degree 1 or at least two of degree 2, or a single edge will be removed leaving a K_3,b subgraph. In the first case J had less than 350 edges, the second and third cases will yield the goal graph G, and the final case will yield no G unless b is at least 128. The upshot is that if J has more than 381 edges, it will have a target graph G as an induced subgraph.
I worried the case a=4 to bits for a couple of reasons: one is to establish that any H which is a subgraph of K_5,b will have four of the five independent vertices with degree sum more than 400 (and thus will not be a counterexample), and two is to put a Rube Goldbergian type cap on this post for a=6. This proof idea is neat, and might be extendible, but I am going to give others the chance to do it.
Let H be a subgraph with 512 edges and be a subgraph of K_6,b. Note that if any two of the six independent vertices have degree sum at least 256 or any of those six has degree less than 12, I can turn to cases for a=2 and a=5 and assert that a target G exists.
I will now find four of the six vertices and hope that there is a G that uses those vertices. Note that we may assume the four vertices have degree sum at least 256.
First consider the degree sums of the six mod 3. The sum of the sums is $512 \equiv 2 \pmod 3$. Suppose two of the degrees mod 3 are 2. Then the remaining four have degree sum equal mod 3 to 256, so I can use that to produce G. So assume at most one of the six degrees is 2 mod 3. Then if there is one other nonzero degree mod 3, there is at least one which is zero mod 3, and those two I exclude from the four to get another degree sum equal mod 3 to 256, and again I get G. The remaining case that resolves nicely is if none are 2 mod 3, and again I can find G.
The last case is that one of the six vertices has degree 2 mod 3, and all the rest are 0 mod 3. Of the remaining 5, I can try to choose some subset of 4 and hope for that subgraph to not fall in the case where the multiset of b vertices foils me by having all threes or all threes and one two. But if I am so unlucky, then I take all 5 vertices to get a degree set with some fours or some ones, as well as some threes or twos (remember at least 12 degrees will be incremented, although some of them might have started at 0). So I can guarantee a multiset of degrees that allow the composition I want, and gain my prize graph G.
The things I do for bounty!
Gerhard "Wait Till You See Seven" Paseman, 2012.11.14
Looks like a=7 will be easy. Take the 4 vertices from the seven with largest degree sum. If there is no G, then the b multiset will be mostly threes, so change it by adding a fifth vertex. Gerhard "Can It Be That Easy?" Paseman, 2012.11.14 – Gerhard Paseman Nov 15 at 6:32
This suggests a strengthening of domotorp's conjecture. Let H be a bipartite graph with 512 edges and parts of sizes a and b, with a<=b, and let c=ceil(a/2). Let H' and H'' be the induced subgraphs having those c or c+1 vertices among the a vertices with maximal degree sum. Then G having 256 edges will be an induced subgraph not only of H but also of at least one of H' or H''. This might be a statement that gordon-royle can use. Gerhard "Thoughts, One Cent; Conjectures, Two" Paseman, 2012.11.14 – Gerhard Paseman Nov 15 at 7:04
I don't see why this conjecture would be true. Although for 512 it might happen to be true, but e.g. for 128 it is easy to make a counterexample. Take a $K_{9,14}$ and add two independent edges. The only induced subgraphs with 64 edges are $K_{8,8}$ and $K_{7,9}$ plus an independent edge. – domotorp Nov 15 at 7:28
Maybe I should drop the condition a<=b. Anyway I am doing my best to provide potential solution paths, even if I can only handle the cases of a < 8 currently. Gerhard "Missed It By That Much" Paseman, 2012.11.15 – Gerhard Paseman Nov 15 at 15:38
Try this: Let H have 512 edges inside K_a,b. Pick c which is half (or a little more) of the a vertices with degree sum >= 256. If you can't find a subset of the at most b many vertices to induce a subgraph using the c vertices, the induced degree multiset is quite restricted (has all but at most c-2 members with the same value, or the degree sum is not far from 256). If adding another a vertex of moderate degree to the c vertices does not help, then H is real close to a K_d,f. Vague, but I think worth pursuing. Gerhard "Maybe This Arrow Will Hit" Paseman, 2012.11.15 – Gerhard Paseman Nov 15 at 20:00
Source: http://mathoverflow.net/revisions/72052/list

## Return to Question
Post Made Community Wiki by S. Carnahan♦
1
# How to resolve a disagreement about a mathematical proof?
I am having a problem which should not exist. I am reading what I believe to be an important paper by a person - let me call him/her $A$ - whom I believe to be a serious and talented mathematician. A lemma in this paper is proven by means of an argument which, if correct, is a highly elegant piece of mental acrobatics in the spirit of Grothendieck, where a complicated situation is reduced to a simple one by embedding the objects of study in much larger (but ultimately better) object. Unfortunately, the beauty of this argument is - for me - marred by a doubt about its correctness. In my eyes, the argument rests upon a confusion of two objects which are not equal and should not be, but have the same name by force of an abuse of notation going awry. A dozen of emails exchanged with $A$ did not clear up the situation, and I start feeling that this is unlikely to improve; what is likely is that after a few more mails the correspondence will degenerate into a flamewar (as any prolonged arguments with my participation seem to do, for some reasons unknown). The fact that $A$ is not a native English speaker adds to the difficulty.
At this point, I can think of several ways to proceed:
• Let go. There is a number of reasons for me not to choose this option; first of all, I really want to know whether the proof of the lemma is correct or not (even though there seems to be a different proof in literature, although not of that beauty), but this has also become, for me, a matter of idealism and an exercise in tenacity (in its cheapest manifestation - it's not like writing emails is hard work...).
• Construct a counterexample. This is complicated by the fact that I am attacking the proof, not the theorem (which seems to be correct). Yet I think I have done so, and $A$ failed to really address the counterexample. But given the frequent misunderstandings between us (not least because of the language barrier) I am not sure whether $A$ has realized that I am talking counterexamples at all - and whether there is a way to tell this without switching to what will be probably understood as an aggressive tone.
• Request $A$ to break down the argument into simple steps, eschewing abuse of notations. This means, in the particular case I am talking about, requesting $A$ to write two pages in his/her free time and respond to some irritating criticism of these pages with the prospect of seeing them destroyed by a counterexample. I am not sure this counts as courteous. Besides, the paper is about 10 years old - most authors do not even bother answering questions on their work of such age.
• Go public (by asking on MO or similarly). This is something I really want to avoid as long as there is no other way. Neither criticizing $A$ as a person/scientist, nor devaluing the paper (which consists of far more than the lemma in question...) is among my goals; besides I cannot rule out as improbable that the error is on my side (and my experience shows that even in cases when I could rule this out, it still often was on my side).
• Have a break and return to the question in a month or so. I am expecting to hear this (seems to be a popular answer to lots of questions...) yet I am not sure how this can be of any use.
These ideas are all I could come up with and none of them sounds like a good plan. What am I missing? Is my problem a common one, and if yes, does it have a time-tested solution? Can it be answered on this general scale? Is it a real problem or an artefact of my perception?
PS. This is being posted anonymously in order to preserve genericity (of the author and, more importantly, of $A$).
Source: http://www.physicsforums.com/showthread.php?t=416301

Physics Forums
## Thermodynamic properties of ideal gases
Here are some general questions regarding my current reading. I am looking in my text at 2 equations for specific energy and specific enthalpy:
$u = u(T,v)\qquad(1)$
$h = h(T,p)\qquad(2)$
Question 1: Are not the properties fixed by any 2 independent properties? Why have we chosen to speak of u as u(T, v) in lieu of u(T, p), and the same for h? Is it more convenient to put them in these terms for some reason?
Now, if we put (1) and (2) in differential form, we have:
$$du = \left(\frac{\partial{u}}{\partial{T}}\right)_v dT + \left(\frac{\partial{u}}{\partial{v}}\right)_T dv\qquad(3)$$
$$dh = \left(\frac{\partial{h}}{\partial{T}}\right)_p dT + \left(\frac{\partial{h}}{\partial{p}}\right)_T dp\qquad(4)$$
Question 2:
It says that for an ideal gas:
$$\left(\frac{\partial{u}}{\partial{v}}\right)_T \text{ and }\left(\frac{\partial{h}}{\partial{p}}\right)_T$$
are equal to zero. Can someone clarify this? Is there some mathematical reasoning behind this? Or is this simply something that we have observed? Or both?
Question 3:
Going along with assumptions above (i.e., dh/dp = 0 and du/dv = 0) we can assert that for an ideal gas, specific energy and specific enthalpy are both functions of temperature alone, correct?
The thermodynamic properties of ideal gases were originally derived as the limiting case of results obtained from experiments with real gases. Later on, Boltzmann showed they could be computed from the statistical description of an ensemble of non-interacting, point-like particles.
Quote by Saladsamurai (original question quoted above)
Q1: The state is fixed by any two independent, intensive properties. As far as I know the choice is arbitrary (but I'm not 100% certain off the top of my head).
Q2: There is both a mathematical reason and it is something that can be observed.
Q3: Yes. In fact, that's what the math will show when you solve equations [3] and [4] above. I'm sure there should be a proof in your Thermo book. You'll probably see the more useful form of du = CvdT and dh = CpdT which also shows that for an Ideal Gas the internal energy and enthalpy are functions of temperature alone.
Hope this helps.
CS
## Thermodynamic properties of ideal gases
Quote by stewartcs (reply quoted above)
Q1: Right. I am thinking that the choices made here are more useful than others in some regard.
Q2: OK.
Q3: Right, however integrating (3) and (4) armed with the knowledge that dh/dp = 0 and du/dv = 0 is hardly a "proof" (and by the way, this is how my thermo book does it. Keep in mind this is "engineering thermodynamics.")
I was trying to come up with an intuitive way to show why dh/dp = 0 and du/dv = 0 (if one exists) for ideal gases. I was thinking along the lines that since an ideal gas is very compressible that when we change the pressure or specific volume "differentially" the effect is negligible.
Thanks again!
Casey
Quote by Saladsamurai Q3: Right, however integrating (3) and (4) armed with the knowledge that dh/dp = 0 and du/dv = 0 is hardly a "proof" (and by the way, this is how my thermo book does it. Keep in mind this is "engineering thermodynamics.")
Well that's surprising that they don't go into detail! I'll see if I can help. First we have the total differential for u = u(T,v):
$$du = \left(\frac{\partial{u}}{\partial{T}}\right)_v dT + \left(\frac{\partial{u}}{\partial{v}}\right)_T dv$$
Also using the fundamental differential: du = Tds - pdv and dividing by dv while holding T constant we get:
$$\left(\frac{\partial{u}}{\partial{v}}\right)_T = T \left(\frac{\partial{s}}{\partial{v}}\right)_T - p$$
This next part is the part you probably haven't gotten to yet in your book which you would have figured out for yourself I'm sure once you read the associated chapter:
Use one of Maxwell's relations:
$$\left(\frac{\partial{s}}{\partial{v}}\right)_T = \left(\frac{\partial{p}}{\partial{T}}\right)_v$$
Substituting that relation into the previous equation gives:
$$\left(\frac{\partial{u}}{\partial{v}}\right)_T = T \left(\frac{\partial{p}}{\partial{T}}\right)_v - p$$
Now for an Ideal Gas we have the equation of state: pv = RT
Differentiating while holding v constant gives:
$$\left(\frac{\partial{p}}{\partial{T}}\right)_v = \frac{R}{v}$$
Now substitute that into the previous equation gives:
$$\left(\frac{\partial{u}}{\partial{v}}\right)_T = T \frac{R}{v} - p$$
and since the Ideal Gas equation of state can be arranged as p = RT/v we have the proof you are looking for:
$$\left(\frac{\partial{u}}{\partial{v}}\right)_T = T \frac{R}{v} - p = p - p = 0$$
This shows that internal energy is not dependent on the specific volume (since it is 0 just above).
Now from the very first equation we are only left with the first term on the RHS (since the second term was just shown to be zero):
$$du = \left(\frac{\partial{u}}{\partial{T}}\right)_v dT$$
Which confirms your intuition that the internal energy of an Ideal Gas is dependent on temperature alone.
Note that the coefficient in the last equation is defined as Cv (i.e. the specific heat capacity at constant volume). Or as I said before, the more familiar form is du = CvdT.
Hope that helps.
CS
BTW, the same can be shown true for the enthalpy (I was just too lazy to type anymore)! CS
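Both identities can be sanity-checked numerically. The sketch below (my own check, not from the thread) substitutes the ideal-gas equation of state into $T(\partial p/\partial T)_v - p$ and into the analogous enthalpy relation $(\partial h/\partial p)_T = v - T(\partial v/\partial T)_p$, using central differences for the partial derivatives; the state point and constants are arbitrary choices:

```python
R = 8.314                    # gas constant in J/(mol K); any positive value works here
h = 1e-6                     # step size for central differences

def p(T, v): return R * T / v      # ideal-gas equation of state, p = RT/v
def v_of(T, P): return R * T / P   # the same equation solved for v

T0, v0, P0 = 300.0, 0.1, 101325.0  # an arbitrary state point

# (du/dv)_T = T (dp/dT)_v - p should vanish for an ideal gas
dp_dT = (p(T0 + h, v0) - p(T0 - h, v0)) / (2 * h)
du_dv = T0 * dp_dT - p(T0, v0)

# (dh/dp)_T = v - T (dv/dT)_p should also vanish
dv_dT = (v_of(T0 + h, P0) - v_of(T0 - h, P0)) / (2 * h)
dh_dp = v_of(T0, P0) - T0 * dv_dT

print(du_dv, dh_dp)          # both are ~0 up to floating-point error
```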
Quote by stewartcs BTW, the same can be shown true for the enthalpy (I was just too lazy to type anymore)! CS
No! This is more than enough! I will go through this in a little bit and make sure I understand it. And then I will return and post the solution to the enthalpy part now that I know where the starting point is.
Thanks for all of your time CS!
~Casey
You might like to review this thread. http://www.physicsforums.com/showthr...light=enthalpy
Source: http://mathhelpforum.com/discrete-math/94157-genetic-algorithm-print.html

# Genetic Algorithm
Printable View
• July 1st 2009, 10:14 AM
Sampras
Genetic Algorithm
Find the value of $x$ that maximizes $\sin^{4}(x)$, $0 \leq x \leq \pi$ to an accuracy of at least one part in a million. Use a population size of fifty and a mutation rate of $1/(\text{twice the length of string})$.
So randomly select a population of 50 binary strings of length 8. Decode them into base 10. Evaluate their fitness levels (i.e. $\sin^{4}(x)$). Now exclude $25$ of the strings with the lowest fitness levels. Use crossover between random pairs of strings to get 25 "child strings." Now use a mutation rate of $1/16$ on this new population of strings? Because you don't want a population of strings with end digit 0. This will cause domination.
Is this generally correct? How would you decide the length of the strings?
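The procedure described above can be sketched in code. This is only a sketch under stated assumptions: the 20-bit string length (the smallest L with 2^L > 10^6, which is one way to answer the string-length question for one-part-in-a-million accuracy), one-point crossover, and elitist survival of the top 25 are my choices, not part of the original exercise.

```python
import math
import random

random.seed(0)

L = 20                     # hypothetical string length: 2**20 > 10**6 gives the required resolution
POP, GENS = 50, 200
MUT = 1 / (2 * L)          # mutation rate = 1/(twice the length of string)

def decode(bits):          # map a bit string to a point x in [0, pi]
    return int(bits, 2) / (2**L - 1) * math.pi

def fitness(bits):
    return math.sin(decode(bits)) ** 4

pop = [''.join(random.choice('01') for _ in range(L)) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]             # drop the 25 least-fit strings
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, L)       # one-point crossover
        child = a[:cut] + b[cut:]
        child = ''.join(c if random.random() > MUT else '10'[int(c)]
                        for c in child)    # flip each bit with probability MUT
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print(decode(best))        # close to pi/2, where sin^4(x) attains its maximum of 1
```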
• July 2nd 2009, 09:28 AM
Sampras
The maximum value of $\sin x$ is $1$. So the maximum value of $\sin^{4} x$ is $1$. So maybe use a string length of $3$? Because $111 = 7$ in base 10.
Source: http://mathhelpforum.com/advanced-math-topics/170620-linear-programming-problems-find-corresponding-values-slack-variables-print.html

# Linear Programming Problems? Find the corresponding values of the slack variables.
Printable View
• February 8th 2011, 06:39 PM
tn11631
Linear Programming Problems? Find the corresponding values of the slack variables.
So I'm not sure where this question would fit within the categories given, but it is a linear programming problem and it is as follows:
Consider the linear programming problem
Maximize z=2x+5y
subject to
$2x+3y \leq 10$
$5x+y \leq 12$
$x+5y \leq 15$
$x \geq 0$, $y \geq 0$
Part (a): verify that $x=(1,2)^{T}$ (this is a vector) is a feasible solution. For part (a) I had no problems verifying it but I need it for part (b)
Part (b) For the feasible solution in a, find the corresponding values of the slack variables.
I started setting it up in canonical form but there are three constraints but two items in the vector given in (a) so i'm stuck. I had started adding in the slack variables as in: 2x+3y+u=10 etc.. but I don't know how to do this one when there are three constraints and two items in the vector.
• February 8th 2011, 07:30 PM
CaptainBlack
Quote:
Originally Posted by tn11631 (original question quoted in full)
You have one slack variable for each constraint; write it in terms of x and y (rearrange the constraint equation with the slack variable on the left and everything else on the right). Now substitute x=1, y=2 into each of these equations...
CB
• February 8th 2011, 07:32 PM
CaptainBlack
Quote:
Originally Posted by tn11631 (original question quoted in full)
You have one equation for each inequality constraint each with a different slack variable. Solve for the slack variable in each and put x=1, y=2 to get the value of the slack variable corresponding to each constraint.
CB
• February 8th 2011, 07:38 PM
tn11631
Quote:
Originally Posted by CaptainBlack
You have one equation for each inequality constraint each with a different slack variable. Solve for the slack variable in each and put x=1, y=2 to get the value of the slack variable corresponding to each constraint.
CB
That's what I was doing on my paper but then I got confused because I had u=10-2x-3y and v=12-5x-y but then what about $x+5y \leq 15$? I don't know what to do there because there are only two variables x and y so there should be two slack variables u and v, but then I'm still left with $x+5y \leq 15$. Thanks!
• February 8th 2011, 08:10 PM
CaptainBlack
Quote:
Originally Posted by tn11631 (previous post quoted)
$w=15-x-5y$
so at the given point $w=15-1-10=4$
CB
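The whole computation fits in a few lines; a quick sketch, using the slack names u, v, w as in the thread:

```python
# constraints of the LP above written as A x <= b
A = [(2, 3), (5, 1), (1, 5)]
b = [10, 12, 15]
x, y = 1, 2                          # the feasible point from part (a)

slacks = [bi - (a1 * x + a2 * y) for (a1, a2), bi in zip(A, b)]
print(slacks)                        # [2, 5, 4]  ->  u = 2, v = 5, w = 4
assert all(s >= 0 for s in slacks)   # nonnegative slacks confirm feasibility
```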
• February 8th 2011, 08:14 PM
tn11631
Ohh wow don't I feel dumb for missing that :) maybe too much math for one night. Thanks so much!
Source: http://mathoverflow.net/questions/65951?sort=oldest

## Solvable PDE Problems [closed]
I'm currently in a PDE course where one of the requirements is to find a common PDE problem and explain how to solve it.
The problems found easily on Google won't help me, since every student has to find a different PDE problem and those have all been chosen by other students.
So please answer to this question if you have any suggestions.
( Excuse me if this isn't the right place to ask this question. )
-
It would help if you could note some of the PDE problems that have already been chosen by other students (if possible). However, I would imagine that if you added some useful insight of your own regarding how to solve a given PDE, then it might not matter so much if another student has already selected it. For example, there can be several ways in which you can motivate the selection of a given solution to a fixed PDE. – Amitesh Datta May 25 2011 at 11:50
1
You might try asking this on math.stackexchange.com, where I think it will receive a warmer reception. – Harry Gindi May 25 2011 at 12:10
## 1 Answer
The PDE that I shall suggest is quite common and therefore it is likely that it has already been selected by another student. However, the analysis of this PDE is vast and very interesting.
The motivation is as follows: let $D$ be the unit disk in the plane (i.e., $\{x\in \mathbb{R}^2: \left|x\right|\leq 1\}$) and let $f$ be a continuous function defined on the boundary of $D$. We wish to find a harmonic function $u$ defined in the interior of $D$ (i.e., $\{x\in\mathbb{R}^2:\left|x\right|<1\}$) whose boundary values are $f$; i.e., $u$ is a continuous function required to satisfy the Laplace equation $u_{xx}+u_{yy}=0$ and the function $F$ defined on $D$ by the rule $F(x)=u(x)$ if $\left|x\right|<1$ and $F(x)=f(x)$ if $\left|x\right|=1$ is continuous. This is called the Dirichlet problem in the unit disk.
Similarly, let $1\leq p<\infty$ and let $f\in L^p(\mathbb{R})$. We wish to find a harmonic function $u$ defined in the upper half plane such that $u(x,0)=f(x)$ almost everywhere on the real line. This is called the Dirichlet problem in the upper half plane.
There exist approaches to both problems that use general measure theory in a particularly enlightening manner. I will briefly sketch the solutions; if you wish to see a more comprehensive treatment, you may look at Walter Rudin's Real and Complex Analysis (2nd. edition), chapter 11, and Loukas Grafakos' Classical Fourier Analysis, chapter 2, pages 84-87.
Solution to Dirichlet's problem in the unit disk: the general approach is to define $u$ as the Poisson integral of $f$. More precisely, we define $u(re^{i\theta})=\frac{1}{2\pi} \int_{-\pi}^{\pi} P_r(\theta - t)f(t) dt$ for $0\leq r < 1$, where $P_r(t)= \frac{1-r^2}{1-2r\cos(t)+r^2}$ is the Poisson kernel.
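As a concrete check (my own example, not from the answer): take boundary data $f(t)=\cos t$, whose harmonic extension to the disk is known in closed form to be $u(re^{i\theta})=r\cos\theta$. The Poisson integral, discretized by a uniform periodic sum, reproduces it:

```python
import math

def poisson_u(r, theta, f, n=4096):
    """Approximate (1/2pi) * integral of P_r(theta - t) f(t) dt over [-pi, pi]."""
    total = 0.0
    for k in range(n):
        t = -math.pi + 2 * math.pi * k / n
        P = (1 - r * r) / (1 - 2 * r * math.cos(theta - t) + r * r)  # Poisson kernel
        total += P * f(t)
    return total / n     # uniform sum of a smooth periodic integrand; very accurate

u = poisson_u(0.5, 0.3, math.cos)
print(u, 0.5 * math.cos(0.3))    # the two values agree to high precision
```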
Solution to Dirichlet's problem in the upper half plane: the general approach is to first define the Poisson kernel $P_t(x)=c\frac{t}{x^2+t^2}$ (for $t>0$, $x\in\mathbb{R}$, and $c=\frac{1}{\pi}$) and then define $u(x,t)=(P_t * f)(x)$; the convolution of $P_t$ and $f$ on the real line. Since $\{P_t\}_{t>0}$ is an approximate identity on $\mathbb{R}$, it follows that $u(x,t)$ converges to $f(x)$ in $L^p$ as $t\to 0$. In fact, this convergence is a.e. (the proof is non-trivial; one approach is to use maximal functions) and this implies that we have solved the Dirichlet problem in the upper half plane.
I hope that I have helped and I apologize for the somewhat sketchy proofs! I have certainly noted some non-trivial facts and I recommend you to look at Rudin and Grafakos for the details. Of course, I should add that the solutions that I have presented will be much more meaningful if you are familiar with measure theory and elementary complex analysis.
Source: http://mathoverflow.net/revisions/44617/list

## Return to Answer
3 added 146 characters in body; added 20 characters in body
If you restrict to the case where $R \in \{ \pm1 \}^l$ you can encode an arbitrary function $f\colon \{\pm1\}^l\to \pm1$ with appropriate choice of the $V_i$ by augmenting your problem to instead return $1$ if $R = V_i$ for some $i$ and $-1$ otherwise. If you have an algorithm to solve your original problem then you can solve the augmented problem without adding much by first finding the closest $V_i$ and then checking if $V_i=R$. Since the augmented problem can encode any function it will in general have exponential circuit complexity, and therefore the original problem will also have exponential circuit complexity (and therefore has no subexponential "algorithm").
2 deleted 1 characters in body; edited body
If you restrict to the case where $R \in \{ \pm1 \}^l$ you can encode an arbitrary function $f\colon \{\pm1\}^l\to \pm1$ with appropriate choice of the $V_i$ by augmenting your problem to instead return $1$ if $R = V_i$ for some $i$ and $-1$ otherwise. So by a counting argument your problem will in general have exponential circuit complexity and therefore would require an exponential time "algorithm", although your question is more amenable to circuits than algorithms since there's non-uniformity in $l$.
1
If you restrict to the case where $R \in \{ \pm1 \}^n$ you can encode an arbitrary function $f : \{\pm1\}^n \rightarrow \{\pm1\}$ with appropriate choice of the $V_i$ by augmenting your problem to instead return $1$ if $R = V_i$ for some $i$ and $-1$ otherwise. So by a counting argument your problem will in general have exponential circuit complexity and therefore would require an exponential time "algorithm", although your question is more amenable to circuits than algorithms since there's non-uniformity in $n$.
Source: http://mathhelpforum.com/discrete-math/194754-help-needed-combination-problem.html

# Thread:
1. ## Help needed on combination problem
Prove that the total number of selections that can be made out of the letters "DADDY DID A DEADLY DEED" is 1919.
does anyone have the solution to this??
-thank you
2. ## Re: Help needed on combination problem
Originally Posted by blackhat98
Prove that the total number of selections that can be made out of the letters "DADDY DID A DEADLY DEED" is 1919.
does anyone have the solution to this??
Please tell us what is meant by a selection?
POST SCRIPT:
I made a guess as to what this question means.
If one expands $\left( {1 + x} \right)^2 \left( {1 + x + x^2 } \right)\left( {\sum\limits_{k = 0}^3 {x^k } } \right)^2 \left( {\sum\limits_{k = 0}^9 {x^k } } \right)$
we get a polynomial of degree 19.
Each coefficient in the expansion tells the number of possible selections.
For example: the term in the expansion $191x^9$ tells us that there are 191 ways to select nine items from that list.
If we sum all the coefficients we get $1920$.
But that includes the empty selection.
Remove it and we are left with $1919.$
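The same total can be double-checked directly: summing all coefficients of Plato's product amounts to multiplying (multiplicity + 1) over the distinct letters, since each letter can be taken 0 up to its multiplicity times. A quick sketch:

```python
from collections import Counter
from math import prod

counts = Counter("DADDY DID A DEADLY DEED".replace(" ", ""))
print(dict(counts))      # {'D': 9, 'A': 3, 'Y': 2, 'I': 1, 'E': 3, 'L': 1}

selections = prod(c + 1 for c in counts.values()) - 1   # subtract the empty selection
print(selections)        # 1919
```

The letter multiplicities 9, 3, 3, 2, 1, 1 are exactly the degrees of the factors in the generating function above.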
3. ## Re: Help needed on combination problem

Nice method!
4. ## Re: Help needed on combination problem
Originally Posted by Plato
Please tell us what is meant by a selection?
POST SCRIPT:
I made a guess as to what this question means.
If one expands $\left( {1 + x} \right)^2 \left( {1 + x + x^2 } \right)\left( {\sum\limits_{k = 0}^3 {x^k } } \right)^2 \left( {\sum\limits_{k = 0}^9 {x^k } } \right)$
we get a polynomial of degree 19.
Each coefficient in the expansion tells the number of possible selections.
For example: the term in the expansion $191x^9$ tells us that there are 191 ways to select nine items from that list.
If we sum all the coefficients we get $1920$.
But that includes the empty selection.
Remove it and there are $1919$.
thank you Plato....
http://math.stackexchange.com/questions/145552/extending-a-k-lipschitz-function

# Extending a $k$-lipschitz function
Given $f: Y\subset\mathbb{R}\to \mathbb{R}$ a $k$-lipschitz function, (i.e $|f(x)-f(y)|\leq k|x-y|$ for any $x,y\in Y$) I need to prove the existence of a $k$-lipschitz function $g:\mathbb{R}\to \mathbb{R}$ such that $g|_Y=f$.
My answer when $f$ is bounded is to consider $$g(x)=\inf_{y\in Y}\{f(y)+k|x-y|\}.$$

Is it correct? How do you find $g$ when $f$ is not bounded?
Is there any information about what kind of subset $Y\subseteq \mathbb{R}$ is? Open, connected, etc.? – Zev Chonoles♦ May 15 '12 at 18:09
@ZevChonoles $Y\subset \mathbb{R}$ can be any subset. – Gastón Burrull May 15 '12 at 18:11
Your definition of $g$ works, even if $f$ is unbounded. Notice that the sum $f(y)+k|x-y|$ is bounded from below for any fixed $x$ (use the Lipschitz property of $f$). – user31373 May 15 '12 at 18:11
@LeonidKovalev I don't think so, because $f(y)$ is not bounded. Can you explain why it is bounded? I didn't understand that well. – Gastón Burrull May 15 '12 at 18:16
@LeonidKovalev My definition of $g$ does not always work because $f$ does not necessarily have a lower bound. – Gastón Burrull May 15 '12 at 18:29
## 1 Answer
An alternate explicit construction:
First, you can continuously extend to the closure $\bar{Y}$ using the Lipschitz condition.
Then, since $\bar{Y}$ is closed, for every $x\in \mathbb{R}\setminus \bar{Y}$ one can find $x_- = \max \bar{Y}\cap \{ y < x\}$ and $x_+ = \min \bar{Y}\cap \{y > x\}$. Then just linearly interpolate: $$g(x) = f(x_-) + \frac{f(x_+) - f(x_-)}{x_+ - x_-} (x - x_-)$$
But let me explain Leonid Kovalev's comment. Notice that fixing some arbitrary $x' \in Y$, we have that for any $x\in\mathbb{R}$ now chosen to be fixed
$$f(y) - f(x') + k|x-y| \geq f(y) - f(x') + k|x' - y| - k|x-x'|$$
from triangle inequality. But using the $k$ lipschitz property you have that
$$f(y) - f(x') + k|x' - y| \geq 0$$
so the expression
$$f(y) - f(x') + k|x-y| \geq -k|x-x'|$$
where the right hand side is independent of $y$. Or, in other words
$$f(y) + k|x-y| \geq f(x') - k|x-x'|$$
so the expression you want to take the infimum of (in $y\in Y$) is bounded from below by some constant, and hence the infimum exists.
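A quick numerical sanity check of the infimum formula $g(x)=\inf_{y\in Y}\{f(y)+k|x-y|\}$, on a hypothetical finite set $Y$ with $f(y)=|y|$ and $k=1$ (any $k$-Lipschitz data would do, and over a finite set the infimum is a minimum):

```python
# g(x) = min_{y in Y} (f(y) + k|x - y|) extends f and stays k-Lipschitz.

k = 1.0
Y = [-2.0, -1.0, 0.5, 3.0]
f = {y: abs(y) for y in Y}   # f(y) = |y| is 1-Lipschitz on Y

def g(x):
    return min(f[y] + k * abs(x - y) for y in Y)

# g extends f: on Y the minimum is attained at y = x itself,
# since f(y') + k|x - y'| >= f(x) by the Lipschitz property.
for y in Y:
    assert g(y) == f[y]

# g is k-Lipschitz on a grid of test points (min of k-Lipschitz functions).
pts = [i / 10.0 for i in range(-50, 51)]
for a in pts:
    for b in pts:
        assert abs(g(a) - g(b)) <= k * abs(a - b) + 1e-12
print("g agrees with f on Y and is k-Lipschitz on the grid")
```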
Can be simplified: $f(y)+k|x-y|\ge f(x)$, so $f(x)$ is the lower bound for the $\inf$ in the definition of $g$. – user31373 May 15 '12 at 19:36
@Leonid: $f$ is not defined at $x$ by assumption. In fact, we are trying to define an extension of $f$ to $x$, no? – Willie Wong♦ May 15 '12 at 19:42
@WillieWong Ty, both answers were very well explained!! – Gastón Burrull May 15 '12 at 19:47
Of course, that was a stupid comment on my part. – user31373 May 15 '12 at 19:51
@Gaston: the two numbers are the same. As $x\in \mathbb{R}\setminus \bar{Y}$ which is open, there exists some $\epsilon$ such that $\bar{Y}\cap \{y\leq x\} = \bar{Y}\cap\{y < x\} = \bar{Y} \cap \{y \leq x - \epsilon\}$. – Willie Wong♦ May 16 '12 at 6:14
http://mathhelpforum.com/differential-geometry/119779-directional-derivative-help-needed.html

# Thread:
1. ## Directional Derivative Help Needed!
Let f: $R^2 \rightarrow R$ be given by

$$f(x,y) := \begin{cases} \frac{xy^2}{x^2+y^4} & \text{if } (x,y) \neq (0,0) \\ 0 & \text{otherwise} \end{cases}$$
(a) Show that $d_u f(0,0)$ exists for all the vectors $u \in R^2$ and that $d_u f(0,0)$ = $\frac{b^2}{a}$ if u = (a,b) with a $\neq$ 0.
(b) Show that f is not differentiable at (0,0)
2. (1) What is the definition of directional derivative? Make sure you understand the definition!

If $u=(a,b)$, then $(x,y)=(at,bt)$ tends to $(0,0)$ as $t$ tends to $0$; what is the limit then?

(2) If $(x,y)$ tends to $(0,0)$ along the path $x=ky^2$, then what can you say about the limit?
3. So... I got that the limit as h $\rightarrow$ 0 of $\frac{f((0,0)+hu) - f(0,0)}{h} = 0$ meaning $d_uf(0,0)$ exists.
for u = (a,b),
it would be $\frac{(at)(bt)^2}{(at)^2+(bt)^4}$ and take that as t $\rightarrow$ 0, but I'm not sure how to do that...
I don't know what you mean by the path...
4. $\frac{(at)(bt)^2}{(at)^2+(bt)^4}\to 0$ as $t\to 0$, provided that $a \neq 0$.
If $(x,y)$ tends to $(0,0)$ along the path $x=ky^2$, then

$\frac{xy^2}{x^2+y^4}=\frac{ky^4}{k^2y^4+y^4}=\frac{k}{k^2+1} \to \frac{k}{k^2+1}$ as $y\to 0$.

The limit depends on the value of $k$, so the function is not continuous at $(0,0)$.
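Both claims can be checked numerically with a small Python sketch (the values of $a$, $b$, $k$ below are arbitrary choices). Note that $f(at,bt)\to 0$ only shows continuity along straight lines; the directional derivative in part (a) is $\lim_{h\to 0} f(ha,hb)/h$, which comes out to $b^2/a$, exactly as the problem statement says:

```python
# Part (a): the difference quotient f(ha, hb)/h tends to b^2/a for a != 0.
# Part (b): along the parabolas x = k*y^2 the value of f is constantly
# k/(k^2 + 1), so f has no limit at (0,0) and cannot be differentiable there.

def f(x, y):
    return x * y**2 / (x**2 + y**4) if (x, y) != (0.0, 0.0) else 0.0

a, b = 1.0, 2.0
for h in [1e-2, 1e-3, 1e-4]:
    print(h, f(h * a, h * b) / h)   # approaches b^2/a = 4

for k in [1.0, 2.0, -3.0]:
    y = 1e-4
    assert abs(f(k * y * y, y) - k / (k * k + 1)) < 1e-12
print("along x = k*y^2 the value is exactly k/(k^2+1)")
```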
5. for the second part, if f was $\frac{xy^2}{x^2+y^2}$ (still 0, if (x,y) = (0,0)), what would I set x equal to? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9287252426147461, "perplexity_flag": "middle"} |
http://mathhelpforum.com/calculus/206645-finding-critical-points-extrema-local-absolute-trends-open-intervals-print.html

# Finding Critical Points & Extrema (Local & Absolute), & Trends on Open Intervals
• November 2nd 2012, 04:37 PM
SwatchAndGo
Finding Critical Points & Extrema (Local & Absolute), & Trends on Open Intervals
Hi.
First post on the forum, but regardless of which, need some assistance.
I have the following equation: $f(x)=\frac{x}{x+3}$
My purpose is, to quote the text:
Quote:
Find the critical numbers of f (if any). Find the open intervals on which the function is increasing or decreasing and locate all relative extrema. Use a graphing utility to confirm your results.
I know I must take the derivative of the equation to find the cp's, however, I don't know exactly how to do it "the shortcut way" (where the fraction is made into a negative exponential power).
When I use the quotient rule, I get the following: $f'(x)=\frac{3}{(x+3)^2}$
After that, I'm not sure how to factor the equation to get the cp's. I know that's done by setting it equal to zero; factoring is where I'm (Headbang).
• November 2nd 2012, 04:39 PM
Prove It
Re: Finding Critical Points & Extrema (Local & Absolute), & Trends on Open Intervals
Quote:
Originally Posted by SwatchAndGo
Hi.
First post on the forum, but regardless of which, need some assistance.
I have the following equation: $f(x)=\frac{x}{x+3}$
My purpose is, to quote the text:
I know I must take the derivative of the equation to find the cp's, however, I don't know exactly how to do it "the shortcut way" (where the fraction is made into a negative exponential power).
When I use the quotient rule, I get the following: $f'(x)=\frac{3}{(x+3)^2}$
After that, I'm not sure how to factor the equation to get the cp's. I know that's done by setting it equal to zero; factoring is where I'm (Headbang).
The equation $\displaystyle \begin{align*} \frac{3}{(x + 3)^2} = 0 \end{align*}$ does not have any solutions, so there are not any critical points.
• November 2nd 2012, 05:51 PM
SwatchAndGo
Re: Finding Critical Points & Extrema (Local & Absolute), & Trends on Open Intervals
Alright. How did you get to that conclusion? Or should that be just assumed at this point?
Also, I'm working on the next problem on the set that was assigned to me:
$f(x)=\frac{x+4}{x^2}$ => Quotient Rule => $f'(x)=\frac{(x^2)(1)-(x+4)(2x)}{(x^2)^2}$ => Simplify => $f'(x)=\frac{x^2-2x^2-8x}{x^4}$
So far, I came up with this as its derivative (using the quotient rule):
$f'(x)=\frac{-x(x+8)}{x^4}$
Not exactly sure how to solve for zero (if there are any solutions) (Thinking)
• November 2nd 2012, 06:15 PM
MarkFL
Re: Finding Critical Points & Extrema (Local & Absolute), & Trends on Open Intervals
In your first problem, you correctly differentiated to obtain:
$f'(x)=\frac{3}{(x+3)^2}$
Now, the only way for this to equal zero is when the numerator is zero, but it is a non-zero constant, so this can never happen for any $x$.
You do have a critical value from the denominator of $x=-3$, but since this is a root of even multiplicity the sign of the derivative will not change across this point. So we know the function is increasing on:
$(-\infty,-3)\,\cup\,(-3,\infty)$
We know the original function has a vertical asymptote at $x=-3$ and a horizontal asymptote at $y=1$.
For your second problem, you may simplify your derivative by dividing out the common factor of $x$, i.e.:
$f'(x)=-\frac{x+8}{x^3}$
Now, equate the numerator to zero to find stationary point(s) and equate the denominator to zero to find point(s) where the sign of the derivative may change. Since the denominator has a root of odd multiplicity we can expect the derivative will change sign across this point. Once you have identified your critical numbers, divide the number line at these points and test the resulting intervals.
edit: Can you identify the asymptotes of the original function?
• November 2nd 2012, 06:41 PM
SwatchAndGo
Re: Finding Critical Points & Extrema (Local & Absolute), & Trends on Open Intervals
Ok, so even though $f(x)=\frac{x}{x+3}$ has a vertical asymptote at $-3$, the function's slope is still positive across the asymptote.
V.A. @ x = 0, H.A. @ y = 4? (For the original equation)
I didn't understand where you got $-\frac{x+8}{x^3}$ from until I understood you canceled out the x: $\frac{-x(x+8)}{x^4}$. But don't the top terms $x$ and $8$ tie to each other through addition and therefore you can't factor out x from one without the other?
I'd get something along these lines: $-\frac{1}{x^2}+\frac{8}{x^3}$, which doesn't exactly make it obvious where to go from there.
What do stationary points refer to?
There's basically only one cp that can be identified, which is at 0, a V.A.
• November 2nd 2012, 06:56 PM
MarkFL
Re: Finding Critical Points & Extrema (Local & Absolute), & Trends on Open Intervals
No, the slope isn't positive across $x=-3$, but it is positive on either side. That's why I gave two intervals, to exclude $x=-3$.
For the second problem, the vertical asymptote is $x=0$, but the horizontal asymptote is not $y=4$. The degree in the denominator is greater than the degree in the numerator, so the horizontal asymptote is $y=0$. You may confirm this by considering:
$\lim_{x\to\pm\infty}\frac{x+4}{x^2}=\lim_{x\to\pm \infty}\left(\frac{1}{x}+\frac{4}{x^2} \right)=0$
In the second problem, when you go to simplify the derivative, since $x$ is a factor in both the numerator and denominator, we can divide it out.
Stationary points refer to where the derivative is equal to zero. If the derivative changes sign across this point, then it is a relative extremum.
There is another critical value, which you can find by equating the numerator to zero:
$x+8=0$
$x=-8$
So test the sign of the derivative on the intervals $(-\infty,-8),\,(-8,0),\,(0,\infty)$ to find where the original function is increasing/decreasing.
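The suggested sign test can be sketched in Python (the test points $-9$, $-4$, $1$ are an arbitrary choice, one per interval):

```python
# Sign of f'(x) = -(x + 8)/x^3 on the three intervals, plus the value of the
# relative extremum of f(x) = (x + 4)/x^2 at the stationary point x = -8.

def fprime(x):
    return -(x + 8) / x**3

for label, x in [("(-inf, -8)", -9.0), ("(-8, 0)", -4.0), ("(0, inf)", 1.0)]:
    print(label, "increasing" if fprime(x) > 0 else "decreasing")

# f' changes from - to + across x = -8, so f has a relative minimum there.
def f(x):
    return (x + 4) / x**2

print(f(-8))   # -4/64 = -0.0625
```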
http://physics.stackexchange.com/questions/30262/does-this-relation-about-direction-of-particles-make-sense/30293

# Does this relation about direction of particles make sense?
Maybe I've just stared at this statement too long and I've missed something obvious. Nevertheless, here's the problem: Landau-Lifshitz vol. 1, §16, problem 1.
Consider (classical) collision of two particles in center of mass coordinates. Before the collision, the particles are just traveling towards each other and in CM coordinates then the direction of the velocities of two particles should be opposite to each other, i.e. $$\theta_1 = \theta_2 + \pi ,$$ where $\theta_i$ is the angle between the velocity of particle $i$ and the coordinate axis.
However, in the solution in Landau-Lifshitz, they state that "In the C system, the corresponding angles are related by $\theta_1 = \pi - \theta_2$."
Is there a mistake in L-L? If not, can you explain me the relation above?
$\theta_1=\theta_2+\pi$ is correct in CM for any two-particle reaction. However, since you did not restate the problem in your question, it's hard to assess the intent of $\theta_1=\pi-\theta_2$. – Terry Bollinger Jun 18 '12 at 1:33
## 1 Answer
The difference comes from the picture--- the $\theta_1$ and $\theta_2$ in the original statement are both relative to the positive x-axis, while in the solution $\theta_2$ is the final angle relative to the initial velocity of the corresponding particle, so if the velocity is along the x-axis, $\theta_1$ is the angle relative to the x-axis, and $\theta_2$ is relative to the minus x axis.
http://motls.blogspot.com/2012/07/lhc-higgs-does-it-deviate-from-standard.html?m=0

# The Reference Frame
## Saturday, July 07, 2012
### LHC Higgs: does it deviate from the Standard Model too much?
A textbook pedagogic exercise on chi-squared distributions
On the 2012 Independence Day, the world of physics has made a transition. The question "Is there a Higgs-like particle with mass near $126\GeV$?" has been answered.
Weinberg's toilet, i.e. Glashow's model of the SM Higgs boson. Explanations below.
The ATLAS Collaboration gave a 5-sigma "Yes" answer; via the rules of the normal distribution, this translates to the risk "1 in a million" that the finding is noise i.e. that it is a false positive.
Even CMS, despite its inability to use the correct Comic Sans fonts, has been able to do the same thing: 5 sigma discovery, "1 in a million" certainty. If you combine the two independent experiments, the risk of a false positive multiplies: the LHC says that there is a 7-sigma (via the Pythagorean theorem), or "1 in a trillion" (via a product), risk of a false positive. Compare it with 1-in-10 or 1-in-3 or 9-in-10 risk of a false positive that is tolerable in the climate "science" or various medical "sciences".
Once this question is answered, the new question is: What is the right theory that describes the precise behavior of the new particle?
There is one popular candidate: the so-called Standard Model. Steven Weinberg gave it its name and he also constructed much of it. Besides old and well-known things, it also includes one Higgs doublet which gives rise to one physical Higgs polarization.
Various trained and active physicists have worshiped the God particle by rituals that only physicists have mastered so well. Stephen Wolfram called it a hack and this word was actually used by some currently active physicists, too. High school and Nobel trip pal of Weinberg, Shelly Glashow, called it the Weinberg toilet. It's something your apartment needs but you're not necessarily too proud about it.
The Standard Model is popular because it's minimal in a certain sense that is easily comprehensible for humans: it contains a minimum number of such toilets. Whether it's so great for Nature or for a landlord or for a crowded aircraft serving strange combinations of food for lunch to have the minimum possible (consistent) number of toilets remains to be seen; minimizing the number or the sophistication of toilets actually isn't necessarily Nature's #1 priority. Now, however, we may already give an answer to the question:
How much are the 2011-2012 LHC data compatible with the minimum solution, the Standard Model?
To answer the question, we must look at somewhat more detailed properties of the new particle and apply a statistical procedure. As an undergraduate freshman, I didn't consider fancy statistics to be a full-fledged part of physics. Can't you measure things accurately and leave the discussion of errors to the sloppy people who make errors? However, the good enough experimenters in Prague forced me to learn some basic statistics and I learned much more of it later. Statistics is needed because even physicists who love accuracy and precision inevitably get results that are incomplete or inaccurate for various reasons.
The text will be getting increasing technical as we continue. Please feel free to stop and escape this blog with a horrified face. Of course, I will try to make it as comprehensible as possible.
Collecting the data
We could consider very detailed properties of the new particle but instead, we will only look at five "channels" – five possible recipes for final products into which the Higgs particle may decay (not caring about some detailed properties of the final products except their belonging to one of the five types). We will simply count how many times the LHC has seen the final products of one of the five types that look just like if they come from the Higgs.
Karel Gott (*1939 Pilsen), a perennial star of the Czech pop-music, "Forever Young" (in German; Czech). The video is from Lu-CERN, Switzerland.
In each of the five groups, some of them really come from the Higgs; others come from processes without any Higgs that are just able to fully emulate the Higgs. The latter collisions – called the "background" – are unavoidable and inseparable and they make the Higgs search or any other search in particle physics harder.
The LHC has recorded over a quadrillion (1,000 trillions or million billions) collisions. We will only look at the relevant ones. For each of the five groups below, the Standard Model predicts a certain number, plus minus a certain error, and the LHC (ATLAS plus CMS which we will combine, erasing the artificial separation of the collisions into two groups) has measured another number. The five groups are these processes:\[
\eq{
pp&\to h\to b\bar b \\
pp&\to h\to \gamma\gamma \\
pp&\to h\to \tau^+\tau^- \\
pp&\to h\to W^+ W^- \\
pp&\to h\to ZZ
}
\] You see that in all of them, two protons collide and create a Higgs particle, among other things. This particle quickly decays – either to a bottom quark-antiquark pair; or two photons; or two tau-leptons; or two W-bosons; or two Z-bosons.
Now open the freshly updated Phil Gibbs' viXra Higgs combo java applet (TRF) and play with the "Channel" buttons only, so that you don't damage this sensitive device. ;-)
If you're brave enough, switch the button from "Official" to "Custom" and make five readings. In each of them, you just check one of the boxes corresponding to the five channels above; $Z\gamma$ yields no data and we omitted this "sixth channel". Then you look at the chart you get at the top.
Oh, I forgot, you must also switch "Plot Type" (the very first option) from "Exclusion" to "Signal" to make things clear. The graphs then show you whether you're close to the red, no-Higgs prediction or the green, yes-Higgs prediction.
In each of the five exercises, you move your mouse above the Higgs mass $126\GeV$ or, if not possible, $125\GeV$ which is close to the $126\GeV$ that the LHC currently predicts (the average of ATLAS and CMS). A small rectangle shows up and you read the last entry, "xsigm". It tells you how much the combined LHC data agree with the yes-Higgs prediction of the Standard Model. "xsigm=0" means a perfect agreement, "xsigm=-1" means that the measured number of events is one standard deviation below the predictions, and so on. You got the point. Of course, you could read "xsigm" from the graph itself.
For the five processes, you get the following values of "xsigm":\[
\begin{array}{r|l}
\text{Decay products}& \text{xsigm}\\
\hline
b\bar b & +0.35 \\
\gamma\gamma& +2.52 \\
\tau^+\tau^-& -1.96 \\
W^+ W^- & -1.75 \\
Z^0 Z^0 & +0.32
\end{array}
\] I added the sign to indicate whether the observations showed an excess or a deficit; I am not sure why Phil decided to hide the signs. You see that only the $\gamma\gamma$ i.e. diphoton channel has a deviation greater than two standard deviations. But there are two others that are close to two standard deviations, too: a deficit of ditau events and the W-boson pairs.
On the other hand, there are five channels which is a rather high number: it matches the number of fingers on a hand of a vanilla human. With a high number of entries, you expect some entries to deviate by more than one sigma by chance, don't you? How do we statistically determine how good the overall agreement is?
We will perform the so-called chi-squared test
Because both signs indicate a discrepancy between the Standard Model and the observations, we will square each value of "xsigm" and sum these squares. In other words, we will define\[
Q = \sum_{i=1}^5 {\rm xsigm}_i^2.
\] The squaring is natural for general mathematical reasons; it's even more natural if you realize that ${\rm xsigm}$ are actually quantities distributed along the normal distribution with the standard deviation of one and the normal distribution is Gaussian. It has $-{\rm xsigm}^2/2$ in the exponent.
Fine, so how much do we get with our numbers?\[
\eq{
Q&=0.35^2+2.52^2+1.96^2+1.75^2+0.32^2\\
Q&= 13.48
}
\] The result is approximate. Yes, the signs cancelled here, anyway.
At this moment, we would like to know whether the $Q$ score is too high or too low or just normal. First, it is clearly too high. It's because $Q$ was a sum of five terms and each of them was expected to equal one in average; this "expectation value of ${\rm xsigm}^2$" is what defines the standard deviation. So the average value of $Q$ we should get is $5$. We got $13.48$.
How bad the disagreement is? What is the chance that we get $13.48$ even though the expected average is just five?
Looking at the $\chi^2$ distribution
To answer this question, we must know something about the statistical distribution for $Q$. We know that the average is $\langle Q\rangle=5$ but we need to know the probability that $Q$ belongs to any interval. By definition of the $\chi^2$-distribution, the quantity $Q$ defined as the sum of squares of (thanks, JP) five normally distributed quantities (with a vanishing mean and with the unit standard deviation) is distributed according to the chi-squared distribution. If you prefer mathematical symbols,\[
Q\sim \chi_5^2.
\] The subscript "five" is the number of normally distributed quantities we're summing. This distribution is no longer normal. After all, the negative values are strictly prohibited, something that is unheard of in the world of normal distributions.
The probability density function (PDF) for this distribution – and the PDF is what we normally call "the distribution function" – may be explicitly calculated by composing the five normal distributions in a certain way and computing the resulting integral in spherical coordinates. The result is\[
\eq{
d\,Prob(Q\in(Q,Q+dQ))&= dQ\cdot \rho(Q)\\
\rho(Q) &= \frac{Q^{k/2-1}\exp(-Q/2)}{2^{k/2}\Gamma(k/2)}
}
\] The exponent in the exponential is just $-Q/2$ without squaring because it comes from the Gaussians but $Q$ already includes terms like ${\rm xsigm}^2$. The additional power of $Q$ in the numerator arises from the integral over the four angular spherical coordinates in the 5-dimensional space. The remaining factors including the Gamma-function are needed to normalize $\int\rho(Q)\dd Q=1$.
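As a rough numerical sketch, one can check that this density really is normalized and has mean $k=5$ (trapezoid rule on $[0,80]$; the tail beyond $80$ is negligible here):

```python
import math

# Integrate rho(Q) for k = 5 degrees of freedom: total ~ 1, mean ~ 5.
k = 5

def rho(q):
    return q ** (k / 2 - 1) * math.exp(-q / 2) / (2 ** (k / 2) * math.gamma(k / 2))

h = 0.01
qs = [i * h for i in range(8001)]          # grid on [0, 80]
vals = [rho(q) for q in qs]
total = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
mean = h * sum(q * v for q, v in zip(qs, vals))

print(round(total, 4))   # ~ 1.0
print(round(mean, 3))    # ~ 5.0
```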
Now, the cumulative distribution function (CDF) – the probability that $Q$ is found in the interval $(-\infty,Q)$ for some value of $Q$ (hope you won't confuse the $Q$'s: there's no real potential for confusion here) is an integral of the PDF and it may be expressed in terms of some special function. Then you just substitute the numbers to get the probability.
However, such integrals may be calculated directly if you have Wolfram Mathematica 8. For example, if you ask the system to compute
Probability[q > 5., Distributed[q, ChiSquareDistribution[5]]]
you will be told that the result is 41.588%, not far from the naive "50%" you would expect. The deviation arises because 5 is really the average and that's something else than the "median" which is the value that divides equally likely (50%) intervals of smaller and larger values.
Now, you just modify one number above and calculate
Probability[q > 13.48, Distributed[q, ChiSquareDistribution[5]]]
The result is $0.019$ i.e. $1.9\%$. The probability is about 1-in-50 that the $Q$ score distributed according to this $\chi^2$ distribution is equal to $13.48$ or it is larger. How should you think about it?
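Both Mathematica evaluations can be reproduced without Mathematica. For $k=5$ (odd) degrees of freedom the survival function of $\chi^2_5$ has a closed form, obtained from the recurrence for the regularized upper incomplete gamma function: $P(Q>x) = \mathrm{erfc}(\sqrt{y}) + \frac{2}{\sqrt{\pi}}e^{-y}\left(\sqrt{y}+\tfrac{2}{3}y^{3/2}\right)$ with $y=x/2$. A dependency-free Python sketch, using the "xsigm" values from the table above (`scipy.stats.chi2.sf` would give the same numbers):

```python
import math

# Q score from the five channels, and the chi-squared (df = 5) p-values.
xsigm = [0.35, 2.52, -1.96, -1.75, 0.32]
Q = sum(s * s for s in xsigm)

def chi2_sf_df5(x):
    """Survival function P(Q > x) for chi-squared with 5 degrees of freedom."""
    y = x / 2.0
    return math.erfc(math.sqrt(y)) + (2.0 / math.sqrt(math.pi)) * math.exp(-y) * (
        math.sqrt(y) + (2.0 / 3.0) * y ** 1.5
    )

print(round(Q, 2))        # 13.48
print(chi2_sf_df5(5.0))   # ~ 0.41588, Mathematica's 41.588%
print(chi2_sf_df5(Q))     # ~ 0.019, the 1-in-50 figure in the text
```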
Counting civilizations
Imagine that the Standard Model is exactly correct. There are millions of planets in the Universe. Each of them has an LHC that has accumulated – if we count the combination from $7\TeV$ as well as $8\TeV$ collisions and from the year 2011 as well as 2012 after their own Jesus Christ – about 20 inverse femtobarns of high-energy proton-proton collisions.
There is an Internet on each planet, invented by their own Al Gore. If I have horrified you by the idea of a million of Al Gores, let me mention that there is also The Reference Frame on each planet that has processed the local LHC results and calculated $Q$. However, the collisions were random so they also got a different number of events of various kinds and a different value of $Q$.
The probability $1.9\%$ means that $1.9\%$ of the planets found $Q$ that was at least $13.48$. So if you believe that the Standard Model is correct, you must also believe that by chance, we belong among the top $1.9\%$ of the planets that were just "unlucky" enough because their LHC measured a pretty large deviation from the Standard Model.
Is that shocking that by chance, we belong among the "top $1.9\%$ offenders" within the planetary physics communities? This value of $1.9\%$ is slightly greater than a 2-sigma signal. So you may say that the group of five channels we have looked at provides us with a 2-sigma evidence or somewhat stronger evidence that the Standard Model isn't quite right.
This is too weak a signal to make definitive conclusions, of course. It may strengthen in the future; it may weaken, too. After the number of collisions that is $N$ times larger than those that have been used so far, the number "2" in this overall "2 sigma" will get multiplied roughly by $\sqrt{N}$ if the deviation boils down to Nature's real disrespect for the exact laws of the Standard Model.
However, it will be mostly moving between 0 and 1.5 sigma in the future (according to the normal distribution: a higher spurious confidence level in sigmas would drop there following the function $CL/\sqrt{N}$) if the deviation from the Standard Model we observe and we quantified is due to noise and chance. It's too early to say but the answer will be getting increasingly clear in the near future. For example, if the signal is real, those overall 2 sigma may grow to up to 4 sigma after the end of the 2012 run.
Stay tuned.
Posted by Luboš Motl
Other texts on similar topics: experiments, LHC, mathematics, string vacua and phenomenology
#### snail feedback (18)
reader Mephisto said...
Lubos, are you able to do the statistics better in the medical "sciences"? Do you want to do clinical studies on a 1 million population sample to get your 5-sigma accepted p-level? Who do you think will pay you for that? Nobody will. So we do exactly what you did with the CMS and ATLAS results: we combine the results of several smaller studies to get stronger statistical evidence. It is called meta-analysis.
reader Luboš Motl said...
This is complete bullshit what you're writing, Mephisto.
If you get a 2-sigma signal in a medical study, the sample needed for a 5-sigma signal assuming that the signal was actually real is just 2.5^2 = 6.25 times larger, not one million times larger.
It's enough to multiply the sample by six and you will switch from a 5% risk of a false positive (false discovery) to a 1-in-a-million risk of a false positive.
Who will fund this 6-times-larger samples? Anyone who funds science. Who is funding the smaller samples and scientists who are satisfied with 2-sigma signals? Sponsors of junk scientists, pseudoscientists, and hacks like you who are ignorant about basic science and basic statistics.
reader Smoking Frog said...
a 1 million population sample
Do you mean a sample size of 1 million, or a sample from a population of 1 million? I hope you mean sample size, but it's kinda hard to explain why. :-)
reader Mephisto said...
I do not wish to spam your very interesting blog post about the statistics of collider experiments, so feel free to remove my first comment as well as this second (and last) one. But I think you underestimate it a little. In every university hospital there are professional statisticians who are responsible for the results, and I believe they know what they are doing. Every request for a research grant needs a calculation of the sample size required to provide sufficient statistical power, and this is done by statisticians. (Google something like "Department of medical biostatistics"; you will find one at every university doing research.)
reader Luboš Motl said...
There may be such departments and you may give them pompous names, but they're still not doing a good job, because we still don't know the answers to many common questions about the health impact of various things even though lots of tests have been done.
reader HB said...
If the Kolmogorov–Smirnov test is used, then one gets a p-value of 0.43.
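For readers who want to reproduce such a check: the one-sample Kolmogorov–Smirnov statistic compares the empirical CDF of the five per-channel pulls with the standard normal CDF. The pull values below are invented placeholders rather than the real measured deviations, so the printed statistic is only illustrative:

```python
import math

def normal_cdf(x):
    # Standard normal CDF expressed through the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_statistic(sample, cdf=normal_cdf):
    """One-sample Kolmogorov-Smirnov statistic D against a given CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        # Largest gap between the empirical step function and the CDF
        d = max(d, abs(i / n - f), abs(f - (i - 1) / n))
    return d

# Hypothetical per-channel pulls in sigmas (NOT the real data)
pulls = [-1.0, 0.3, 0.8, 1.2, 1.9]
print(round(ks_statistic(pulls), 3))
```

Converting $D$ into a p-value additionally requires the Kolmogorov distribution (e.g. `scipy.stats.kstest` does both steps at once).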
reader Peppermint said...
Thanks for the expert physics commentary and statistics refresher. I had forgotten what chi-squared was XD
reader Tom Weidig said...
Believe me, those statisticians are remarkably clueless and mostly apply statistics blindly. I have analysed one such study myself, found it to be flawed, and told everyone.
Result: the statistician never replied to me, the medical researchers concerned replied to my arguments with "it's correct because it was done by a statistician", and the others didn't understand statistics, thought it was a minor point, or thought I should be more constructive...
I am convinced that 80-90% of all studies are wrong... and one of the reasons is that bad researchers train bad researchers, who train bad researchers.
reader Dilaton said...
Thanks for this nice reminder of the chi-squared test, Lumo, and I quite like its overall result for the actual deviation from the SM, using these five decay channels :-)
reader NumCracker said...
Dear Lubos, could you please do the same analysis for the first data block presented in the last press release (2011) and compare it channel by channel with the current results?
I would like to know from these calculations whether there is a trend when comparing the old 7 TeV data with the 7 TeV + 8 TeV data sets... maybe it would help us to see a real preliminary deviation from the SM, or just a statistical fluctuation expected to die out by 12/2012.
reader David Nataf said...
Thank you for doing this calculation.
Are you confident that there are no covariances between the different experiments? I don't know how the experiments work, but maybe whatever causes a deficit in one channel causes a surplus in other channels?
reader Luboš Motl said...
Dear Numcracker, you're invited to calculate it with the old numbers if you have them.
The more complete data are clearly superior. You won't learn anything interesting by looking at a weaker dataset. The 2011 collisions are just a random subset of the 2011+2012 collisions.
The word "trend" is misleading because it suggests that one could extrapolate it. But that is complete nonsense, of course. Much of the current deviation is clearly noise, the proportion of noise was even higher after 2011 with a smaller dataset, and noise obviously can't be extrapolated via trends.
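The point that noise cannot be extrapolated, while a real effect makes the significance grow like $\sqrt{N}$, is easy to see in a toy Monte Carlo (the shift of 0.2 sigma per event is an arbitrary illustrative choice, not anything measured):

```python
import math
import random

random.seed(1)  # fixed seed so the toy numbers are reproducible

def z_score(n_events, true_shift):
    # Z-score of the sample mean of n_events draws from N(true_shift, 1).
    total = sum(random.gauss(true_shift, 1.0) for _ in range(n_events))
    return total / math.sqrt(n_events)

for n in (100, 400, 1600):
    print(n,
          round(z_score(n, 0.0), 2),   # pure noise: wanders around O(1)
          round(z_score(n, 0.2), 2))   # real shift: grows like 0.2*sqrt(n)
```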
reader Luboš Motl said...
A repeatable negative correlation between CMS and ATLAS? Very exciting proposal. How would this miracle exactly occur? ;-)
I can imagine positive correlations. A part of the systematic errors are shared by the whole LHC. Most of the error here is statistical, I hope, and it is not correlated between CMS and ATLAS.
Some collisions are measured by the CMS device and methods, some by ATLAS, but this separation may be completely removed. Just think about it.
reader Dilaton said...
... but only CMS has a TD which can negatively influence their results ... :-P
reader JP said...
In "the quantity Q defined as the sum of five normally distributed quantities", the words "squares of" are missing.
reader Luboš Motl said...
Thanks, fixed, a small credit to you added.
reader NumCracker said...
Dear Lubos, after reading your post one tends to believe new particles are right around the corner, as proposed today in arXiv:1207.1445.
However, it may be that the LHC's data analysis was done in too naive a way. Consider, for instance, arXiv:1207.1451 for QCD PDF corrections.
What is your opinion/bet on this issue? Is this gamma-gamma signal real, or is it just the result of the usual suspects?
reader Luboš Motl said...
Dear NumCracker, the next blog entry (newer by one) is about 1207.1445.
I am completely uncertain - literally 50 to 50 - on whether or not the deviations are compatible with the Standard Model and whether they have to go away after more complete data or more precise predictions.
If the deviations are real, it's rather hard to get new physics that is compatible with the suppressed diphoton channel etc., as argued in 1207.1445 and probably in other papers to emerge soon.